Marked Increases in Alpha Power Over the Left Prefrontal Region During Days Following Shift Work: A Case Report
Abstract

Quantitative electroencephalographic (QEEG) measurements were completed for a 35-year-old paramedic following two to five days of shift change and rest periods. The most conspicuous and reliable change was a marked increase (factor of 5) in power within the alpha band over the left prefrontal region and, to a lesser degree, increased power within the low-beta band over the right parietal region during the test periods after no-work days. These results indicate that the regions of the cerebral cortices associated with self-monitoring and spatial vigilance are most affected by a schedule that involves serial shifts in sleep schedule, but that the structure of employment may attenuate the most significant changes that occur after days of rest. These results are consistent with previous observations that changing sleep and work schedules affects the activity of regions of the human brain that are essential for awareness of spatial content and reasoning.

Introduction

Disequilibrium of circadian rhythms secondary to coerced shifts in work schedules has been known to adversely affect normal sleep characteristics and to increase the probability of diminished vigilance during the days that follow these shifts. Industrial accidents have been shown to occur more frequently during the one or two days after the schedule shift [1]. There is copious literature [2-4] showing the effects of schedule shifts on adverse reactions in populations of shift workers.

Considering the types of behaviours that increase the likelihood of an accident after a shift in sleep schedule, the most likely region of the human brain to be involved would be the prefrontal lobe, particularly within the left hemisphere. This region is involved with self-monitoring, planning, and inhibitory behaviours [5,6]. The development of modern quantitative electroencephalographic (QEEG) equipment and available software has allowed easy, reliable measurements. Here we present conspicuous disruptions in the stability of cerebral activity within the left prefrontal region for a paramedic who was required to maintain a shifting work schedule.

Methods
The subject who volunteered for this "longitudinal" study was a 35-year-old man who reported he had not experienced or sustained any significant medical problems within the previous decade. He had been trained as a paramedic and had begun the traditional procedure of variable shift work as required by the local employer, with no prior exposure to shift-working schedules. Previously, his typical sleep interval was between local midnight and about 06 hr.

Written consent was obtained from the subject. For each session he was seated in a comfortable chair housed within a darkened, quiet room (an acoustic chamber) and was fitted with a 10-20 EEG cap for about 30 min per session on nine (9) separate occasions over a 25-day period. The days were: Day 1: baseline; Day 2: after 2 days of day shift (7 AM to 7 PM); Day 3: after 5 days of night shift (7 PM to 7 AM); Day 4: after 2 days of no work; Day 5: after 2 days of night shift; Day 6: after 5 days of day shift; Day 7: after 2 days of no work; Day 8: after 2 days of day shift; and Day 9: after 5 days of night shift (Figure 1).

The procedure was the same for each of the 9 data collection days. QEEG was recorded continuously at 250 Hz (250 samples per second) during the following sequence: 1: 5 min eyes open; 2: 5 min eyes closed; 3: 60 s of slow breathing through the nose; 4: 60 s of normal breathing; 5: 60 s of a medium rate of breathing through the nose; 6: 60 s of normal breathing; 7: 30 s of hyperventilation through the nose; and 8: 60 s of normal breathing. At the end of each QEEG episode the short version of the POMS (Profile of Mood States) was administered.
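The paper reports band power values (in µV²) from these recordings but gives no analysis code. Purely as an illustration, the sketch below shows how band power of the kind reported in the Results could be computed from a 250 Hz recording using Welch's method; the placeholder signal and function names are hypothetical, not from the study.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate reported in the Methods (Hz)

def band_power(signal, fs=FS, band=(7.5, 14.0)):
    """Mean power spectral density within a frequency band via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)  # 2 s windows -> 0.5 Hz resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Hypothetical 5 min eyes-open segment from channel F3 (white-noise placeholder)
rng = np.random.default_rng(0)
f3 = rng.standard_normal(FS * 300)
print(band_power(f3, band=(7.5, 14.0)))   # alpha (7.5-14 Hz), as defined in the paper
print(band_power(f3, band=(14.0, 20.0)))  # beta1 (14-20 Hz)
```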
Results
An ANOVA revealed differences in average Alpha (7.5-14 Hz) power over F3 (left prefrontal cortex) during baseline conditions with the participant's eyes open across preceding shift type, F(2,7) = 9.10, p < .05, Ω² = .78. Post-hoc analyses revealed that the major source of variance was an increase in Alpha power after 2 days of no work (M = 11.07, SEM = .35) relative to values obtained 2 days after day shifts (M = 2.68, SEM = .08), t(3) = -30.18, p < .001. The graphic representation of this effect can be seen in Figure 2.
An ANOVA revealed differences in average Beta1 (14-20 Hz) power over P4 (right parietal cortex) during baseline conditions with the participant's eyes closed across preceding shift type, F(2,7) = 9.36, p < .05, Ω² = .79. Post-hoc analyses revealed that the major source of variance was an increase in Beta1 power after 2 days of no work (M = 1.67, SEM = .02) relative to values obtained 2 days after day shifts (M = .61, SEM = .03), t(3) = -23.02, p < .001. After a Bonferroni correction where α = 0.002, these results represent the only statistically significant differences as a function of preceding shift type during the eyes closed baseline condition. These results can be visualized in Figure 3.
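The Ω² effect sizes reported with these F tests are not defined in the text. For a one-way ANOVA, a conventional omega-squared estimator (a standard formula assumed here, not quoted from the paper) is:

$$ \omega^{2} = \frac{SS_{\mathrm{between}} - (k-1)\,MS_{\mathrm{within}}}{SS_{\mathrm{total}} + MS_{\mathrm{within}}} $$

where k is the number of groups (here, the three preceding shift types).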
Bivariate, non-parametric correlational analysis of Alpha power over F3 during the eyes open baseline condition (n = 9) and Beta1 power over P4 during the eyes closed baseline condition (n = 9) revealed a strong, positive relationship, r = .98, p < .001; rho = .833, p < .005. These results can be visualized in Figure 4. When examining analogous channels overlying contralateral cerebral structures (i.e., F4 and P3), and correlating spectral power values for the same bands within the same conditions described above, a similar positive relationship was identified, r = .74, p < .04; rho = .85, p < .005. However, Fisher's z-test revealed a statistically significant difference between the Pearson r values resulting from the two correlational analyses (z = 2.24, p < .05), indicating the strength of the former association was greater than that of the latter.
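Fisher's z-test for comparing two independent correlations is a standard procedure; a minimal sketch is given below, using the values reported in the text. The function name is illustrative; small differences from the reported z = 2.24 likely reflect rounding of the correlations.

```python
import math

def fisher_z_compare(r1, n1, r2, n2):
    """Compare two independent Pearson correlations via Fisher's r-to-z transform."""
    z1 = math.atanh(r1)  # Fisher transform of first correlation
    z2 = math.atanh(r2)  # Fisher transform of second correlation
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of the difference
    return (z1 - z2) / se

# Values reported in the text: r = .98 (F3-P4) and r = .74 (F4-P3), n = 9 sessions each
print(fisher_z_compare(0.98, 9, 0.74, 9))  # ~2.33; the paper reports z = 2.24
```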
The specificity of the F3-P4 relationship is emphasized when examining the relationship between global Alpha power during the eyes open baseline condition and global Beta1 power during the eyes closed baseline condition, which are demonstrably unrelated, r = .09, p > .05; rho = .37, p > .05. These results can be visualized in Figure 5.
Discussion
There has been a general consensus based upon epidemiological data and clinical practice that some people who engage in shift work display alterations in vigilance and increased numbers of behaviours that can contribute to accidents [7,8]. The results of the present case study suggest that the changes in quantitative electroencephalographic power that occurred over the subject's left prefrontal and right parietal regions may support these behaviours. While the latter is associated with vigilance and relative spatial relationships [9,10], which are important for navigating the social world, the former is associated with self-monitoring of behaviour, organization of thoughts that result in subsequent overt behaviours, and the encoding of experiences into verbal labels [11,12]. That the results are not an artefact of serial measurements and habituation to the measurement setting is indicated by the dominance of the effect following two days of rest. Only the alpha band reflected this component of the shifts in sleeping schedule. The marked increase in alpha power with no changes in the other EEG bands reflects the specificity of influence from altering sleep schedules. Such a relative increase in power within the alpha band within the left prefrontal region would be consistent with less proficient activity within this region that could contribute to less accurate self-monitoring and inhibition of distraction, particularly within spatial contexts. The markedly reduced magnitude of the right prefrontal and left parietal (the mirror image) strength of association emphasizes the hemispheric lateralization of the effect and reiterates the strong coherence between self-monitoring and input that relates to the spatial-affective environment. It may be relevant that transient increases in alpha power of this magnitude over the left prefrontal region and right parietal region could be expected to increase the subject's proclivity to suggestion [13] and the likelihood of increased self-attribution of these experiences.
Figure 1: The typical work schedule of the paramedic. The asterisks indicate the days on which EEG records were obtained. Dark blue indicates days of night shift, light blue indicates days of day shift, and yellow indicates days of no work.
Figure 3: Conspicuous increases in Beta1 (14-20 Hz) power over the P4 channel as a function of time, coded by shift type. Values represented were obtained during baseline (B) measurement, after day shifts (D), after night shifts (N), and after a day off (O).
Figure 4: Scatter plot demonstrating the strong relationship between standardized Alpha (7.5-14 Hz) power over F3 during the eyes open baseline condition (Y-axis) and standardized Beta1 (14-20 Hz) power over P4 during the eyes closed baseline condition (X-axis).
"year": 2015,
"sha1": "388ead90c69ee4d94a6dfd62a599f6831395b603",
"oa_license": "CCBY",
"oa_url": "https://clinmedjournals.org/articles/jsdm/journal-of-sleep-disorders-and-management-jsdm-1-002.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "388ead90c69ee4d94a6dfd62a599f6831395b603",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Enhancing the Photovoltaic Performance of Perovskite Solar Cells Using Plasmonic Au@Pt@Au Core-Shell Nanoparticles
Au@Pt@Au core-shell nanoparticles, synthesized through chemical reduction, are utilized to improve the photoelectric performance of perovskite solar cells (PSCs) in which carbon films are used as the counter electrode and no hole-transporting layer is used. After a series of experiments, these Au@Pt@Au core-shell nanoparticles were optimized and demonstrate outstanding optical and electrical properties due to their localized surface plasmon resonance and scattering effects. PSC devices containing 1 wt.% Au@Pt@Au core-shell nanoparticles have the highest efficiency; this is attributable to their significant light trapping and utilization capabilities, which are the result of the distinctive structure of the nanoparticles. The power conversion efficiency of PSCs with the optimal content of plasmonic nanoparticles (1 wt.%) increased by 8.1% compared to normal PSCs, from 12.4% to 13.4%; their short-circuit current density also increased by 5.4%, from 20.5 mA·cm−2 to 21.6 mA·cm−2. The open-circuit voltages remained essentially unchanged. When the number of Au@Pt@Au core-shell nanoparticles in the mesoporous TiO2 layer increases further, the photovoltaic parameters show a downward trend due to the recombination of electrons and holes, as well as the decrease in electron-transporting pathways.
Introduction
With increasing energy demands and decreasing availability of fossil fuels worldwide, the development of sustainable energy is one of the most urgent tasks for mankind. Perovskite solar cells (PSCs) are expected to be efficient and low-cost photovoltaic devices for sustainable energy. This is due to their prominent optoelectronic properties, including strong light absorption capacity, long carrier transport length, and low carrier recombination loss [1-3]. As a result, since their first use in dye-sensitized solar cells (DSSCs) in 2009, PSCs have attracted a great deal of enthusiasm from researchers worldwide [4-13]. At present, the maximum photoelectric conversion efficiency of PSC devices has climbed from 3.9% to more than 24%, which is close to the maximum efficiency of single-junction silicon-based photovoltaic devices. Furthermore, PSC devices can be prepared using solution methods with low processing costs and low working temperatures [6,14,15]. The convenience and low cost of fabricating PSC devices make their commercialization possible.
PSCs are usually described by the general formula ABX3, in which the A-site is usually an organic cation or a metal cation such as CH3NH3+ (methylammonium).
Synthesis of Au@Pt@Au Core-Shell NPs
NPs can be prepared via mechanical grinding, ultraviolet irradiation, chemical reduction, and photochemical methods, among others [41-43]. Among these, chemical reduction was the simplest to conduct and had minimal equipment requirements. NPs of different sizes and shapes can be prepared by changing the reaction conditions; however, impurities were easily introduced during these reactions. It was therefore necessary to wash the obtained nanoparticles repeatedly to ensure suitable purity for later use.
In this study, we used a chemical reduction method to prepare Au@Pt@Au NPs. First, an aqueous solution of HAuCl4 (2.94 × 10−4 M, 50 mL) was brought to a boil while being stirred continuously with a magnetic stirrer. An aqueous solution of Na3C6H5O7 (3.88 × 10−2 M, 1.25 mL) was then added to the HAuCl4 solution. After 20 min, the color of the mixture turned bright red, showing that Au NPs had been successfully synthesized. Second, an aqueous solution of AgNO3 (5.88 × 10−3 M, 3 mL) was added dropwise to the mixture, and an aqueous solution of Na3C6H5O7 (3.88 × 10−2 M, 0.75 mL) was mixed into it immediately. After 1 h, an aqueous solution of H2PtCl6 (1.95 × 10−1 M, 0.08 mL) was added, and the solution was stirred quickly for 20 min. After the colloidal solution had cooled, it was washed with moderate amounts of ultrapure water for several cycles. The obtained product was diluted to 50 mL with ultrapure water and brought to a boil. An aqueous solution of AgNO3 (5.88 × 10−3 M, 2.4 mL) and an aqueous solution of Na3C6H5O7 (3.88 × 10−2 M, 0.6 mL) were then added to the resulting product and stirred intensely with a magnetic stirrer for 1 h. After this, an aqueous solution of HAuCl4 (2.94 × 10−4 M, 0.3 mL) and an aqueous solution of Na3C6H5O7 (3.88 × 10−2 M, 0.3 mL) were added simultaneously to the colloidal solution, which was then stirred quickly for 20 min; the Au@Pt@Au NPs were finally formed. After the reaction solution had cooled down, it was washed with ultrapure water for several cycles and dried in a drying cabinet for 24 h.
Cell Fabrication
The basic structure of the PSCs prepared in this study was FTO conductive glass/TiO2 dense film/TiO2 film/ZrO2 film/CH3NH3PbI3 film/carbon film. Before preparing the PSCs, the FTO conductive glass was cleaned, and the TiO2 dense film precursor solution, TiO2 film colloidal solution, ZrO2 film colloidal solution, and CH3NH3PbI3 precursor solution were prepared. The FTO conductive glass was sequentially cleaned with ultrapure water, dimethyl ketone, isopropyl alcohol, and an ultrasonic ethanol treatment. The TiO2 dense layer prefabricated liquid was obtained by mixing 1 mL titanium diisopropoxide bis(acetylacetonate) with 19 mL ethanol, while the TiO2 and ZrO2 mesoporous layer colloidal solutions were prepared by adding 2 g ethyl alcohol to 0.5 g of TiO2 or ZrO2 sizing agent. Then, 1-3 mg Au@Pt@Au NPs were added to the TiO2 mesoporous layer colloidal solution, which was ultrasonicated for 30 min and stirred for 48 h. The perovskite precursor solution consisted of 231 mg PbI2, 89 mg (CH3NH3)I, 300 mg DMF, and 78 mg DMSO.
The methods available for the fabrication of PSC devices include spin-coating, vapor deposition, and ultrasonic spraying, among others [44-48]. In this study, we utilized a spin-coating method. First, 35 µL of the TiO2 compact layer prefabricated liquid was deposited on the glass by spin-coating at a speed of 4000 r/min for 20 s, followed by 30 min of annealing at 500 °C. Then, this step was repeated for the TiO2 (with Au@Pt@Au or Au NPs) and ZrO2 mesoporous layer colloidal solutions. The CH3NH3PbI3 film was prepared in an airtight box filled with nitrogen. First, 35 µL of CH3NH3PbI3 solution was spin-coated on the substrate at 1000 r/min for 10 s and 4000 r/min for 15 s, followed by 10 min of annealing at 100 °C. During this spin-coating, 300 µL of methylbenzene was added quickly in order to improve the quality of the film being formed [45]. Finally, a carbon film was obtained by using a screen-printing board and heating at 100 °C for 30 min.
Characterization
A transmission electron microscope (TEM; JEOL, Tokyo, Japan) was used to observe the ultrastructure of the NPs. X-ray diffraction (XRD; AXS, Los Angeles, CA, USA) was utilized to investigate the phases of the as-prepared samples, while X-ray photoelectron spectroscopy (XPS; Thermo Fisher Scientific, Waltham, MA, USA) was used to assess the binding energies of the elements in the samples. The absorption curves were collected using a UV-vis spectrophotometer (UV3600, Shimadzu, Kyoto, Japan). Cross-sections of the PSC samples were imaged using a scanning electron microscope (SEM; JEOL, Tokyo, Japan). The photoelectric properties of the PSCs were evaluated based on photocurrent-voltage (J-V) curves recorded on an electrochemical workstation (ZAHNER-elektrik GmbH and Co. KG, Kronach, Germany) under simulated solar light (Oriel Sol3A, Newport Corporation, Irvine, CA, USA). The measurements were carried out from -1.1 V to the short-circuit voltage at a scan rate of 150 mV/s under air mass (AM) 1.5 G irradiation (100 mW/cm2) in ambient air. Incident photon-to-electron conversion efficiency (IPCE) curves were acquired with a device produced by the Newport Corporation, USA, and the photoelectric current of the sample cells was also analyzed under dark conditions in ambient air.
Results and Discussion
The process of preparing Au@Pt@Au NPs is shown in Figure 1. First, Na3C6H5O7 was used to transform Au ions in HAuCl4 into Au NPs. Then, an Ag shell was prepared on the surfaces of the Au NPs; this was subsequently replaced by a Pt shell. Finally, another Ag shell was synthesized on the surfaces of the Au@Pt core-shell NPs and was then replaced with several small Au spheres around the Au@Pt core-shell NPs. This yielded Au@Pt@Au core-shell NPs.

To investigate the morphology of these NPs, TEM and high-resolution TEM (HRTEM) were used to determine their size, shape, and structure. As depicted in Figure 2a,c,d, Au NPs, Au@Pt core-shell NPs, and Au@Pt@Au core-shell NPs were scattered uniformly in deionized water according to the bar graphs given in these images. The radii of the Au NPs were approximately 15 nm, while those of the Au@Pt core-shell NPs were nearly 18 nm, which indicated that the Pt shells were approximately 3 nm thick. The nucleation sizes of the Au@Pt@Au core-shell NPs were almost equal to those of the Au@Pt core-shell NPs; the difference was that the Au@Pt@Au core-shell NPs were surrounded by several small Au spheres (with radii of approximately 5 nm). Figure 2b shows an enlarged image of an Au NP, in which the 2.36 Å lattice fringes can be observed clearly and correspond to the Au (111) crystal plane [49]. Several small Au spheres, delineated by the red trajectories in Figure 2e, were distributed on the surface of an Au@Pt core-shell NP. The three-dimensional model of an Au@Pt@Au core-shell NP in Figure 2f corresponds to the NPs in Figure 2d,e.

Figure 3a shows the UV-visible absorption curves of Au NPs, Au@Pt core-shell NPs, and Au@Pt@Au core-shell NPs dispersed in deionized water. The LSPR absorption peak of the Au NPs (r = 15 nm) appeared at approximately 520 nm. The absorption peak of the Au@Pt core-shell NPs had a blue shift of approximately 20 nm compared to the Au NPs, which could be ascribed to the shielding effect of the Pt shell. The blue curve, representing the optical absorption spectrum of the Au@Pt@Au core-shell NPs, had two peaks, at 380 nm and 600 nm, respectively. The absorption peak at 380 nm was generated by the inner Au@Pt core-shell NPs, while the peak at 600 nm corresponded to the small, outermost Au spheres. The black line in Figure 3b represents the XRD pattern of the powder obtained by calcining the TiO2 mesoporous layer colloidal solution at 500 °C.
The diffraction angles observed at 25°, 38°, and 48° corresponded to the (101), (004), and (200) crystal planes of anatase-phase TiO2, respectively. However, the red line, depicting the sample mixed with Au@Pt@Au core-shell NPs, was almost identical to the black one. There were no distinct characteristic peaks of Au and Pt in the XRD pattern; this may be because of the low content of Au@Pt@Au core-shell NPs in the TiO2 mesoporous layer colloidal solution.

XPS was used to investigate the elemental compositions and chemical states of the mesoporous TiO2 samples with or without Au@Pt@Au core-shell NPs. Figure 4a,b reveal the photoelectron energies of the Ti 2p and O 1s of the mesoporous TiO2 samples and the mesoporous TiO2 samples incorporating Au@Pt@Au core-shell NPs. The peaks of the red lines were almost consistent with the black ones, which implied that the chemical states of the Ti and O atoms did not change after the Au@Pt@Au core-shell NPs were incorporated. As depicted in Figure 4c,d, four characteristic peaks were found at 83.5 eV, 87.5 eV, 71.5 eV, and 74.5 eV, which could be indexed to Au0 and Pt0, respectively, suggesting that both Au0 and Pt0 were present in the samples [50-52].

Figure 5a,b show the structure and SEM cross-section images of an entire mesoporous PSC. From the bottom of the cell up, the layers are the substrate, TiO2 dense film, TiO2 film mixed with Au@Pt@Au core-shell NPs, ZrO2 film, CH3NH3PbI3 film, and carbon film. The carbon film assembled via silk-screen printing was 30 µm thick, which was far thicker than the other layers; therefore, the device characterized by SEM did not include the electrode film. The TiO2 compact layer was usually 20 nm, but this was not clear enough to be visible in Figure 5b.

To further explore the optical properties of the PSC devices, UV-vis curves were recorded in order to investigate the light absorption capacity of the devices that incorporated different quantities of Au@Pt@Au core-shell NPs. As described in Figure 6a, the intensity of the UV-vis absorption spectra increased gradually, particularly at 400-600 nm, as the Au@Pt@Au NP load increased. This was consistent with the absorption curve of pure Au@Pt@Au core-shell NPs. Figure 6b shows a schematic of the LSPR and the scattering effects in the PSCs. When the vibrational frequency of the photons matched well with the resonance frequency of the Au@Pt@Au core-shell NPs, an intense local electromagnetic field was generated around the NPs, which could cause bandgap excitation in the nearby TiO2 and generate more electron-hole pairs in the cell [25,53,54]. Furthermore, Au@Pt@Au core-shell NPs display excellent scattering because of the outermost small Au spheres, which could ensure that the light transmitted back and forth is more efficiently utilized by the device. A high temperature would affect the LSPR effect of plasmonic NPs; however, the enhancement seen in the absorbance spectra was related to the LSPR effect and the scattering effect of the NPs, and Figure 6a shows that these effects still played a role in the TiO2 layers after annealing.
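The LSPR mechanism invoked here can be stated quantitatively for small metal spheres. In the quasi-static dipole approximation (a standard electromagnetics result, not taken from this paper), a sphere much smaller than the wavelength resonates when the metal's dielectric function satisfies the Fröhlich condition with respect to the surrounding medium:

$$ \mathrm{Re}\left[\varepsilon_{\mathrm{metal}}(\omega_{\mathrm{LSPR}})\right] = -2\,\varepsilon_{\mathrm{medium}}. $$

Because the dielectric constant of the surroundings enters directly, a shell material or an embedding matrix such as TiO2 shifts the resonance wavelength, which is qualitatively consistent with the peak shifts described above for the Pt shell and the outer Au spheres.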
The J-V curves and parameters of cells incorporating different amounts of Au@Pt@Au core-shell NPs are presented in Figures 7 and 8 and Table 1, respectively. These parameters were tested with an electrochemical workstation at 25 °C in air. As the concentration of Au@Pt@Au core-shell NPs varied from 0 wt.% to 1.5 wt.%, both the short-circuit current densities (JSC) and the PCEs first increased and then decreased. The enhancements could be ascribed to the incorporation of Au@Pt@Au core-shell NPs, which acted as centers of strong electromagnetic fields produced by the LSPR effect and as scattering centers in the cells, thereby improving the light utilization rate. The subsequent diminution might be ascribed to the recombination of electrons and holes on the surface of the Au@Pt@Au core-shell NPs, and could also be attributed to the decrease in electron-transporting pathways with increased Au@Pt@Au core-shell NP loading [55,56]. Compared to the reference cells, the performance parameters of the devices containing 1 wt.% Au@Pt@Au core-shell NPs improved by 8.1% in terms of PCE, from 12.4% to 13.4%, and by 5.4% in JSC, from 20.5 mA·cm−2 to 21.6 mA·cm−2, while the open-circuit voltages (VOC) were essentially unchanged. In addition, there was a slight decline in the fill factors (FFs), which could be attributed to the increase in electronic traps after the incorporation of excess Au@Pt@Au core-shell NPs.
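As a consistency check, the quoted efficiency figures obey the standard photovoltaic relation (a textbook definition, not stated explicitly in the paper):

$$ \mathrm{PCE} = \frac{J_{SC}\,V_{OC}\,\mathrm{FF}}{P_{\mathrm{in}}}, \qquad P_{\mathrm{in}} = 100~\mathrm{mW\,cm^{-2}}\ (\mathrm{AM~1.5G}). $$

The relative changes are mutually consistent: 13.4/12.4 ≈ 1.081 (the 8.1% PCE gain) and 21.6/20.5 ≈ 1.054 (the 5.4% JSC gain); with VOC essentially unchanged, the remaining ~2.6% is absorbed by small shifts in FF between the reference and 1 wt.% devices.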
The hysteresis index could be calculated using the formula of [57] (a reconstruction of this formula is sketched below). According to Formula (1), the hysteresis index with and without plasmonic NPs was 0.089 and 0.084, respectively. This result indicated that the hysteresis effect changed only a little with the loading of plasmonic NPs in these architecture-based PSCs. Figure 8 describes the box charts for the photovoltaic parameters of devices based on mesoporous TiO2 films mixed with 0-2 wt.% Au@Pt@Au core-shell NPs, which indicated that the photoelectric properties of the cells remained steady.
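The text cites Formula (1) from [57] without reproducing it. A commonly used definition of the hysteresis index, offered here as a plausible reconstruction under the assumption that [57] uses the J-V form evaluated at 0.8 VOC, and not as the authors' confirmed equation, is:

$$ \mathrm{HI} = \frac{J_{RS}(0.8\,V_{OC}) - J_{FS}(0.8\,V_{OC})}{J_{RS}(0.8\,V_{OC})} \tag{1} $$

where J_RS and J_FS are the current densities measured on the reverse and forward voltage scans, respectively.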
External quantum efficiency spectra were measured to further study the photoelectric conversion capacity of the cells. Figure 9 shows the IPCE spectra and integrated current density curves of the PSC devices mixed with Au@Pt@Au core-shell NPs. The trend in these IPCE spectra was consistent with the J-V characteristics described earlier, which indicated that the PSC devices containing 1 wt.% Au@Pt@Au core-shell NPs had the best light trapping and utilization capability of all the devices described in this study. When the concentration of Au@Pt@Au core-shell NPs increased past this optimal value, the enhancements from the effects of LSPR and scattering were not enough to offset the decrease caused by the recombination of electron-hole pairs near the electron traps. Hence, it was of great importance to control the amount of NPs in order to obtain high-efficiency light harvesters. The integrated current density curves were acquired by integrating the IPCE spectra in the left-hand part of Figure 9. The integral results were slightly lower than the JSC values in Table 1, which may have been influenced by the equipment and the test environment.
Conclusions
We have adopted a chemical reduction method to prepare Au@Pt@Au core-shell NPs to be mixed into the TiO2 mesoporous layer of PSCs. TEM images, optical absorption spectra, XRD patterns, and XPS spectra were used to characterize the physicochemical properties of the Au@Pt@Au core-shell NPs. Furthermore, SEM cross-section images, UV-vis absorption spectra, J-V characteristics, histograms, and IPCE curves were used to investigate the photoelectric performance of the PSC cells with different concentrations of Au@Pt@Au core-shell NPs. PSC devices containing 1 wt.% Au@Pt@Au core-shell NPs had the best photovoltaic performance, which was ascribed to the LSPR and scattering effects of the NPs. Nevertheless, when excess Au@Pt@Au core-shell NPs were mixed into the devices, i.e., when Au@Pt@Au core-shell NP loading increased, the photovoltaic parameters decreased due to the recombination of electrons and holes on the surface of the Au@Pt@Au core-shell NPs and the decrease in electron-transporting pathways.
"year": 2019,
"sha1": "1c799619f0f93179bd65cc7256f07db2583ca1a8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/9/9/1263/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5208320db4c7a882bf69d0d9fe9460b35ba148b3",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
Composition and Diversity of Over-Wintering Aquatic Bird Community on Poyang Lake, China
The present study aimed to investigate the structure, composition and diversity of the over-wintering aquatic bird community of Poyang Lake, China, including Poyang Lake National Nature Reserve (PNNR), Nanji National Nature Reserve (NNNR) and Duchang Provincial Nature Reserve (DPNR). After a preliminary survey, birds were surveyed from vantage points at each study site between the years 2016 and 2020 in the winter season. A total of 58 bird species belonging to nine orders and 13 families were observed. The study showed variation in effective species numbers (species richness, Shannon's diversity and Simpson's diversity) among the three study sites and the survey years. Nanji National Nature Reserve had the highest avian diversity, whereas Duchang Provincial Nature Reserve had the lowest. Globally threatened bird species, the Siberian Crane (critically endangered) and the Oriental Stork (endangered), were found in our study sites. However, the current management practices of the nature reserves and the conservation of these globally threatened bird species are inadequate, especially in Duchang Provincial Nature Reserve. Therefore, long-term conservation of birds in these areas requires continuing intentional improvement of the sites and awareness creation in the local community.
Introduction
Poyang Lake is the largest freshwater lake in East Asia [1] and is of global importance for conserving the migratory aquatic birds of the East Asian-Australasian Flyway [2,3]. It is connected to the Yangtze River and lies on the northern border of Jiangxi Province. The main rivers that drain into Poyang Lake (Ganjiang, Fuhe, Xinjiang, Raohe and Xiushui) discharge into the Yangtze River through a narrow outlet in the north [4,5]. Among the five major rivers, the Ganjiang is the largest in the region, extending 750 km and contributing almost 55% of the total discharge into Poyang Lake [1]. In addition to the main tributaries that drain into the lake, a seasonal reverse-flow system also contributes significantly to the complexity of its yearly hydrological variation [6,7]. This variation, both within and among years, directly contributes to the large biomass of plant life [5,8], which provides a wide range of foraging options for many aquatic bird species [2,9-11].
Aquatic birds are species that depend entirely on wetlands for a variety of activities such as foraging, loafing and molting [7,12]. Poyang Lake is a significant global biodiversity area that harbors more than 400,000 aquatic birds belonging to about 87 species [3,13,14] in the winter season; for instance, geese and swans are the most abundant aquatic birds found on Poyang Lake, followed by shorebirds. During summer, Poyang Lake is covered by water and flood, while in winter the water recedes, exposing rivers, channels and smaller sub lakes. Sub lakes, which play an essential role in aquatic bird conservation, are mainly located in the western and southern parts of Poyang Lake [10]. Therefore, to conserve the wetland ecosystem of Poyang Lake and its endangered migratory birds, the Chinese government has established two National Nature Reserves and four Provincial Nature Reserves. These are Poyang Lake National Nature Reserve (denoted hereafter as PNNR), Nanji National Nature Reserve (denoted hereafter as NNNR), Duchang Provincial Nature Reserve (denoted hereafter as DPNR), Baishazhou Provincial Nature Reserve, Kangshan Provincial Nature Reserve and Qingfeng Provincial Nature Reserve [14,15].
Among the National and Provincial Nature Reserves located in the western, southwestern and northeastern parts of Poyang Lake, PNNR, NNNR and DPNR have high aquatic bird richness and abundance and a high proportion of IUCN-listed endangered species [16]. Thus, they are important areas for aquatic bird protection [16,17]. During the winter season in particular, these areas serve as stopover sites for many migratory birds. Hence, understanding species composition and abundance patterns among the sub lakes of Poyang Lake is crucial for the conservation of aquatic birds.
Biodiversity measurement and assessment is an active research focus of ecology [18,19]. Richness and abundance estimates are two of the simplest ways to describe biodiversity and are essential to consider when assessing any ecosystem [20]. They are also used to generate more complex ecological indices [21], including Hill numbers. Species richness features significantly in foundational models of community ecology [22,23] and is a crucial metric in conservation biology [24,25]. Despite its intuitive and universal application, however, species richness is a problematic index of biodiversity: it is sensitive to sampling intensity and ignores relative species abundance. Hill numbers overcome many of the shortcomings of the traditional diversity measures [18].
Previously, some scholars have studied the aquatic birds of Poyang Lake, but most of their studies focused on long-term trends and a limited set of species, especially cranes [26,27]. Similarly, they used traditional methods to measure and assess biological diversity. Therefore, there is a need to understand more about the composition and diversity of the aquatic bird community over a longer time scale [15,28], using diversity measures different from the traditional ones. Thus, this study intended to provide the current composition and diversity of wintering aquatic bird species in three representative areas of Poyang Lake (i.e., PNNR, NNNR and DPNR). Additionally, the Hill numbers biodiversity measure was used instead of the traditional diversity measures (species richness, Shannon index, Simpson index) [18,29]. Hill numbers are a mathematically unified family of biological diversity indices that integrate relative abundance and species richness, which facilitates the precise comparison of diversity [30].
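For reference, the Hill number of order q (the standard definition, which the paper cites but does not write out) for the relative abundances p_i of S species is:

$$ {}^{q}D = \left(\sum_{i=1}^{S} p_i^{\,q}\right)^{1/(1-q)} \quad (q \neq 1), \qquad {}^{1}D = \lim_{q \to 1} {}^{q}D = \exp\left(-\sum_{i=1}^{S} p_i \ln p_i\right). $$

Setting q = 0 gives species richness, q = 1 the exponential of Shannon's index, and q = 2 the inverse of Simpson's concentration, matching the three orders used in this study.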
Study Area Description
Poyang Lake, the largest freshwater lake in China, is located on the south bank of the Yangtze River in Jiangxi Province, between 28°24′-29°46′ N and 115°49′-116°46′ E (Figure 1), covering an area of approximately 4000 km2 [1]. This study was conducted in three nature reserves of Poyang Lake, namely PNNR, NNNR and DPNR (Table 1), located in the western, southwestern and northeastern parts of Poyang Lake, respectively. They support a high proportion of globally threatened species, such as the critically endangered Siberian Crane (Grus leucogeranus), the endangered Oriental Stork (Ciconia boyciana), and the vulnerable Swan Goose (Anser cygnoides) and White-naped Crane (Grus vipio) [6,31,32]. The topography of the Poyang Lake catchment varies from high mountainous regions (maximum elevation of about 2200 m above sea level) to alluvial plains in the lower reaches of the primary watercourses. Poyang Lake has a humid subtropical climate with an annual average temperature of 16.7-17.7 °C and average annual precipitation of 1400-1900 mm [2]. The wetland vegetation of Poyang Lake is dominated by Carex spp., Phragmites australis, Potamogeton spp. and Polygonum spp., which are essential food sources for various birds [9].
Methods
The total area of Poyang Lake (4000 km2) is covered by water and flood during summer (July-August), while in winter it shrinks to less than 1000 km2, exposing mudflats and smaller independent sub lakes. We conducted this study in two National Nature Reserves and one Provincial Nature Reserve of Poyang Lake (i.e., 16 sub lakes from each of PNNR and NNNR, and 10 sub lakes from DPNR) (Table 1). Winter surveys (20-27 Jan 2016; 09-26 Jan 2017; 25-28 Jan 2018; 09-26 Jan 2019 and 07-15 Jan 2020) were carried out in each of the 42 sub lakes when the population of birds was relatively stable [2,3,31].
Birds were surveyed from one to five vantage points in each sub lake with binoculars and a spotting scope for five consecutive winter seasons. However, some sub lakes of DPNR were not surveyed during Survey 3 (2018) due to weather conditions and transportation problems. The distance between any two observation points was at least 2-3 km to avoid double counting, and at least 20% to 25% of the study area was covered. Large flocks were counted by dividing them into groups of 10, 20 or 50 individuals to improve the accuracy of counting [33]. The time spent at each survey site varied depending on the size of the sub lake, bird population size and visibility. For identification and categorization of birds into their respective taxonomic groups, digital camera photographs, bird identification guide books and published literature [34-36] were used. Similarly, the conservation status was determined using the latest IUCN assessment [32], published literature and field guide books [37].
After data collection, we computed effective species numbers, known as Hill numbers or actual diversities [18,30,38] (of order 0, 1 and 2), in the iNEXT package of R software version 3.6.1 [39]. The three Hill numbers are species richness (q = 0), the exponential of Shannon's diversity index (q = 1) and the inverse of Simpson's diversity (q = 2). Confidence intervals around Hill numbers were developed by bootstrap methods to facilitate the comparison of both rarefied and extrapolated samples [30,38].
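The authors used the iNEXT package in R. Purely as an illustration, the sketch below computes the three Hill numbers and simple percentile bootstrap confidence intervals (200 replications, matching the figure caption) from a vector of abundance counts in Python; the function names and example counts are hypothetical, and iNEXT's rarefaction/extrapolation machinery is not reproduced here.

```python
import numpy as np

def hill_number(counts, q):
    """Hill number of order q from raw abundance counts (empirical estimator)."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()                      # relative abundances of observed species
    if q == 0:
        return p.size                           # species richness
    if q == 1:
        return np.exp(-np.sum(p * np.log(p)))   # exponential of Shannon entropy
    return np.sum(p ** q) ** (1.0 / (1.0 - q))  # e.g., q = 2: inverse Simpson

def bootstrap_ci(counts, q, n_boot=200, level=0.95, seed=1):
    """Percentile bootstrap CI: resample individuals with replacement."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    n = counts.sum()
    reps = [hill_number(rng.multinomial(n, counts / n), q) for _ in range(n_boot)]
    lo, hi = np.percentile(reps, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

# Hypothetical abundance vector for one site-year (not data from the paper)
counts = [1200, 800, 350, 90, 40, 12, 5, 3, 1, 1]
for q in (0, 1, 2):
    print(q, round(hill_number(counts, q), 2), bootstrap_ci(counts, q))
```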
Results
In total, 58 bird species grouped into 9 orders and 13 families were observed in the study sites (Table A1). The order Charadriiformes consisted of the highest number of families (4 families and 13 species). The number of species in the order Anseriformes was exceptionally high (15 spp.), followed by Charadriiformes (13 spp.). The number of species in the order Anseriformes was higher in NNNR than in PNNR and DPNR (Figure 2). Two globally threatened bird species, the Siberian Crane (critically endangered) and the Oriental Stork (endangered), were recorded in our study sites, and the Eurasian Curlew, Black-tailed Godwit and Northern Lapwing were near-threatened species recorded there. Hence, these study sites not only harbored the highest waterbird richness and abundance, but also provided a home for the highest proportion of the most endangered species (Table A1).
Our survey data showed that the Siberian Crane and White-naped Crane were wintering at all sites on Poyang Lake (PNNR, NNNR and DPNR) (Table A1). Similarly, our results suggest that the highest numbers of Oriental Stork and Siberian Crane were mainly distributed in PNNR. Oriental Storks were recorded in all five consecutive years at each study site (Tables 2 and A1). DPNR had the highest proportion of Ruddy Shelduck and the lowest of Lesser White-fronted Goose and Swan Goose (Tables 2 and A1).

Table 2: Vulnerable and endangered over-wintering aquatic bird species counts at each section of Poyang Lake between the years 2016 and 2020 in the winter season at the three study sites.

This study showed variation in effective species numbers among the three study sites and the survey years (Figure 3). For example, in Survey 1 (2016), the highest observed effective species numbers, 25 bird species, a Shannon index of 9.70 and a Simpson index of 6.84, were observed in NNNR, with corresponding asymptotic estimators of 26, 2.27 and 0.85, respectively, followed by DPNR (24, 6.57 and 4.85) (Figure 3a). The confidence interval plots of species richness overlapped in all surveys except for DPNR, which indicates no significant difference in species richness between the two National Nature Reserves across all surveys. However, the Shannon diversity and Simpson diversity plots of Survey 3 (Figure 3c) did not overlap; hence, significant differences in Shannon diversity and Simpson diversity were observed between the survey sites in Survey 3.
The confidence intervals for species richness overlapped among all surveys except at DPNR, indicating no significant differences in species richness across surveys for the other two nature reserves. However, the Shannon diversity and Simpson diversity intervals for Survey 3 (Figure 3c) did not overlap; hence, significant differences in Shannon and Simpson diversity were observed between the survey sites in Survey 3. This study also showed substantial temporal variation in biodiversity indices among the five consecutive surveys, with the highest values recorded in Survey 4 (Figure 4a), whose corresponding asymptotic estimators for species richness, Shannon diversity and Simpson diversity were 25, 2.28 and 0.87, respectively. However, among Survey 1, Survey 2 and Survey 4 there was no significant difference in species richness (q = 0), because the confidence intervals overlapped.
Similarly, variation in biodiversity indices among the study sites (PNNR, NNNR and DPNR) was observed. For instance, the highest diversity values in the reference sample (non-standardized data) were found in NNNR, with a species richness of 49, a Shannon diversity of 16.37 and a Simpson diversity of 10.35 (solid points in Figure 4b). The corresponding asymptotic estimators for species richness, Shannon diversity and Simpson diversity (i.e., Hill numbers for q = 0, 1, 2) were 51, 2.80 and 0.90, respectively. In contrast, the lowest values of the three metrics were noted in DPNR: 39, 9.92 and 6.11 (solid points in Figure 4b), with corresponding asymptotic Hill numbers of 40, 2.30 and 0.84.
Figure 4. Abundance data-based rarefaction and extrapolation of Hill numbers of order q = 0, 1, 2 (species richness (q = 0), Shannon's diversity (q = 1) and inverse Simpson's diversity (q = 2)), for the entire five consecutive survey years (a) and for the three study sites during the whole study period (b). The solid line is the rarefaction curve and the dotted line is the extrapolation curve, which extends up to double the size of the reference sample. The shaded area represents 95% confidence intervals obtained using the bootstrap method based on 200 replications.
The number of species observed in each survey year at each study site ranged from 29 (NNNR) to 4 (DPNR) (Figure 5). For example, the results for PNNR in Survey 2 (2017) are shown in Figure 5a.
Figure 5. Abundance data-based rarefaction and extrapolation of Hill numbers of order q = 0-2 (species richness (q = 0), Shannon's diversity (q = 1) and inverse Simpson's diversity (q = 2)), for the five consecutive survey years (2016-2020) of PNNR (a), NNNR (b) and DPNR (c). The solid line is the rarefaction curve and the dotted line is the extrapolation curve, which extends up to double the size of the reference sample. The shaded area represents 95% confidence intervals obtained using the bootstrap method.
Discussion
The western and southwestern parts of Poyang Lake play essential roles in the conservation of wintering aquatic bird species. They harbor high species richness and abundant waterbirds during the winter season. This may be due to large intra-wetland variation [2,40-42] and more abundant food sources [15,43-45] during the winter season. Additionally, fewer disturbances exist in the western and southwestern areas.
PNNR, NNNR and DPNR are vital wetland ecosystems that provide habitat for various aquatic birds. Every sub-lake with a continuous water surface and distinct boundaries within Poyang Lake accommodates various waterbird species during the winter. However, the distributions of waterbirds were not uniform: some areas were populated by many individuals of certain species, whereas others held only a few birds. For example, Ruddy Shelduck was only recorded in DPNR. Moreover, within the sub-lakes, aquatic birds showed slight changes in their distribution. These distributional changes may be due to food availability [7,9,46,47], habitat area and water depth [28,41,48-52], protection status [53,54] and vegetation availability [5,6,12,15].
Temporal variation in biodiversity indices was observed among the three study sites and across the survey years. The highest diversity values were observed in NNNR and in Survey 4 (2019), whereas the lowest were noted in DPNR and in Survey 3. During Survey 3 (2018), some sub-lakes of the study areas could not be surveyed because of weather conditions and ferry transportation problems; consequently, the lowest number of species was recorded. Additionally, differences in species diversity among the study sites and survey years could also be associated with differences in habitat characteristics and the feeding habits of birds [41,43,55]. For example, DPNR covers the main water body of Poyang Lake and has a large area of deep water, which lowers its suitability for some bird species. Consistent with previous studies [3,56,57], all the study areas provide essential habitats and support a considerable number of bird species, including essential wintering endangered migratory aquatic birds.
Similarly, in agreement with other studies in the same area [3,58,59], Anseriformes was the most dominant order, followed by Charadriiformes. The composition of the aquatic birds in the study areas underwent considerable changes during the study period. For instance, Black-tailed Godwit, which was recorded in earlier studies [17,26], was only observed in our first survey. White-naped Crane and Hooded Crane were mainly observed in PNNR (i.e., Zhu Shi Hu, Bang Hu, Da Hu Chi and Sha Hu).
Significant populations of Greater and Lesser White-fronted Geese (Anser albifrons and Anser erythropus), Swan Geese (Anser cygnoides) and Tundra Swans (Cygnus columbianus) inhabited Poyang Lake during the winter seasons, which may reflect the availability of their preferred habitats [28,59,60]. However, their abundance fluctuated greatly, in agreement with previous studies [58,59], and their global populations are declining due to habitat destruction [54,61,62]. Globally threatened species, such as Siberian Crane, White-naped Crane and Oriental Stork, also had relatively high abundances. The estimated total population of Siberian Cranes is 3800-4000 [32], and an earlier study recorded 3750 Siberian Cranes in the entire Poyang Lake [32]; in this study, 159-2483 (minimum and maximum, hereafter min and max) Siberian Cranes were recorded. The estimated total population of White-naped Cranes is 6250-6750 [63], with 500-1000 previously recorded in the entire Poyang Lake [63]; in this study, 525-3489 (min and max) White-naped Cranes were recorded. Similarly, the estimated total population of Oriental Storks is 1000-2499 [64], and an earlier study recorded 4052 in the entire Poyang Lake [64]; here we recorded 83-838. This study showed that 20.21% (Siberian Crane), 52.8% (White-naped Crane) and 43% (Oriental Stork) of these populations were found within PNNR, NNNR and DPNR combined.
From this, we conclude that waterbird population numbers in the study areas (24.2% of the entire Poyang Lake) varied from year to year. This may reflect population decreases or overestimation of population sizes in earlier studies. According to this study, the average yearly abundances of these globally threatened species at the three study sites corresponded to 10.11%, 6.09% and 99.04% of their IUCN global population estimates, respectively. The storks were also commonly found in only a few sub-lakes (i.e., Chang Hu, Zhan Bei Hu and San Ni Wan in NNNR and Mei Xi Hu in DPNR), whereas the Siberian Crane and White-naped Crane were commonly found in PNNR (i.e., Bang Hu and Da Cha Hu), consistent with other studies on cranes [6,26].
All three nature reserves have some infrastructure and competent staff and have been performing well in aquatic bird monitoring [6,41]. However, some local people lack awareness of conservation laws [58], and DPNR lacks funds. An additional essential constraint on aquatic bird protection is the lack of management of most sub-lakes [16,29,65]. Therefore, continuous monitoring and awareness creation [66-68] among local communities regarding the long-term conservation of birds around the sub-lakes are required. Similarly, continuous quantitative surveys and ecological studies of wintering waterbirds across all the sub-lakes of Poyang Lake also need more attention.
Conclusions
The study demonstrated that a large number of bird species over-winter in Poyang Lake. Interestingly, some globally endangered bird species inhabit the sub-lakes of Poyang Lake, making this site an important conservation area. Therefore, intensive, coordinated conservation action that protects promising wintering stopover sites should be undertaken to maintain and increase bird diversity in this large freshwater wetland area. Similarly, for the long-term conservation of birds around the wetland, continuous monitoring of bird species, deliberate improvement of the sites and awareness creation among the local community are recommended. In particular, DPNR needs more attention. Future studies should also focus on the flyways of the migratory bird species and on organizing an online taxonomic database of Poyang Lake waterbirds. | 2020-08-13T10:10:36.289Z | 2020-08-10T00:00:00.000 | {
"year": 2020,
"sha1": "08b7e902f30b08857d2ed6deff82eaaa2d0cd2d2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-2818/12/8/308/pdf?version=1597025081",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5dc7ca4157ec55dcff4d954728cc68638dc06947",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Geography"
]
} |
3643994 | pes2o/s2orc | v3-fos-license | Extramedullary haematopoiesis in the kidney
Extramedullary haematopoiesis (EMH) is the development of haematopoietic tissue outside the bone marrow and it most often occurs in the liver and spleen. Renal EMH is quite rare and there are very few case reports concerning the kidney. We describe two cases of ‘renal histologically documented EMH’ and, in particular, in the second of these two, the EMH tissue coexists with a clear cell renal carcinoma. Although rare, these clinical pictures raise some questions about the role of needle biopsy in the management of renal masses that present a diagnostic dilemma, especially in cases without involvement of other abdominal or intrathoracic organs.
Introduction
Renal masses of uncertain aetiology may be discovered incidentally and, in some cases, the diagnosis is difficult to establish based on imaging examination alone. Extramedullary haematopoiesis (EMH) refers to the development of haematopoietic tissue outside the bone marrow and normally occurs in the reticuloendothelial system (liver, spleen and lymph nodes). The involvement of other parenchymatous organs is rare and there are only sporadic reports concerning the kidney [1-8]. We describe two cases of 'renal histologically documented EMH', the first of which mimicked a bilateral malignant tumour of the kidney in a patient with a known history of polycythaemia vera, and the second of which was observed in an elderly male with a recent diagnosis of idiopathic myelofibrosis.
Case 1
An 80-year-old man, diagnosed with a myeloproliferative disease (polycythaemia vera), was admitted after ultrasonography and computed tomography (CT) detected bilateral parapyelic solid renal lesions (Figure 1A) that could simulate renal carcinoma. The right mass (6.5 cm in size) infiltrated the pelvicalyceal system, causing an extrinsic mass effect, and extended into the perirenal spaces. The left solid lesion was 2.3 cm in size. Investigations showed a haemoglobin level of 121 g/L (12.1 g/dL); white blood cell count (WBC) 20.1 × 10⁹/L (20.1 × 10³/µL); platelet count (PLT) 82 × 10⁹/L (82 × 10³/µL); plasma creatinine 141.4 µmol/L (1.6 mg/dL) and estimated glomerular filtration rate (eGFR) 0.70 mL/s (42 mL/min). As some doubts persisted about the moderate contrast enhancement, a CT-guided needle biopsy was performed without complications. The histological examination was compatible with a final diagnosis of EMH, containing cells of three distinct lineages, including myeloid and erythroid cells and rare megakaryocytes. Immunohistochemical staining was positive for myeloperoxidases and glycophorin (Figure 2), while CD34 staining was negative. No other signs of EMH were detected in the abdominal parenchymas. A subsequent bone marrow biopsy indicated post-polycythaemia myelofibrosis and he was treated with hydroxyurea and allopurinol. The patient is still alive 28 months after hospital admission.
Case 2
A 79-year-old man with a previous history of ischaemic cardiopathy was admitted to another department with persistent fever and complaints of fatigue and weakness. Examination revealed splenomegaly. Haemoglobin was 113 g/L (11.3 g/dL), WBC 17 × 10⁹/L (17 × 10³/µL), PLT 676 × 10⁹/L (676 × 10³/µL); plasma creatinine 101.6 µmol/L (1.15 mg/dL); eGFR 1.01 mL/s (61 mL/min) and lactate dehydrogenase 1394 U/L. Abdominal ultrasonography and magnetic resonance imaging (MRI) showed perirenal infiltrating tissue associated with hepatosplenomegaly. Given the suspicion of lymphoma, the patient underwent an osteomedullary biopsy that showed idiopathic myelofibrosis. A CT scan (Figure 1B) confirmed bilateral perirenal tissue with modest contrastographic impregnation and showed a solid mass in the lower pole of the right kidney with intense contrast enhancement. The patient was referred to us for a CT-guided needle biopsy, which revealed the co-presence of two different lesions: on the right, a clear cell renal carcinoma, while the bilateral perirenal tissue was haematopoietic tissue, confirmed by immunohistochemical cell phenotyping. The patient underwent a polar right nephrectomy and he is still alive 24 months after diagnosis with minor renal dysfunction: plasma creatinine 114.9 µmol/L (1.3 mg/dL) and eGFR 0.88 mL/s (53 mL/min).
Discussion
The kidney is an unusual site for the occurrence of EMH, and clinically, renal EMH can be asymptomatic. There have been fewer than 20 previous reports of renal EMH involvement [1-8]. Renal involvement can be parenchymal, intrapelvic or perirenal. In the parenchymal type, the kidneys may either be enlarged or have focal lesions, and the masses may be indistinguishable from renal cell carcinoma [2,3]. Pelvicalyceal or hilar involvement is often an extension of a parenchymal lesion pattern, and at this site the EMH tissue may cause obstructive renal failure [4,5]. In the perirenal type, the soft tissue encases both kidneys, as in our Case 2. The bilateral perirenal localization of EMH may sometimes mimic a renal lymphoma [6].
The differential diagnosis of a perirenal or parapelvic mass of uncertain aetiology includes tumours, lymphomas, lipomatosis and renal inflammatory or infectious tissue [2,6]. The role of nuclear medicine imaging or of FDG-PET/CT in resolving such diagnostic problems is still a controversial issue [6]. In both our cases, we chose a CT-guided biopsy approach to arrive at a final diagnosis. A histologically proven diagnosis of EMH in our patients could avoid unnecessary nephrectomy and contribute to preserving their renal function.
In all but two previous reports, renal EMH has occurred in association with chronic haematological disorders. In two cases, small foci of 'pure erythropoiesis' were found within areas of a clear cell renal carcinoma in patients without underlying haematological disease [9,10]. In both of these cases, the authors suggested that the abnormal erythroid proliferation may have been related to a local erythropoietin (EPO) excess produced by the malignant cells [9,10]. In contrast, our Case 2 is quite unique because we found the coexistence of a lower-pole clear cell renal carcinoma and perirenal EMH encasing both kidneys in a patient with concomitant idiopathic myelofibrosis. We believe that in our second case, the EMH was due to the haematological disease rather than to an EPO excess.
The pathophysiology of solid organ involvement in EMH is still not fully understood. It has been speculated that the haematopoietic cells are derived from resident mesenchymal pluripotent cells that proliferate in response to a disease-related stimulating factor, or that they arise from the migration of stem cells from the bone marrow [8]. The renal localization presents some intriguing aspects: (i) does the kidney, with its scarcely represented reticuloendothelial tissue, maintain in adult life a niche for haematopoietic stem cell differentiation? (ii) Is a local intrarenal EPO excess able to drive stem cell migration and promote EMH proliferation?
Conclusions

EMH in the kidney represents an interesting 'speculative challenge' in terms of differential diagnosis with other soft tissue masses. Ultrasound- or CT-guided needle renal biopsy might be included in the diagnostic algorithm to better manage dubious cases, and it may also be very useful in guiding less aggressive treatment.
Conflict of interest statement. None declared. | 2018-04-03T05:57:23.525Z | 2012-04-01T00:00:00.000 | {
"year": 2012,
"sha1": "4313cb78fa29e5f32068b3eb2113314f393388b9",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/ckj/article-pdf/5/2/143/983629/sfs015.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4313cb78fa29e5f32068b3eb2113314f393388b9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
204734642 | pes2o/s2orc | v3-fos-license | Low phase noise master oscillator generation and distribution for ALS and ALS-U
The coax-based MO distribution system in the ALS is going to be replaced by a modernized, lower-phase-noise and more interference-tolerant version, ready to support ALS-U operation. System aspects are shown, and several commercial analog and digital optical transceiver modules are compared for their suitability in this application. Furthermore, recent phase noise optimization efforts in the ALS RF system are discussed, and several prototypes for a custom-built, low-phase-noise, frequency-adjustable master oscillator around 500 MHz are shown.
MO DISTRIBUTION
The Advanced Light Source (ALS) is clocked by a single Master Oscillator (MO), running at f1 ≈ 499.6 MHz and distributed to several clients across the accelerator complex through phase-stable coax (Andrew LDF4-50A Heliax). A 12x distribution chassis, based on 10 W RF power amplifier modules (MHW709-3) and a custom level control loop, provides enough power to drive the long coax cables [1].
Additionally, a divide-by-4 clock is generated by an AD9513 evaluation board in the same rack and distributed, without additional amplifiers, to the nearby timing, gun and buncher LLRF systems. The S-band linac is clocked by the f1 distribution and derives its 6 · f1 ≈ 3 GHz RF clock through a local multiplier. The clock tree is illustrated in Fig. 1.
The f1 distribution system has been operational with high reliability since its installation in 1989, when the ALS was built. Nonetheless, many of its parts have become obsolete. Additionally, more beamline users are sensitive to timing and require a low-jitter frequency reference. The 12-channel limit has been exhausted, and workarounds with RF power splitters have caused phase shift problems in the past when a port was not properly terminated.
The current plans for the ALS upgrade project (ALS-U) foresee a new storage ring based on a multi-bend achromat lattice. Its circumference is shorter by ½ RF wavelength (≈ 30 cm), increasing its RF frequency to f2 = f1 · (609/608). The choice was made to operate the accumulator and storage ring at f2 while keeping the injector and booster ring running at f1 [2]. This avoids the need to re-align or, in the case of the linac, re-build these accelerators. The ALS-U layout, as currently planned, is shown in Fig. 2. Last but not least, the distribution chassis has been found to add a significant amount of phase noise to the signal, as shown in Fig. 3.
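As a quick numerical check of the dual-frequency scheme above (a sketch; f1 is the approximate value quoted earlier, so the exact operational frequencies will differ slightly):

```python
C0 = 299_792_458.0                  # speed of light in vacuum, m/s

f1 = 499.6e6                        # injector/booster RF frequency, Hz (approximate)
f2 = f1 * 609 / 608                 # accumulator/storage-ring RF frequency

print(f"f2      = {f2 / 1e6:.4f} MHz")           # ~500.4217 MHz
print(f"f2 - f1 = {(f2 - f1) / 1e3:.1f} kHz")    # = f1/608, ~821.7 kHz

# Half an RF wavelength at f1, matching the ~30 cm circumference change.
print(f"lambda/2 at f1 = {C0 / (2 * f1) * 100:.1f} cm")
```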
For these reasons, the ALS MO distribution system will be upgraded in the near future. An optical-fiber-based solution has been chosen due to its many advantages over coax-based systems: excellent EMI and crosstalk rejection, inherent galvanic isolation, significantly lower transmission loss, higher fan-out capability, and the absence of powerful microwave distribution amplifiers with their potential power dissipation and signal quality issues.
The commercial RF-over-fiber system from Vialite has been chosen due to its very good phase noise performance. The added phase noise of an optical transmitter, a 16x active optical splitter and an optical receiver, shown in Fig. 3 (green trace), is negligible compared to the MO phase noise (orange trace).
Furthermore, the Vialite system provides useful reliability features, such as redundant power supplies, a chassis with hot-swappable transceiver modules, blind-mate connectors and the ability to monitor the system over Ethernet.
The dual-frequency scheme of ALS-U is supported, and the system offers full flexibility, as each transmitter/receiver pair can operate at a different frequency.
RF signals within a usable bandwidth of 1 GHz are amplitude modulated onto an optical carrier. Amplification and fan-out happen in the optical domain via a commercial 'lossless splitter', which provides extremely high isolation between the output ports. Hence a bad RF termination cannot lead to phase errors on other outputs.
Each distribution end-point will be equipped with a standalone optical receiver module and, if necessary, with an additional microwave power amplifier and bandpass filter to reduce out-of-band spurious signals. The components to be used and a possible distribution scenario are shown in Fig. 4.
NEW MO FOR ALS AND ALS-U
The requirements for a new ALS MO include the lowest possible phase noise and low spurious coherent signals (spurs) in the 10 Hz to 10 kHz offset range. For slow orbit feedback, the frequency set-point needs to be adjustable in 1 mHz increments around f1 ± 100 kHz, with an update rate of ≈ 1 Hz. Output phase and amplitude need to be continuous during these adjustments.
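A rough sketch of what the 1 mHz step requirement implies for a DDS-style synthesizer: the frequency resolution of an N-bit phase accumulator is f_clk / 2^N, so the required accumulator width follows directly. The two clock rates below are those of the prototype hardware discussed later; the helper name is ours.

```python
from math import ceil, log2

def accumulator_bits(f_clk, f_step):
    """Minimum phase-accumulator width giving a frequency resolution f_step."""
    return ceil(log2(f_clk / f_step))

for f_clk in (800e6, 5.12e9):           # DDS clock rates used in the prototypes
    bits = accumulator_bits(f_clk, 1e-3)
    step48 = f_clk / 2**48              # resolution of a 48-bit tuning word
    print(f"{f_clk / 1e9:.2f} GHz clock: {bits} bits needed, "
          f"48-bit step = {step48 * 1e6:.2f} uHz")
```

Even at 5.12 GHz, about 43 bits suffice, so a 48-bit tuning word comfortably exceeds the 1 mHz requirement.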
The ALS operational MO was switched from an HP8644B to a Holzworth HS9001A in January 2019. This reduced the integrated phase noise within 1 Hz to 1 MHz from 11.5 ps to 0.5 ps. For frequency tuning, the old MO relied on a workaround using an external DC control voltage on its FM modulation input, which has caused scaling and out-of-range problems in the past. The new MO can update its digital frequency set-point without phase glitches and does not require this workaround, simplifying ALS operation.
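The integrated jitter figures above follow from integrating the single-sideband phase noise L(f) over the offset band and converting the RMS phase to time at the carrier. A sketch with an invented piecewise noise curve (not measured HS9001A data):

```python
import numpy as np

def rms_jitter(f_offset, L_dBc_Hz, f_carrier):
    """RMS timing jitter from a single-sideband phase noise curve L(f)."""
    # Convert dBc/Hz to phase PSD in rad^2/Hz; factor 2 counts both sidebands.
    s_phi = 2.0 * 10.0 ** (np.asarray(L_dBc_Hz, dtype=float) / 10.0)
    # Trapezoidal integration over the offset band -> total phase variance.
    var = np.sum(0.5 * (s_phi[1:] + s_phi[:-1]) * np.diff(f_offset))
    return np.sqrt(var) / (2.0 * np.pi * f_carrier)   # radians -> seconds

f = np.logspace(0, 6, 601)                 # 1 Hz .. 1 MHz offset frequencies
L = np.interp(np.log10(f), [0, 2, 4, 6], [-90.0, -110.0, -130.0, -150.0])
print(f"{rms_jitter(f, L, 499.6e6) * 1e15:.0f} fs RMS (1 Hz to 1 MHz)")
```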
Previous experience has shown that the ALS infrared beamline is sensitive to spurs in the MO phase noise spectrum [3], particularly in the 10 Hz to 10 kHz carrier offset range. These spurs transfer through the LLRF system onto the beam, modulating the bunch arrival time. A good example is a spur around 3.75 kHz of about -100 dBc magnitude, which was tracked down to a switch-mode power supply within the SRRF klystron drive amplifiers. The spur was clearly visible in the beam spectrum of the longitudinal feedback system and in the spectral measurement results of the ALS infrared beamline, as shown in Fig. 5. The spur magnitude was reduced by > 20 dB after replacing the drive amplifiers in May 2018, improving the signal-to-noise ratio during infrared beamline measurements.

Figure 5. The spectral infrared beamline measurement shows noisy data before (blue trace) and clean data after (orange trace) a new klystron drive amplifier was installed in the storage ring RF system; the old drive amplifier caused a spur at 3.75 kHz offset of ≈ -100 dBc magnitude.
Keeping this sensitivity to spurs in mind, the blue trace in Fig. 6 indicates some room for improvement for the HS9001A. As no other commercial instrument with better broadband phase noise, better spurious performance and phase-continuous frequency adjustment capability could be found, two ideas for a custom-built MO were investigated further.
AD9912 DDS + Mixer
A clean, fixed 400 MHz frequency reference is split into two channels. One is doubled and used as the clock for an AD9912 Direct Digital Synthesis (DDS) chip, generating an adjustable frequency of ≈ 100 MHz. The other is used as the Local Oscillator for a mixer, to up-convert the DDS output to an adjustable 500 MHz MO output. The setup for phase noise and spur measurements is shown in Fig. 7. The internal signal generator of the FSWP signal source analyzer was used as the 400 MHz frequency reference. The FSWP rejects the noise of its internal source, hence this is an additive phase noise measurement. For the operational system, a high-quality fixed-frequency OCXO needs to be used, which will add its own noise to the budget. The measured phase noise is shown in Fig. 6 (orange trace). While the phase noise performance is on average the best of the MO sources considered so far, the setup suffers from significant spurs in the sensitive frequency range. These spurs move with the frequency set-point and are hence hard to control. They originate from the AD9912 output and are inherent to its limited 14-bit DAC resolution [4,5].
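A minimal frequency-planning sketch for this scheme, assuming the AD9912's 48-bit frequency tuning word (FTW); register-level programming and calibration are omitted, and the target value is illustrative:

```python
F_REF = 400e6                # clean fixed reference, Hz
F_CLK = 2 * F_REF            # doubled reference clocks the DDS (800 MHz)
ACC_BITS = 48                # AD9912 frequency tuning word width (assumed)

def ftw_for(f_out, f_clk=F_CLK, bits=ACC_BITS):
    """Nearest tuning word and the output frequency it actually produces."""
    ftw = round(f_out / f_clk * 2**bits)
    return ftw, ftw / 2**bits * f_clk

target = 499.6e6                         # desired MO output frequency
ftw, f_dds = ftw_for(target - F_REF)     # DDS supplies ~99.6 MHz pre-mixer
print(f"FTW = 0x{ftw:012X}")
print(f"DDS out = {f_dds / 1e6:.9f} MHz, "
      f"MO out = {(F_REF + f_dds) / 1e6:.9f} MHz after up-conversion")
```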
Direct synthesis with AD9164 DAC

To improve upon the spurious performance, a DDS with higher resolution than the AD9912 is needed. The AD9164, a modern DAC chip, achieves 16-bit resolution at up to 12 GSps and is hence capable of directly synthesizing f1 without the need for up-conversion. It contains internal DDS functionality, which was used to generate a test tone at 499.65 MHz to evaluate its phase noise performance. The sampling clock f_DAC = 5.12 GHz was chosen for good spur performance at ALS carrier frequencies and was derived from the FSWP internal signal generator, as shown in Fig. 8.
The green trace in Fig. 6 shows the AD9164 added phase noise. While the average noise floor is slightly higher than for the AD9912 setup, there are no significant spurious signals within ±2 MHz of the carrier. This was further verified with multiple spectrum analyzer measurements at different carrier frequency (f_C) set-points, as shown in Fig. 9, confirming a Spurious Free Dynamic Range (SFDR) of > 117 dB. Both measurements show some larger spurs in the -90 dBc range at ≈ 3 MHz offset. These are far enough from the carrier to be mitigated by a narrow bandpass filter at the AD9164 output.
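One way to see why the choice of f_DAC matters is to fold the low-order harmonics of the output tone back into the first Nyquist zone and check their distance from the carrier. The sketch below covers only this folding mechanism; DDS truncation spurs and clock-related spurs are separate contributions, so this is indicative rather than a full spur budget:

```python
F_DAC = 5.12e9                      # DAC sampling clock, Hz

def folded(freq, f_s=F_DAC):
    """Alias an arbitrary tone into the first Nyquist zone [0, f_s/2]."""
    freq = freq % f_s
    return f_s - freq if freq > f_s / 2 else freq

f_c = 499.65e6                      # test-tone carrier used above
for m in range(2, 11):              # low-order harmonics of the output tone
    f_spur = folded(m * f_c)
    print(f"H{m:2d} folds to {f_spur / 1e6:9.3f} MHz "
          f"({abs(f_spur - f_c) / 1e6:8.3f} MHz from the carrier)")
```

For this clock and carrier choice, no folded harmonic lands closer than about 120 MHz to the carrier, consistent with the clean ±2 MHz window observed.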
A dual-output version (AD9174) of this DAC is available, which can generate the additional f2 for ALS-U operation. When testing the internal DDS functionality of the AD9174, it was found that updating the Frequency Tuning Word causes the output phase to jump to random values. Hence the DDS logic needs to be implemented in an external FPGA. This adds complexity but also allows for full flexibility and makes it possible to implement advanced features like DDS spur suppression through dithering or destructive interference. The Xilinx VC707 evaluation board was chosen for this purpose; it can drive up to two AD9174-FMC-EBZ DAC boards and hence can provide up to four independent RF output channels.
Synthesizing the two rationally related frequencies f1 and f2 can be achieved with two independent phase accumulators, as long as the Frequency Tuning Words are updated synchronously within the same DSP clock cycle to keep the two phases in sync. To achieve precise frequency ratios other than N/2^M, modulo logic is required, which also needs to be updated synchronously. Further work is necessary to synchronize the two phase accumulators at regular intervals, which would be a robust way of avoiding the accumulation of phase errors between the two outputs.
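A software-only sketch of the modulo idea: with a common modulus divisible by 608, two accumulators can hold the exact 609/608 ratio indefinitely, and retuning swaps only the increment, so the output phase stays continuous. The modulus and increments below are illustrative, not the actual FPGA design.

```python
class RationalDDS:
    """Phase accumulator with a programmable modulus.

    The output frequency is f_clk * increment / modulus, so two channels
    sharing one clock and one modulus can hold an exact rational
    frequency ratio with zero long-term phase drift.
    """

    def __init__(self, increment, modulus):
        self.inc, self.mod, self.phase = increment, modulus, 0

    def retune(self, increment):
        # Only the increment changes; the phase register carries over,
        # so retuning is inherently phase-continuous.
        self.inc = increment

    def tick(self):
        self.phase = (self.phase + self.inc) % self.mod
        return self.phase / self.mod          # phase in cycles, [0, 1)


F_CLK = 5.12e9                     # shared DAC/DSP clock, Hz
MOD = 608 * 2**38                  # common modulus, divisible by 608

ideal = 499.6e6 / F_CLK * MOD      # ideal (non-integer) increment for f1
A1 = round(ideal / 608) * 608      # snap to the nearest multiple of 608
A2 = A1 // 608 * 609               # exactly f1 * 609/608, no rounding error

ch1, ch2 = RationalDDS(A1, MOD), RationalDDS(A2, MOD)
for _ in range(3):                 # both channels advance on the same clock tick
    p1, p2 = ch1.tick(), ch2.tick()

print("f1 =", A1 / MOD * F_CLK / 1e6, "MHz")
print("exact ratio held:", A2 * 608 == A1 * 609)   # True
```

In hardware, the two `retune` operations would be issued in the same DSP clock cycle, mirroring the synchronous tuning-word update described above.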
CONCLUSION
A new MO distribution system based on commercial RF-over-fiber technology has been proposed and evaluated for use in the ALS. Its active optical splitters provide significant advantages compared to the traditional approach of amplifying and splitting the RF signal in the electrical domain.
Two custom-built candidates for a cleaner ALS MO have been investigated, both having the potential to improve upon the performance of the current MO (HS9001A) in terms of phase noise and spurs.
The approach based on the fast DAC (AD9164) is preferred due to its minimal additional hardware requirements, and carrying out the frequency synthesis in an FPGA provides full flexibility. Even though the average phase noise floor is slightly higher compared to the AD9912-based setup, its spur performance is much improved. By carefully choosing the DAC sampling rate, an essentially spur-free window has been found within ±2 MHz of the carrier when operating at typical ALS frequencies.
More work is needed to design and implement the frequency synthesis logic and the JESD204B interface on the FPGA. | 2019-10-16T00:53:18.000Z | 2019-10-16T00:00:00.000 | {
"year": 2019,
"sha1": "3086c1243e2e510d968acae0aef57b79916c2452",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3086c1243e2e510d968acae0aef57b79916c2452",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
244039392 | pes2o/s2orc | v3-fos-license | Diversification and post-glacial range expansion of giant North American camel spiders in genus Eremocosta (Solifugae: Eremobatidae)
Species of camel spiders in the family Eremobatidae are an important component of arthropod communities in arid ecosystems throughout North America. Recently, research demonstrated that the evolutionary history and biogeography of the family are poorly understood. Herein we explore the biogeographic history of this group of arachnids using genome-wide single nucleotide polymorphism (SNP) data, morphology, and distribution modelling to study the eremobatid genus Eremocosta, which contains exceptionally large species distributed throughout North American deserts. Relationships among sampled species were resolved with strong support and they appear to have diversified within distinct desert regions along an east-to-west progression beginning in the Chihuahuan Desert. The unexpected phylogenetic position of some samples suggests that the genus may contain additional, morphologically cryptic species. Geometric morphometric analyses reveal a largely conserved cheliceral morphology among Eremocosta spp. Phylogeographic analyses indicate that the distribution of E. titania was substantially reduced during the last glacial maximum and the species only recently colonized much of the Mojave Desert. Results from this study underscore the power of genome-wide data for unlocking the genetic potential of museum specimens, which is especially promising for organisms like camel spiders that are notoriously difficult to collect.
Results
Matrices and phylogenies. High-throughput sequencing of ddRAD libraries generated a total of 353,916,765 reads obtained from 68 solifugid individuals (mean = 5,204,658, SD = ±8,260,842). Two samples did not pass the assembly filters and were omitted. Our first assembly, comprising loci shared by at least 17 samples, consisted of 25,092 loci with 64% missing data (Table S1). Preliminary ML phylogenies using this matrix (not shown) recovered some inconsistency in the monophyly of some species. Therefore, new assemblies were conducted excluding 24 samples with high amounts (> 95%) of missing data. Details of these matrices are summarized in Table S1.
ML analyses using the SNPs, the unlinked SNPs (uSNPs) and the full matrices of loci shared by at least 21 samples, rendered Eremocosta as monophyletic with strong support (Fig. 1B, Fig. S1). Similarly, each Eremocosta species except E. gigasella was monophyletic with 100% support. All major nodes within the genus received 100% support.
Five of the Eremocosta gigasella samples were consistently recovered as a clade sister to all remaining species. The remaining sample (DMNS ZA.21950) grouped with E. striata with strong support but produced a long branch. All of the E. gigasella samples were collected near the Dalquest Desert Research Station in the northern Chihuahuan Desert, but in different habitats. The sample sister to E. striata was collected up on a plateau, whereas the five divergent samples were found in adjacent canyonlands. Interestingly, a new species of the solifuge genus Chambria was discovered in these canyons (PEC, unpublished), as well as a myrmecophilic spider that represents a new family 12. Given these patterns, we suspect the sample that was sister to E. striata is true E. gigasella, and that the others likely represent a new species. Additionally, our single sample of E. formidabilis was recovered as sister to E. titania with strong support.
Divergence dating and ancestral area reconstructions.
Our re-analysis of the four-gene data from Cushing et al. 7, but using a more typical rate calibration for arthropods, yielded a topology that was largely congruent. The mean time to the most recent common ancestor (TMRCA) of extant Eremobatidae was estimated to be in the Miocene (18 Ma). Eremocosta was estimated to have begun diversifying in the late Miocene to early Pliocene (Fig. S2). Our analysis of RADseq data calibrated with the older dates from Cushing et al. 7 estimated the divergence of crown Eremocosta to have occurred during the mid to late Miocene, with a mean of 11 Ma (95% HPD = 5-18 Ma, Fig. S3). When calibrated with the younger dates, the TMRCA of Eremocosta was estimated to be in the late Miocene to early Pliocene, with a mean of 6 Ma (95% HPD = 5-8 Ma; Fig. S4). Ancestral area reconstructions using the arthropod rate-calibrated chronogram and the optimal model (DEC + j) suggested two areas (the Chihuahuan and Sonoran deserts) as the ancestral range for the genus (Fig. 2). Similarly, the second-best model (DIVALIKE + j) recovered the same combination of the Chihuahuan and Sonoran deserts as the ancestral range for the genus (Fig. S5). Both analyses suggested that the common ancestor of E. striata colonized the Madrean Archipelago around 2 Ma, with divergence of E. bajaensis in Californian coastal sage habitats at about the same time. Both models suggest that E. titania colonized the Mojave Desert about 2 Ma prior to inhabiting the Sonoran Desert (Fig. 2).
Testing for the evolution of sexual dimorphism. In our morphometric analysis using elliptic Fourier analysis (EFA) of the prolateral shapes of male chelicerae, PC1 explained 67% of the variation, with shapes on this component ranging from a slender cheliceral manus (E. striata) to a rounder one with more distinctive movable finger teeth (E. bajaensis, E. gigasella, and E. calexicensis; Fig. S6). PC2 explained less than 30% and separated chelicerae that exhibit a dorsal "hump" (a declivity between the dorsal surface of the manus down to the fixed finger; e.g. E. calexicensis and E. gigasella; Fig. S6). In the EFA of the prolateral shapes of female chelicerae, PC1 explained 86% of the variation and segregated the chelicerae of E. titania (Fig. S7). When PC1 was plotted onto the ML phylogeny (Fig. S7), female chelicerae did not change much until the morphology of the common ancestor of E. calexicensis and E. titania diverged from the others. The morphology of the female chelicerae of E. titania then continued to diverge and is now significantly different (a strong phenotype with a more globular cheliceral manus and wider/deeper cheliceral fingers, as shown in Fig. S6) from the other four species studied. The morphology of male chelicerae exhibited a different pattern, with early morphological divergence in E. striata and later divergence of E. titania. Taken together, cheliceral morphology is unique in both sexes of E. titania and in males of E. striata.
The MANOVA comparing sexual dimorphism recovered no significant difference between the outlines of the two sexes (F1,8 = 8.53, P = 0.10). However, the thin-plate spline iso-deformation comparison of mean shape between the chelicerae of both sexes showed the strongest differences at the dorsal "hump" of male chelicerae and in the depth and margins of the movable and fixed fingers (Fig. 2c, Fig. S7). Euclidean distances plotted on the dated topology indicate that Eremocosta spp. do not show strong dimorphism, with the exception of E. titania (with an EA > 0.25; Fig. 2c).
Population structure-E. titania. Maximum likelihood analyses of matrices e18_SNPs and e13_SNPs recovered two clades within E. titania with strong support (Fig. 3A, Fig. S8). One clade contained all samples from the Mojave Desert, whereas a larger clade grouped samples from the Sonoran Desert. This Sonoran Desert clade was subdivided into three subclades (each with 100% ultrafast bootstrap support) in agreement with their distribution areas (Fig. 3A, Fig. S8). Similarly, the structure analysis using uSNPs (e13_uSNPs) under the admixture model found K = 3 to be optimal, dividing E. titania into three distinct genetic clusters. In contrast, the structure analysis under the no-admixture model found K = 4 to be optimal, subdividing E. titania into four clusters, which partially agrees with our ML topology (Groups 1-4; Fig. 3A). In both analyses, only the genetic composition of Group 4 was inconsistent with the clades recovered as monophyletic in our ML topology. The genetic composition of Group 2, on the other hand, agreed with one monophyletic clade in our ML topology only in the analysis using the no-admixture model. The other genetic clusters (Groups 1 and 3) showed discordance between the two models and the ML topology (Fig. 3A). Lastly, discriminant analysis of principal components (DAPC) of the e13_uSNPs matrix favored the presence of three clusters, which agreed with those recovered in the structure analysis under the admixture model (Fig. S9).
Testing for admixture-E. titania. Since our structure analyses showed evidence of admixture in E. titania, we ran TreeMix with four groups to identify patterns of migration. Our results consistently showed that when two migration events were considered, with blocks of 10, 50, 100 and 1000 SNPs, one migration edge revealed gene flow from Group 4 to Group 1, with the topology in agreement with our ML analyses (Fig. 3B). When we deactivated the sample size correction on the 100-SNP block, our results showed gene flow from Group 4 to Group 3. These migration edges showed a low percentage of ancestry received from Group 4 (Fig. 3B). However, the TreeMix analysis without the sample size correction and with a 1000-SNP block showed Group 2 as the ancestral population, and gene flow from Group 4 to Group 1 (figure not shown). Further, the diveRsity analysis revealed the highest relative migration from Group 4 towards Group 1, with lower migration from Group 4 towards Group 3 (Fig. 3C, Fig. S10). These results suggest that the highest migration runs from southern sites in the Sonoran Desert north into the Mojave Desert (from Group 4 to Group 1), and indicate a putative area prone to more gene flow. Lastly, our results suggested that Group 1 could be admixed between Groups 3 and 4, with an unknown source yielding Group 2. Thus, we tested whether Group 1 and Group 2 (as a genetic cluster recovered by the structure analysis using the admixture model) are admixed between Group 3 and Group 4. Only one sample (E. titania | DMNS ZA.23689A, Fig. 3A) was more closely related to members of Groups 3 and 4 than to the members of Group 1 (Fig. 3D).
Demographic history and species distribution modelling-E. titania. Our demographic history analysis of E. titania, using Bayesian skyline plots of 1000 and 2000 nucleotides, showed a general decrease in effective population size in the late Pleistocene, when the last glacial period peaked (~22,000 years ago), followed by a recent increase (Fig. 3E, Fig. S9). The SDM generated for E. titania based on current conditions indicates
Discussion
Our phylogenetic reconstructions using SNP data consistently support the monophyly of Eremocosta, in agreement with a previous 4-gene study and a recent morphological revision 7,8 . This, however, is where the similarities end. Our SNP-based analyses all indicate, with strong support, that E. gigasella specimens collected from canyonlands of the northern Chihuahuan Desert represent an undescribed species of Eremocosta that is sister to all other sampled species (Fig. 1B). Cushing et al. 7 found strong support for a sister relationship between E. gigasella and E. striata. Likewise, our only sample of what we expect to be true E. gigasella (DMNS ZA.21950 in Fig. 1B, Fig. S1) grouped with E. striata, forming a long branch with an estimated late Miocene to Pliocene origin. Eremocosta striata, E. bajaensis, E. calexicensis, and E. titania all formed monophyletic groups with strong support. This result was expected, and confirms that their traditional, morphology-based species descriptions represent real evolutionary entities. An unexpected result, however, was the position of E. formidabilis as sister to E. titania, despite a vast geographic gap between the two species' distributions. Eremocosta formidabilis inhabits the southern Chihuahuan Desert, over 1000 km east of E. titania in the Mojave and western Sonoran deserts. We propose two scenarios that could explain this enormous disjunction.
First, the phylogenetic position of E. formidabilis could be incorrect due to contamination or missing data. The only available sample was collected in 2013 in Aguascalientes, México (the type locality of the species is Guanajuato, México). The specimen was not necessarily preserved properly for DNA work, which could explain why we only obtained ~30% of the SNPs for this sample. That said, phylogenetic analyses with RADseq data have been demonstrated to perform well even with large amounts of missing data 13,14. Furthermore, 13 of our other samples possessed as much or more missing data than E. formidabilis and were grouped with conspecifics with strong support. Another scenario is that the unexpected phylogenetic position of E. formidabilis is real. If this is the case, then E. formidabilis could be the result of a long-distance dispersal (LDD) event, as predicted to have occurred during the Pliocene in our ancestral area reconstruction. Additional samples would be needed to determine the cause of this curious result. In addition, México is undersampled for solifuges; therefore, there may be as yet undiscovered diversity of the genus Eremocosta in the southern Chihuahuan Desert region that could help explain the position of E. formidabilis.
Morphological relationships among Eremocosta species, as assessed with our geometric morphometric analyses of chelicerae shapes (excluding the VDC, see "Methods"), highlight the difficulty in delimiting solifuge species without molecular data. By studying the evolution of shape as a continuous trait, multivariate analysis revealed a unique cheliceral shape morphology in males of E. striata and E. titania, and in females of E. titania. Strong sexual dimorphism in cheliceral morphology was only found in E. titania. All other chelicerae shape morphologies were remarkably conserved.
Despite the curious positions of the two abovementioned samples, ddRAD data allowed us to generate a robust phylogeny for Eremocosta with 100% bootstrap support values at all interspecific nodes (Fig. 1B), the first of its kind for any Solifugae genus. Given how difficult they are to collect, most specimens were a decade old. Thus, our results corroborate those of other studies that underscore the power of genome-wide data for unlocking the genetic potential of museum specimens for molecular analyses 15-17. Techniques like this are especially promising for taxa that are difficult to collect, like camel spiders.
Fossil records are sparse for Solifugae and nonexistent for Eremobatidae 7,18. In spite of this limitation, our divergence dating analyses place the timing of diversification among Eremocosta spp. in a timeframe consistent with expectations, given the histories of co-distributed taxa and the desert ecosystems they occupy. As in Cushing et al. 7, initial (crown) diversification in Eremocosta was predicted to occur during the Miocene. Ancestral area reconstructions indicate that the genus probably colonized North American deserts from an ancestral region in the Sonoran Desert (Fig. 2, Fig. S5). However, given that some of the oldest lineages (E. aff. gigasella) occur in the Chihuahuan Desert, we suspect the genus actually diversified in an east-to-west pattern: moving from the Chihuahuan Desert, then the Sonoran Desert, and on to the Mojave Desert, California coastal sage, and low to mid elevations of the Madrean Archipelago. Several animal groups are similarly distributed, but few share this east-to-west pattern. Among desert plants, however, phylogenomic evidence suggests that the cactus genera Cylindropuntia and Grusonia originated in the Chihuahuan Desert region during the mid to late Miocene before migrating to and diversifying within other North American deserts 19. Additionally, several phylogeographic studies have found that Chihuahuan Desert populations are sister to all other populations in deserts west of the Cochise Filter Barrier. Molecular clock-based analyses indicate that sister lineages found on either side of the barrier diverged at various times spanning the Miocene, Pliocene, and Pleistocene, best predicted by locomotor and thermoregulatory traits 11,20,21. This timeframe corresponds with the uplift of the Rocky Mountains and climatic differentiation between the Chihuahuan and Sonoran deserts. However, recent data from co-distributed snakes identified isolation by environment, rather than vicariance or dispersal, as the primary cause of divergence in the area 22.
Interestingly, E. calexicensis and E. striata exhibit an east-to-west pattern as well. Although our sampling is sparse, a single E. calexicensis sample collected east of the Colorado River near Bullhead City, AZ is sister to all other samples to the west, suggesting a possible east-to-west colonization pattern across this 'leaky' river barrier 9. Molecular clock analyses suggest that this split is quite old, potentially dating to the Miocene, so we suspect that the Texas sample may represent a new Eremocosta species (Figs. 1, 2, Figs. S1-S5). Diversification of the four most closely related species-E. striata, E. bajaensis, E. formidabilis, and E. calexicensis-was estimated to occur during the late Miocene to Pliocene (Fig. 2). Of these, E. bajaensis is the oldest, with an estimated divergence time of about 7-5 Ma when using the arthropod rate calibration. This timeframe overlaps the period when a flooding event formed the northern third of the Gulf of California, reaching as far north as San Gorgonio Pass. Fossil data indicate that the northern gulf was flooded near-synchronously at 6.3 ± 0.1 Ma 10. Marine waters extending north through the Salton Trough would have effectively isolated Eremocosta inhabiting the Peninsular Range. If true, then the general arthropod rate has proven to work remarkably well with camel spiders, and vicariance caused by sudden flooding of the northern Gulf might be useful for calibrating molecular clocks in studies of other taxa inhabiting the region.
By integrating phylogenetics, structure and DAPC analyses, and species distribution modelling, we were able to characterize fine-scale genetic patterns in E. titania, a first for camel spiders. Results indicate that the species comprises four geographically structured groups: two in basins along the western fringe of the Sonoran Desert (Anza-Borrego Desert and Coachella Valley), one in the Mojave and Sonoran ecotone (near Twentynine Palms), and another found throughout the western Mojave Desert (Fig. S10). All except the Mojave group were narrowly distributed in desert valleys. The Mojave group was much more widely distributed, ranging from the western Mojave Desert in California northeast into southern Nevada. The group probably occurs throughout low elevations of the Mojave Desert, as predicted by our species distribution model (Fig. 3F).
The distribution of E. titania, especially the Mojave group, could have been reduced during the last glacial maximum, restricted to low-elevation areas in the western Sonoran Desert where the three narrowly distributed groups occur (Fig. 3F). Distribution modelling of other arthropods have identified the same general area as a desert refugium as well 23,24 . Therefore, we suspect that the four groups diverged when they became repeatedly isolated in a western Sonoran refugium during Pleistocene glacial cycles. The LGM model predicts that climates were not suitable at all Mojave group sites, so the Mojave group's current distribution is likely a product of significant post-glacial range expansion. This interpretation is supported by results from the demographic analyses of SNP data, which depicts late Pleistocene growth in effective population size for E. titania (Fig. 3E).
Although the largest swath of suitable late glacial habitat occurs in the south, the LGM model predicts that Death Valley could also have been a desert refugium for the Mojave group. The valley was flooded during much of the Pleistocene, forming Lake Manly, but suitable habitat could have been available for E. titania along the shoreline and in adjacent areas at higher elevations. Arachnids are known to exhibit phylogeographic patterns consistent with a model of leading-edge colonization 14,24, so if Death Valley was a refugium, then we should see a pattern of decreasing genetic diversity with distance from the valley. Sample sizes were not large enough to address this question using population genetics, but individual heterozygosity values for Mojave group individuals were greatest at middle latitudes (Fig. S12). Thus, E. titania may have expanded from two glacial refugia, one in Death Valley and another at the southern end of the range. Additional sampling, especially in Death Valley, would be needed to address this hypothesis.
The Migrate analyses provide an interesting picture of varying levels of gene flow among the four E. titania groups (Fig. 3B-D). Unsurprisingly, the southernmost groups in the western Sonoran Desert (Groups 2-4) exhibit roughly equal and moderate levels of gene flow among themselves. The strongest signal of gene flow, however, runs from the southernmost group (Group 4) north to the Mojave group (Group 1), with very little movement of genes in the other direction. This result may at first seem unlikely, given that Groups 2 and 3 are more geographically proximate to the Mojave group, and that desert habitat in the area is divided by both the easternmost extension of the Transverse Ranges (Little San Bernardino Mts) and the northernmost Peninsular Ranges (San Jacinto Mts). However, given the difficulty of collecting camel spiders, our sampling of E. titania in the western Sonoran Desert was limited and did not include known populations that occur further east in the Salton Trough. These eastern populations may have a less impeded connection with Mojave group samples to the north, thus permitting gene flow to bypass the other groups and the mountain ranges that bisect them.
Ultimately, the majority of phylogeographic structure within E. titania occurs in the western Sonoran Desert. This region, also known as the Colorado Desert, has been demonstrated to harbor significant genetic structuring in other desert animals as well; i.e. sidewinders 25,26 , toads 27 , night lizards 28 , pocket mice 29 , and scorpions 30 . As such, the area has been identified as a hotspot for genetic diversity 31 . Hadrurus arizonensis, which are large, arid-adapted scorpions, exhibit a similar pattern of genetic differentiation in low elevation refugia and subsequent expansion throughout the Mojave Desert 24 . Conversely, a lack of significant genetic differentiation was observed in flat-tailed horned lizard (Phrynosoma mcallii) populations.
Taken together, genome-wide SNP data and species distribution modelling provide compelling evidence that E. titania was severely impacted by pulses of cooler and wetter climates associated with Pleistocene glacial cycles. These large, arid-adapted predators were probably once restricted to isolated low-elevation refugia where climates remained xeric during glacial periods. As climates warmed, the species then successfully colonized new areas of suitable habitat as woodlands were predominately replaced by desert scrub ecosystems throughout the Mojave Desert.
Methods
Taxon sampling, RAD sequencing, and assembly. Genomic DNA was extracted from 68 museum-preserved specimens as well as from material collected between 2017 and 2018 (Table S2). All specimens used were from the DMNS arachnology collection and species identifications were verified by at least two experts. Appropriate permissions from museum authorities at DMNS were obtained for using material from the museum in this study. Data from all specimens used can be accessed via the Symbiota Collections of Arthropods Network (https://scan-bugs.org/portal/index.php). Sixty-five of the samples represented six of the seven species in Eremocosta (only E. gigas is missing), and three were outgroups: two samples of Hemerotrecha branchi (Eremobatidae) and one Ammotrechula sp. (Ammotrechidae). Library preparation and sequencing followed our recent protocols 14,32. In brief, we used two restriction enzymes (EcoRI-HF and ClaI) to make cuts for adapter ligations and MspI for dimer cleaving (all enzymes from New England Biolabs, Ipswich, MA). All samples were pooled and subjected to 2 × 150 paired-end sequencing on a full lane of an Illumina HiSeq X at Admera Health (South Plainfield, NJ). Raw reads were demultiplexed and assembled using iPyRAD v. 0.9 33 with default parameters. Different alignments were created by requiring loci to be shared by at least 17, 22, and 33 taxa. The amount of missing data was analyzed and samples with more than 95% missing data were dropped by repeating the assembly. We created new alignments that required loci to be shared by at least 21 taxa (hereafter referred to as alignment 'm21'). Assembly statistics are reported in Table S3.
Phylogeny, divergence dating, and ancestral area reconstruction. We used the concatenated matrices of SNPs (m21_SNPs) and uSNPs (m21_uSNPs) to infer phylogenetic relationships among Eremocosta species. For each of these matrices, we conducted maximum likelihood (ML) analyses using IQ-TREE v. 1.6.6 34 , implementing ModelFinder 35 and ultrafast bootstrap resampling 36,37 . Our team's previously published eremobatid chronogram based on four genes (COI, 16S, H3, and 28S) suggests that Eremocosta species shared a common ancestor during the Miocene, between 10 and 18 Ma 7 . This estimate was calculated using fossil calibrations for outgroup lineages, as well as a uniform prior placed on a node shared by sister species found on each side of the Trans-Mexican Volcanic Belt. However, the substitution rates derived from this approach were high. For example, a rate of 0.0379 substitutions/site per million years was estimated for COI, which is more than twice as fast as rates estimated for spiders 38 and scorpions 30 . Therefore, we also reanalyzed the original four-gene dataset in BEAST v 1.10 39 without the fossil and biogeographic calibrations, instead using a rate calibration commonly used for COI in arthropods (0.0169 subs/site/my 40 ). All other parameters were set as in the previous analysis: unlinked substitution and clock models across the four partitions, a strict clock (ucld.stdev values were less than 1.0 in preliminary runs with relaxed clocks), a Yule speciation process, and four mcmc runs of 50 million generations each, sampling every 5000 generations.
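As a rough, back-of-the-envelope illustration of why the original rate raised concern (this calculation is ours, not the authors'), the expected pairwise divergence between two lineages separated for t million years under rate r is

d = 2rt

so for lineages that split t = 10 Ma, the original COI rate gives d = 2(0.0379)(10) ≈ 0.76 substitutions/site, whereas the arthropod rate gives d = 2(0.0169)(10) ≈ 0.34; the faster rate implies far larger, more saturation-prone distances for a mitochondrial protein-coding gene, which motivates the re-analysis.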
We then used divergence date estimates from the original chronogram, as well as from the new arthropod rate-based chronogram, to calibrate two different molecular clocks for Eremocosta with our RAD data. Specifically, we used the putative origin of Eremobatidae (where Eremocosta split from Hemerotrecha) and the divergence of Eremocosta as recovered in (a) Cushing et al. 7 and (b) our new analysis. Divergence dates were estimated by analyzing m21 using the approximate likelihood calculation 41 as implemented in baseml and mcmctree, both part of the PAML v. 4.9 software package 42 . The ML tree inferred from m21_SNPs was used as the input tree, calibrated using the putative origin of Eremobatidae and the divergence of Eremocosta as discussed above. Four Bayesian inference chains were run for 10 million post-burnin generations (burn-in of 10,000) under an independent-rates model of evolution; convergence of chains was confirmed using MCMCTreeR 43 .
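Convergence was checked with MCMCTreeR in R; an equivalent quick check can be sketched in Python by correlating posterior mean node ages between two independent mcmctree runs. The layout below assumes the default tab-separated mcmc.txt output with node-age columns prefixed 't_'; it is illustrative, not the authors' code.

import pandas as pd

run1 = pd.read_csv("run1/mcmc.txt", sep="\t")
run2 = pd.read_csv("run2/mcmc.txt", sep="\t")

ages1 = run1.filter(like="t_").mean()   # posterior mean age per node, chain 1
ages2 = run2.filter(like="t_").mean()   # posterior mean age per node, chain 2

# Node ages that correlate near-perfectly across chains suggest convergence.
print("between-chain correlation r =", ages1.corr(ages2))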
We constructed a species distribution matrix to estimate ancestral areas for Eremocosta lineages by assigning each terminal taxon to the ecoregions 44 it inhabits (Table S4) with the RASP v. 4.2 package 45 . We fitted the data to six models as implemented in the R package BioGeoBEARS 46 : DEC, DEC + j, DIVALIKE, DIVALIKE + j, BAYAREALIKE and BAYAREALIKE + j. Following Turk et al. 47 , we omitted the outgroups as well as our single sample of E. formidabilis due to the possibility of contamination or bias from missing data (see "Discussion"). We ran all models in RASP with the maximum number of occupied areas set to two. We then compared all six models using the Akaike information criterion (AIC) values and Akaike weights (AICw). The model DEC + j was favored (Table S5).
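The model-selection step reduces to computing AIC = 2k - 2 lnL for each model and normalising the resulting Akaike weights. A generic sketch (with invented log-likelihoods rather than the values behind Table S5):

import numpy as np

# Hypothetical maximum log-likelihoods and free-parameter counts.
models = {"DEC": (-50.2, 2), "DEC+j": (-45.1, 3),
          "DIVALIKE": (-51.0, 2), "DIVALIKE+j": (-46.3, 3),
          "BAYAREALIKE": (-55.4, 2), "BAYAREALIKE+j": (-47.9, 3)}

aic = {m: 2 * k - 2 * lnL for m, (lnL, k) in models.items()}
best = min(aic.values())
weights = {m: np.exp(-(a - best) / 2) for m, a in aic.items()}
total = sum(weights.values())
for m, a in sorted(aic.items(), key=lambda kv: kv[1]):
    print(f"{m:15s} AIC = {a:7.2f}  AICw = {weights[m] / total:.3f}")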
Evolution of sexual dimorphism. Eremocosta morphologies are largely conserved, with most species-level differences occurring in male chelicerae. Therefore, we explored sexual dimorphism in cheliceral morphology within a phylogenetic context to determine if species diverged morphologically as they colonized and adapted to new areas, as predicted by the ancestral area reconstructions (see section above). Cheliceral shape variation was characterized using the geometric morphometric technique of elliptic Fourier analysis (EFA) with the R package Momocs 48 , following previous studies 49 . We used Adobe Photoshop® to outline monochromatic versions of cheliceral photographs that were published in a revision of the genus 8 . Outlines were imported into R, converted into lists of coordinates, and aligned using the calibrate_harmonicpower function in Momocs. Additional arguments for the EFA included the normalization of coefficients and a single smoothing iteration. The resulting coefficients were summarized using a Principal Component Analysis (PCA), with the principal components (PCs) used to visualize the variation of cheliceral shape in the morphospace.
We used a MANOVA to compare the shapes between sexes after the EFA and PCA. Only species for which both female and male photographs were available were included. Deformations between the shapes of the two sexes were determined using Thin Plate Splines with the tps_iso function in Momocs. Euclidean distances between females and males were calculated using the truss function with the scores of the first PC. The resulting Euclidean distance represents the degree of sexual dimorphism, which was plotted onto our dated topology as a function of the variation of cheliceral dimorphism through time. This approach explores the general morphology of the chelicerae, and known differences in the ventro-distal concavity (VDC) should not significantly influence the results.
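The pipeline itself was run in R with Momocs; the following Python analogue sketches the same logic - elliptic Fourier coefficients, a PCA, and a per-species female-male distance on PC1 - using the pyefd and scikit-learn packages, with synthetic ellipse outlines standing in for digitised chelicerae.

import numpy as np
from pyefd import elliptic_fourier_descriptors
from sklearn.decomposition import PCA

def ellipse(a, b, n=200):
    t = np.linspace(0.0, 2.0 * np.pi, n)
    return np.column_stack([a * np.cos(t), b * np.sin(t)])

# Synthetic stand-ins for cheliceral outlines, keyed by (species, sex).
outlines = {("sp1", "F"): ellipse(1.0, 0.50), ("sp1", "M"): ellipse(1.0, 0.80),
            ("sp2", "F"): ellipse(1.0, 0.55), ("sp2", "M"): ellipse(1.0, 0.60)}

# Normalised EFA coefficients are invariant to size, rotation and start point.
coeffs = {k: elliptic_fourier_descriptors(xy, order=10, normalize=True).ravel()
          for k, xy in outlines.items()}

keys = list(coeffs)
pc1 = PCA(n_components=2).fit_transform(np.vstack([coeffs[k] for k in keys]))[:, 0]
scores = dict(zip(keys, pc1))

# Degree of sexual dimorphism per species: distance between the sexes on PC1.
for sp in sorted({s for s, _ in keys}):
    print(sp, abs(scores[(sp, "F")] - scores[(sp, "M")]))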
Population structure-E. titania. To determine if Pleistocene climate fluctuations impacted Eremocosta, we conducted a phylogeographic analysis of E. titania, the species for which we had the most samples. First, we generated an assembly with loci shared by at least 13 of the 17 E. titania samples ('e13'; assembly statistics are reported in Table S2). Using this assembly, we assessed population structure with the Bayesian MCMC clustering approach implemented in Structure 50 and the e13 unlinked SNPs matrix (e13_uSNPs). Correlated allele frequencies without prior population information, under both the admixture and no-admixture models, were implemented for 10 independent runs for K values of 2-6, with 10,000 mcmc cycles and a burn-in of 1000 iterations. The best-fit K value was determined using the log probabilities of X|K 50 .

Testing for admixture-E. titania. Next, we determined the number of putative admixture and migration events among the populations resulting from the Structure analysis within E. titania using Treemix v. 1.13 55 . For this analysis, we used the allele frequencies from assembly 'e13', a selection of at least two "migration" events (option -m 2), runs with or without the sample size correction (-noss), and blocks of 10, 50, 100, and 1000 SNPs to account for linkage disequilibrium. The tree was unrooted. In addition, we calculated the relative migration rates among groups with divMigrate from the diveRsity package 56 , using Jost's D and Nei's Gst. Similarly, the three-population tests (f statistics) measure allele frequency correlations between populations, as first introduced in Patterson et al. 57 . These statistics are used to test for admixture in a target population from two source populations, or to measure the shared genetic drift between two populations, rooted with an outgroup. Based on our results from the Treemix analysis, we sought to determine if groups 1 and 2 were the result of an admixture event between groups 2 and 3.
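The best-fit K step of the Structure analysis above amounts to averaging the estimated ln P(X|K) over replicate runs and taking the K that maximises it; schematically, with invented values:

import numpy as np

# Hypothetical estimated ln P(X|K) from replicate Structure runs per K.
lnpk = {2: [-1520.3, -1519.8, -1521.0], 3: [-1475.2, -1476.1, -1474.9],
        4: [-1470.5, -1471.2, -1470.0], 5: [-1472.8, -1480.3, -1471.9],
        6: [-1479.4, -1490.2, -1475.0]}

means = {k: np.mean(v) for k, v in lnpk.items()}
best_k = max(means, key=means.get)
print("best-fit K:", best_k)   # K = 4 for these invented numbers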
Demographic history-E. titania. We reconstructed the demographic history of E. titania using the multi-locus Extended Bayesian Skyline Plot (EBSP) method as implemented in BEAST v 2.5.2 58 . As input data, we randomly selected different compositions of sequences from our e13_uSNP matrix (50, 100, 1000 and 2000 nucleotides). The HKY substitution model was implemented with a strict clock model with default parameters (since no mutation rate is known for Solifugae genomes). The chain length was set to 100 million generations, sampling every 5000 states, in two independent runs.

Species distribution modelling-E. titania. We developed species distribution models (SDMs) for E. titania using coordinates for the 17 sites where our samples were collected. We chose to use only samples for which we had genetic data confirming their identity. SDMs were constructed using bioclimatic data representing current and last glacial maximum (LGM) conditions, with bioclimatic interpolations downloaded from the WorldClim database 59 at 2.5′ (ca 4 × 4 km) resolution. We clipped the layers to an extent bounding the known range of E. titania, as well as potentially accessible desert habitats in adjacent areas (30.0-39.0° N and 111.0-120.0° W). We screened all 19 bioclimatic layers in each data set for multicollinearity using ENMTools 1.3 60 and removed highly correlated (Pearson's r 2 > 0.9) variables. For highly correlated pairs, we retained the layer that contributed the most in preliminary runs using all 19 layers. This approach yielded the following final predictor layers: Bioclim 1, 2, 3, 4, 5, 8, 9, 13, 14, 15, and 18. We used Maxent 3.4.1 61 to construct a present-day SDM and then projected the model onto the paleoclimatic conditions estimated for the LGM. We ran five replicates using cross-validation (equivalent to 20% testing), the complementary log-log (cloglog) transformation 62 , a maximum of 10,000 iterations, a random seed, and fade-by-clamping. We optimized the regularization multiplier by using ENMTools to select the best model based on corrected Akaike Information Criterion (AICc) scores among models constructed using beta regularization multipliers of 1-10. The default multiplier (1) was optimal, and we used default settings for all remaining parameters.
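Conceptually, the multicollinearity screen computes pairwise Pearson correlations across raster cells and drops one layer from each pair exceeding r2 = 0.9. ENMTools was used in the study; the sketch below reproduces only the idea in plain numpy, with random arrays standing in for the clipped bioclim grids.

import numpy as np

rng = np.random.default_rng(0)
layers = {f"bio{i}": rng.normal(size=5000) for i in range(1, 20)}  # fake flattened grids

names = list(layers)
keep = set(names)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if a in keep and b in keep:
            r = np.corrcoef(layers[a], layers[b])[0, 1]
            if r ** 2 > 0.9:
                keep.discard(b)   # in practice, drop the less informative layer
print(sorted(keep))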
We used ArcGIS 10.1 (ESRI, Redlands, CA, USA) to visualize the distribution of climates suitable for E. titania by using a color ramp for values above the "minimum training presence" threshold. This threshold is appropriate because it sets the omission rate to zero, and none of our samples should be omitted because coordinates were collected in the field (not georeferenced).
"year": 2021,
"sha1": "9bfece5de83533b87040cb95db555f36909580d5",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-01555-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "57c5b7c54461a245d0ea4302a07c5596633d7c2a",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Improving Teaching and Learning Process through Establishment of Centre for Engineering Education Development - An initiative at KG College of Engineering and Technology
The evolution of Engineering Education (EE) in India has been drastic from the British era to the present day. EE in India started during the British era and focused mainly on civil engineering. In 1945, a Government Committee was appointed to suggest options for advanced technical education in India; it recommended the establishment of higher technical institutes, modelled on the Massachusetts Institute of Technology, in the four regions of India. This resulted in the setting up of five Indian Institutes of Technology, and the 20 Regional Engineering Colleges established just after independence were among the first milestones achieved by independent India. There are also a large number of State Government engineering colleges, often affiliated to a university and having limited or no autonomy over curriculum, examinations, degree granting, etc. The great demand for engineering and technical education has led to the mushrooming of a large number of private engineering colleges. Since the establishment of IIT Kharagpur in 1951, India has reached a total of 3,393 engineering colleges as of May 2012. In spite of this large number, as per the third edition of the National Employability Report, Engineering Graduates - 2014, only 18.33% of Indian engineers are employable and only about 18.09% actually get a job. This alarming survey indicates the need for a paradigm shift in today's school of engineering learning and training, so that we may not only target increased employability but also set our eyes on bringing research and innovation into Engineering Education. This paper presents the work conducted by the Centre for Engineering Education Development (CEED) at KG Reddy College of Engineering and Technology, which was established to continuously improve the teaching-learning process through the implementation of new pedagogies. The focus is on the introduction of active learning into lecture delivery, its impact on the students, the subsequent results and the future scope of work.
Keywords: Engineering Education, CEED, Active Learning
Introduction
"Everything else has accelerated but schools have not; so schools have become more disconnected. The walls between schools and the outside need to be more permeable." - Interview with Larry Rosenstock, CEO of High Tech High Network, San Diego, California

Education in most places today follows a dominant model that is receding more and more rapidly under the forces of new pedagogies and new change leadership, in an educational context that is overdue for transformation. CEED was established to bring about a radical change in the relationships between all the key players in learning: students, teachers, technologies, college cultures and assessments. This paper also describes how and why such change is occurring more organically than ever before.
Background
"Active Learning" is, in short, anything that students do in a classroom other than merely passively listening to an instructor's lecture. This includes everything from listening practices which help the students to absorb what they hear, to short writing exercises in which students react to lecture material, to complex group exercises in which students apply course material to "real life" situations and/or to new problems. We initially started a new initiative to improve student engagement in the classroomby introducing student presentations in the classroom. We received a very response/feedback from the students who have seen a considerable improvement in their speaking skill and the subject knowledge.
In order to learn and gain more exposure in the field of engineering education, we attended the 1st and 2nd International Conferences on Transformations in Engineering Education (ICTIEE) in 2014 and 2015, at Hubli and Bangalore respectively. Our major breakthrough came during the teacher certification program conducted by the Indo US Collaboration in Engineering Education (IUCEE) in association with the International Society for Engineering Education (IGIP), where we were introduced to different teaching and learning methodologies and were mentored by experts in the field of engineering education from around the world.
In order to share our learning from the above-mentioned programs with the entire faculty and to improve the teaching-learning process, we established a Centre for Engineering Education Development (CEED) at KG Reddy College of Engineering and Technology (KGRCET).
Faculty Training Workshop
As the first initiative under the newly established CEED, we designed a 2-day faculty development workshop built around three modules - preparation, delivery and assessment of the course - with the following 4 major contents:

A. Characteristics of 21st Century Learners

Today's digital kids think of information and communications technology (ICT) as something akin to oxygen. It's what they breathe, and it's how they live. They use ICT to meet, play, date, learn, acknowledge each other and form their personal identities. [1] Students born after the 1980's are known as millennial learners; they are also known as Generation Next, Generation Y and digital learners. These students have various distinguishing personality traits. By taking into account the various characteristics of the 21st century learner, we proposed the following techniques to improve student learning of the subject.
1) Make content relevant: Whenever possible connect content to the real world. This is possible for most topics in engineering, which is so closely linked to life.
2) Partner with Technology: Use different strategies/technology options whenever possible.

3) Make yourself accessible: Encourage them to communicate with you (e-mail is an excellent tool); support the shy/weaker students to develop confidence.
4) Regular Assessment:
Embed assessment in everyday instruction. This will ensure regular review and repetition leading to enhanced student performance and confidence.
5) Plan and Implement group activities:
Plan activities regularly which can be conducted in pairs, small groups and large groups.
6) Provide constructive feedback:
Positive reinforcement is said to be one of the most powerful tools for motivating students.

B. Teaching Philosophy Statement

The teaching philosophy statement is a statement of your personal ideology about what engineering education should be and what it should aim to achieve. It is personal, but needs to take into account the mission and goals of the institution, and must be regularly reviewed and updated. The statement should contain your goals for engineering education at the level you are teaching, the needs of the students, your role as a teacher, how instruction should be organized and delivered, your definition of student success, your proposal to achieve the goals set for yourself, and how you propose to continue working on your professional development. [2]
We proposed to the faculty the framework below to help them draft their teaching philosophy statements. The faculty were asked to divide their statement into 4 sections.
1) View of learning:
How do you conceptualize learning? What do we mean by learning and how does it occur? How do you facilitate this process in the classroom? How have your experiences influenced your view of learning?
2) View of Teaching: What is teaching? What is the professor's role in the classroom? How does teaching facilitate the learning process? How do you challenge students intellectually while supporting those with different learning styles and abilities? How have your experiences influenced your view of teaching?
3) Goals for Students: What do you expect your students to learn? What goals do you set for your classes and why? How do you work to help your students achieve your goals? What do you value in terms of student learning (e.g., writing, problem solving, critical thinking, and content knowledge)?

4) Implementation of Philosophy: How do the ideas you've discussed thus far influence what you do in your classroom? How do you operationalize and implement your philosophy of teaching? Reflect on your course materials, assignments, projects, and teaching style.
C. Course Description Document
The course description document is a short, informational statement about the approach and content of a course. Anyone browsing it should be able to determine very quickly what the course is about.
The course description document is divided into 8 sections.
1) Basic Details:
This section contains the title of the course, name of the instructor, number of class hours per week and the room number of the classroom if necessary.
2) Course Overview: This section contains a brief overview of the course, along with the portions tested in the assignments and mid-term examinations.
3) Course Objectives:
This section contains a list of objectives that the students will be able to achieve at the end of the course.
4) Course Outcomes:
This section contains a list of outcomes that the students are expected to achieve at the end of the course.
5) Detailed Schedule:
This section contains a detailed list of the topics being taught and their associated topic outcomes.
6) List of textbooks and references:
This section contains a list of textbooks and other references that can be referred to during the course.
7) Activities conducted in the class:
This section contains a list of all the activities that the faculty will be conducting during the class.
8) Grading Criteria:
This section contains the grading criteria and pattern for the evaluation of the course.

D. Lecture Delivery Structure

A well-known challenge in traditional lectures is that students' attention declines after the first several minutes of passive listening, and this issue needs to be addressed to ensure effective learning during the class. It has been suggested that engaging students in small active learning activities improves their attention span and ensures better learning, as shown in the figure below. Various activities, such as think-pair-share, TAPPS, group activities and online quizzes, can be conducted in the class. [4]
Taking all of the above into consideration, all the faculty were asked to follow the lecture structure below, in which a 50-minute lecture is divided into 5 segments. The faculty development workshop ended with a Q&A session during which the faculty clarified all their remaining doubts with us.
Follow up and feedback session
During the next 2 weeks, the members of CEED scheduled two individual meetings with each faculty member in the college to help them through the transformation. During the first meeting, feedback and suggestions were provided to help faculty complete the course description document. Hard copies of the course description documents were provided to the students before the start of the semester.
During the 2nd feedback meeting, each faculty member was asked to share any challenges they had been facing with the new lecture delivery structure. The concerns raised by a few faculty members were addressed and suggestions were provided to overcome them. This process ensured a smooth transition to the new teaching methods during the semester.
Results and Conclusion
After both feedback sessions with each faculty member individually, the students were asked to fill in a feedback form for all the faculty teaching them. The analysis showed that most of the faculty started following the newly proposed delivery structure in their classrooms. It also showed a great improvement in the level of student engagement and the quality of learning during class. The new methods also increased the students' average attendance percentage for the semester. The following questions are taken from the student online feedback.
The feedback questionnaire asked the following:

1) What percentage of faculty started the class with an ice-breaker?
2) What percentage of faculty started the class by stating the lecture objectives?
3) What percentage of faculty practiced active learning methods in the class?
4) What percentage of faculty ended the class by summarizing the class's objectives?
5) How well did the activities performed in the class help students to learn the subject better? (1-5 scale, 5 = excellent to 1 = no use)
6) Did the activities performed in the class make students more interested during the lecture? (1-5 scale, 5 = excellent to 1 = no use)

Responses to questions 1-4 were recorded as Always, Sometimes or Never.
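The percentages behind the feedback charts can be reproduced with a few lines of analysis. The snippet below uses invented responses purely for illustration; it is not the actual survey data.

from collections import Counter

# Hypothetical responses to one Always/Sometimes/Never question.
responses = ["Always"] * 62 + ["Sometimes"] * 30 + ["Never"] * 8
counts = Counter(responses)
for option in ("Always", "Sometimes", "Never"):
    print(f"{option}: {100 * counts[option] / len(responses):.1f}%")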
Future Scope of work
We have contacted two doctoral students in the department of engineering education at Purdue University who agreed to help us as advisors. They will be working with us on designing a more detailed evaluation system to understand the impact of the active learning methods on students learning.
Two members of the CEED team will also be experimenting with a blended course structure in the upcoming semester and evaluating its results. Upon receiving positive results, we plan to introduce it fully into the curriculum from the next academic year.
"year": 2016,
"sha1": "651552d9e003c4267cd73d763d12cbc47bc1f6ff",
"oa_license": null,
"oa_url": "https://doi.org/10.16920/jeet/2016/v0i0/85718",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "385061145978486ba038399fdb7e654c7ddd46c9",
"s2fieldsofstudy": [
"Engineering",
"Education"
],
"extfieldsofstudy": [
"Engineering"
]
} |
Natural Deep Eutectic Solvents in the Synthesis of Inorganic Nanoparticles
Natural deep eutectic solvents (NDESs), as a new type of green solvent, are used in many fields, including industry in extraction processes, medicine, pharmaceuticals, metallurgy, electrodeposition, separations, gas capture, biocatalysis and nanotechnology. Mainly due to their properties, such as simple preparation, environmental friendliness, biocompatibility and multifunctionality, they are being used in various fields of industry. This review aims to provide insight into the applications of natural deep eutectic solvents, specifically in nanotechnology processes. It focuses on the description of NDES and how their physicochemical properties are used to obtain functional nanomaterials, including metals, metal oxides and salts. It highlights how the use of NDESs to obtain a wide range of inorganic nanoparticles enables the elimination of disadvantages of traditional methods of obtaining them, including reducing energy consumption and functionalising nanoparticles in situ. In conclusion, recent advances and future directions in the development and applications of NDESs in nanotechnology are discussed with the aim of identifying unexplained scientific questions that can be investigated in the future.
Introduction
In recent years, technological developments have overcome many problems and the rate of evolution of new types of materials and composites confirms that it is possible to develop products dedicated to specific applications. Some of the interesting materials that have been gaining popularity in the last 10 years are deep eutectic solvents (DESs) [1,2]. The number of scientific papers on these materials increases every year, and there are more and more areas in which natural deep eutectic solvents (NDESs) are used [3]. The uses of NDESs in electronics, medicine and biochemical processes are the most obvious and best described in the literature [4]. Nanotechnology may be added to the research areas in which NDESs are becoming increasingly important as solvents, reaction media, stabilising materials, nanoparticle modifiers and many more [5].
The term deep eutectic solvent was introduced relatively recently, namely, in 2003 [6] ( Figure 1). As a new class of solvents, these solutions quickly began to gain popularity and, just five years later, the first paper on the use of DESs for nanoparticle synthesis was published [7]. Observing the potential for DESs, research was started to find combinations of components that would enable the production of more environmentally friendly solvents that would continue to offer the advantages of DESs. As a result of this research, a group of NDES comprising substances found in nature, specifically in primary metabolites, was identified in 2011 [8]. In the following years, new types of DESs were also developed, i.e., supramolecular deep eutectic solvents (SUPRADES) with cyclic oligosaccharides as acceptors of hydrogen bonds (HBAs) or hydrophobic deep eutectic solvents (HDESs) composed of hydrophobic compounds, such as tetrabutylammonium bromide, menthol, thymol and fatty acids as HBAs, along with long-chain alcohols and carboxylic acids as donors of hydrogen bonds (HBDs) [9]. NDESs, which can be composed of several compounds, allow for the preparation of nanoparticles with well-defined sizes and shapes [10,11]. When using them in the synthesis of nanomaterials, NDES can play the role of redox agent, stabiliser, supramolecular template, reaction environment or pH regulator, all with no need to introduce additional reactants. Selecting the composition of an NDES influences the viscosity, polarity, surface tension, hydrogen bonding and surface characteristics of nanomaterials, which directly affects the mass and energy transport properties of nanostructures [12]. Furthermore, DES components can modify nucleation and growth mechanisms by neutralising charges and passivating individual crystal surfaces, which dictates the growth along preferred crystallographic directions. Combining their great properties and broad perspectives, it is feasible to develop advanced nanostructures in an anhydrous medium [13,14].
However, before achieving this, it is necessary to first solve several challenges posed to researchers dealing with natural deep eutectic solvents. To make NDESs useful and applicable on a large scale, it is necessary to develop a universal nomenclature, develop low-viscosity NDESs, design methods for their preparation that take increased scale into account and develop NDESs that are insoluble in water [15]. The research on pro-ecological methods of obtaining NDESs at an increased scale is important for economic, ecological and technological reasons. Without the development of efficient methods for the synthesis of NDESs and the possibilities for their large-scale use, among others, the preparation of nanomaterials is limited.
This review presents what impact NDESs have on nanotechnology and how their use in nanomaterials may affect nanotechnology in the future. The development of DESs and NDESs has been analysed since 2011, which is when the concept of natural deep eutectic solvents was first introduced in nanotechnology. In particular, the possibility of obtaining inorganic nanoparticles using NDESs is highlighted, along with the relevance that the properties of NDESs may have on the features and applicability of nanomaterials [16].
Eutectic Mixtures
Similarly to ionic liquids (ILs), eutectic mixtures are classified as neoteric solvents. A eutectic mixture is "an approximately reversible, isothermal, non-reactive mixture of different components during cooling of a liquid system, resulting in a lower melting point of the system compared to the melting points of pure compounds" [17]. Such mixtures can consist of two components but can also be multi-component mixtures. However, the melting point of the mixture, which is described by determining the eutectic point (T eut), remains their most important property. At the eutectic point, the mixture reaches its minimum melting point. Below T eut, the entire mixture solidifies. The difference in the freezing point (T f) at the eutectic composition of a binary mixture compared with the freezing point of a theoretical ideal mixture, i.e., ∆T f, is related to the magnitude of the interaction between the components (Figure 2). The greater the interaction, the greater ∆T f will be [16]. Ionic liquids include liquid electrolytes, ionic alloys, molten and liquid salts, and ionic glasses. In contrast, deep eutectic solvents are composed of compounds in which the main interactions between the components are hydrogen bonds. In addition, DESs can be biodegradable and less hazardous, which, combined with their lower price, makes them an attractive alternative to ILs [18].
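For an ideal binary mixture, the liquidus curve of each component - and hence the theoretical eutectic point against which ∆T f is judged - follows the standard Schröder-van Laar relation (a textbook result, quoted here for orientation rather than taken from reference [17]):

ln x_i = -(∆H_m,i / R) (1/T - 1/T_m,i)

where x_i is the mole fraction of component i, and ∆H_m,i and T_m,i are its melting enthalpy and melting temperature. The intersection of the two ideal liquidus curves defines the predicted eutectic point; the measured depression below this ideal prediction, ∆T f, reflects hydrogen bonding and other non-ideal interactions between the components.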
Comparison of Deep Eutectic and Ionic Liquid Mixtures
Deep eutectic solvents belong to the group of eutectic mixtures. Due to their unique properties, DESs are a new class of sustainable solvents that are finding increasing applications in green solvent engineering. They are often compared closely with ionic liquids, but they constitute a separate group of solvents. In contrast to ionic liquids, DESs do not consist of ions. Among the advantages of DESs over ionic liquids are that they are easier to synthesise, are less expensive, generate no by-products and do not require purification [12]. In addition, due to their composition, they are often biodegradable and non-toxic. Characteristics common to DESs and ILs include high thermal stability, low volatility, low vapour pressure and a polar nature.
Deep eutectic solvents are divided into several types [19]. The first group of DESs (type I) includes mixtures of quaternary ammonium salts and metal chlorides, type II consists of a quaternary ammonium salt and a metal chloride hydrate, type III consists of a quaternary ammonium salt and a hydrogen bond donor compound (usually an organic molecular component, such as an amide, carboxylic acid or polyol), and type IV consists of a metal chloride hydrate and an HBD [20]. After describing a group of natural deep eutectic solvents, type V was created, in which the components are non-ionic organic compounds that are hydrogen bond acceptors and donors [18]. It is assumed that it is also possible to obtain a DES by mixing selected Brønsted-Lowry acids and bases, suggesting that further types of DESs can be determined [17]. The classification of deep eutectic solvents is shown in Table 1.
Natural Deep Eutectic Solvents
In 2011, Choi reported and described a new type of eutectic solvents, i.e., natural deep eutectic solvents. Choi's research involved elucidating the solubility of intracellular compounds that are insoluble in both the aqueous and lipid phases [21]. The observations were later confirmed by Dai et al. in 2013, where it was shown that there are mixtures in cells containing combinations of metabolites that play key roles in biological processes such as cryoprotection, drought resistance, germination and dehydration, and the discovered mixtures may be the third liquid phase in living organisms [22].
Natural deep eutectic solvents are a group of deep eutectic solvents that contain natural basic metabolites, including sugars, sugar alcohols, carboxylic acids, amino acids and amines. Similarly, to DESs, the melting point of NDESs is lower than the melting point of each component and the components are linked to each other by hydrogen bonds, with a minimum of one component acting as a hydrogen bond acceptor and another as a hydrogen bond donor. Hundreds of mixtures of NDESs consisting of natural products, such as organic acids, alcohols, sugars, cholines, urea and its derivatives and amino acids, were already obtained [20]. Crucially, these compounds are relatively cheap, especially compared with ionic liquid reactants. For example, the popular choline is a component of vitamin B and is currently produced in megatons per year as a dietary supplement for livestock, while urea, a popular choice of HBD, is commonly used in fertilisers. A summary of the NDES components that are most used in nanotechnology is shown in Figure 3.
Synthesis of Natural Deep Eutectic Solvents
One of the greatest advantages of natural deep eutectic solvents is the wasteless process of obtaining them. The process of synthesis of NDESs involves combining all components to form a homogeneous liquid. As a result, there is no need for additional solvents and the solvent formation process does not occur via a reaction with the formation of entirely new components. The reagents can be in either a liquid or solid phase, plus there is no need for prior purification. The combining of the components can, most commonly, occur via mechanical rubbing or by heating the mixture. The laboratory methods used for NDESs include vacuum evaporation, grinding, freeze-drying and mixing in the presence of an external energy source. A comparison of the methods for selecting the heating sources of reactants for obtaining NDESs is shown in Figure 4.
Synthesis of Natural Deep Eutectic Solvents
One of the greatest advantages of natural deep eutectic solvents is the wasteless process of obtaining them. The process of synthesis of NDESs involves combining all components to form a homogeneous liquid. As a result, there is no need for additional solvents and the solvent formation process does not occur via a reaction with the formation of entirely new components. The reagents can be in either a liquid or solid phase, plus there is no need for prior purification. The combining of the components can, most commonly, occur via mechanical rubbing or by heating the mixture. The laboratory methods used for NDESs include vacuum evaporation, grinding, freeze-drying and mixing in the presence of an external energy source. A comparison of the methods for selecting the heating sources of reactants for obtaining NDESs is shown in Figure 4.
Mechanical Methods
The mechanical method involves grinding the NDES components together. At the laboratory scale, mortars are used, and at a larger scale it is possible to use ball mills [26]. Through the presence of various mechanical forces, such as shear, fracture and vibration, an efficient bonding process of the NDES components can be achieved, particularly if all components are in a solid state of aggregation [27]. A mechanochemical synthesis of choline chloride (ChCl) with urea (the choline chloride-urea mixture known as reline) was described by Crawford et al. Using a twin-screw extrusion method, it became possible to combine the components while simultaneously grinding them. In addition, the method was a continuous process, giving a yield four orders of magnitude higher than that of batch methods [28].
Vacuum Evaporation and Lyophilisation
Vacuum evaporation is an alternative method of NDES synthesis. It involves dissolving the NDES components in a solvent (usually water) at room temperature. Once a solution is obtained, the water is evaporated under reduced pressure. The mixture, with its remaining water content, is placed in a desiccator until a constant product weight is achieved. The lyophilisation process likewise uses an addition of about 5% water by mass, in which the ingredients are dissolved. The systems are then frozen and freeze-dried to form a clear liquid [17].
Conventional Heating
Conventional heating is the most commonly used source of energy supply to a system for obtaining natural deep eutectic solvents. Reasons for using conventional heating include its availability, the gradual heating of the system and the ability to heat variable volumes of materials [29]. The conventional heating process involves mechanically or magnetically mixing the solid or liquid NDES components and heating the mixture to 40-100 °C until the compounds are completely dissolved and the NDES is formed [19].
Ultrasound
An effective way to supply energy to the system is to use ultrasound. The advantage of such a solution is that the time to obtain an NDES can be reduced from several hours to even tens of seconds. However, the use of ultrasound requires a semi-fluid environment in which sound waves can propagate. Therefore, this solution will work when at least one component is a liquid. An alternative solution may be to introduce an already prepared NDES obtained via conventional methods or after partial fragmentation of the components by grinding. A limitation may also be the occurrence of local overheating, which may cause partial disintegration of particularly temperature-sensitive raw materials. However, these limitations do not detract from the potential of the presented method [30,31].
Microwave Radiation
In recent years, there have been significant developments in the use of microwave radiation as an energy source in the synthesis of several compounds, both organic and inorganic. Furthermore, in the synthesis of natural deep eutectic solvents, methods using the microwave radiation field as an efficient energy source can be found.
As with ultrasound, microwave-assisted syntheses are faster and more efficient compared with conventional heating. One challenge with the use of microwaves is the need for an additional means of mixing the reactants, which is particularly important when combining and contacting the NDES components [1]. On the other hand, an advantage over the other methods is the possibility of increasing the scale while maintaining the efficiency of the energy input to the system.
Balaji et al. compared the preparation of natural deep eutectic solvents based on malic acid-citric acid-water and xylitol-malic acid-water, in each case using a molar ratio of components of 1:1:10. The study compared three different energy sources: controlled heating and stirring, ultrasound-assisted synthesis and synthesis conducted in a microwave radiation field. The process with conventional heating was carried out for 2 h, heating the reaction system to 50 °C while stirring at 220 rpm. By using an ultrasonic bath or conducting the process in a microwave radiation field, the processing time was reduced to 45 min. The temperature in the microwave reactor was 80 °C, operating at 850 W and 10 bar pressure. Taking into account the power of the equipment, the synthesis time and the volume of the solvents, the energy consumption per synthesis was determined. The energy consumption for the synthesis of the eutectic solvents was 0.014 kWh/cm3 for heating and stirring, 0.106 kWh/cm3 for microwave assistance and 0.006 kWh/cm3 for ultrasonic assistance. In terms of energy use, ultrasound-assisted synthesis is the most environmentally friendly, using approximately 57% less energy compared with the heating and stirring method [5]. Importantly, the authors observed no differences in the properties of the NDESs obtained under the various energy sources, confirming the possibility of using different methods in the synthesis of natural deep eutectic solvents.
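The reported energy figures follow directly from power × time / volume, and the 57% claim can be checked in two lines (taking the quoted kWh/cm3 values as given):

energy = {"heating_stirring": 0.014, "microwave": 0.106, "ultrasound": 0.006}  # kWh/cm^3
saving = 1 - energy["ultrasound"] / energy["heating_stirring"]
print(f"ultrasound vs heating/stirring: {saving:.0%} less energy")  # ~57%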
Properties and Characterisation of Natural Deep Eutectic Solvents
A variety of organic compounds can be selected as NDES components. The choice of ingredients will have a key impact not only on the physical properties of NDESs but also on their chemical properties and, consequently, will affect their applicability [32,33]. This review describes major properties that are particularly relevant to the properties of nanomaterials. In addition to the basic physicochemical properties, i.e., density, viscosity and melting point, in the preparation of nanomaterials and their modification, i.e., refractive index, reducing activity and stabilising properties are also important. In the following subsections, the main properties of natural eutectic solvents are described, highlighting their importance in nanotechnology. An extensive description of the properties of DESs can be found in the literature [16,17,25].
Viscosity and Density
The viscosity of NDESs is significantly higher than that of most typical molecular solvents, and the temperature-dependent change in viscosity varies in an Arrhenius manner, with large activation energies for viscous flow. According to hole theory, NDESs composed of ionic compounds contain voids that permit steady flow after melting. Hole theory and ionicity have been extensively described in the literature [16]. Holes have a random size and position, with the assumption that their average size is similar to that of the matching ion. The smaller the ion, the easier it is for it to move to a vacant spot, provided that a cavity of the correct size exists in the surrounding volume. As the temperature decreases, the average size of the holes decreases, which limits ion mobility; this is why the viscosity of NDESs tends to take on such high values [34]. It is assumed that the limiting factor in fluid viscosity is not the thermodynamics of hole formation but the probability of locating voids. This approach facilitates the modelling of fluid viscosity, as well as the prediction of NDES conductivity. The commonly used equations include the Stokes-Einstein equation, the Arrhenius equation and the Vogel-Fulcher-Tammann (VFT) equation [17]. Using selected models, the temperature dependence of viscosity can be determined. Increasing attention is being paid to these issues in the literature, enabling a better description and prediction of the behaviour of a substance [35]. Based on the conventional Arrhenius temperature dependence, the empirical Vogel-Fulcher-Tammann (VFT) equation was derived. The VFT equation is used to describe the viscosity of a fluid as a function of temperature and, in particular, its strong temperature dependence when approaching the glass transition. The Stokes-Einstein equation describes the relationship between the intrinsic diffusivity of a substance and its hydrodynamic radius in a viscous medium, taking into account the thermal energy required for particles to overcome the viscous force of the medium through which they move [36]. The ability to model the behaviour of NDESs was confirmed by Aroso et al. [37]. By analysing the flow curves for mixtures of choline chloride or betaine with sugars in the shear rate range from 0.1 to 100 s−1 at 283-373 K, the authors confirmed the Newtonian behaviour of the liquids and obtained a high degree of fit of the experimental data to the Arrhenius theoretical model. In practice, most NDESs exhibit a high viscosity, which is reflected in the "syrupy nature" of their flow. High viscosities imply limitations in practical use. For example, the viscosity of ethaline is 52 cP, compared with 1 cP for water (at 20 °C). It is possible to change the viscosity of these materials either by increasing the temperature of the reaction system or by modifying the composition of the NDES. However, both methods are not always feasible or cost-effective. A broad description of the physicochemical correlations with temperature, including viscosity, conductivity and density, as well as a description of hole theory, can be found in the literature [38][39][40][41]. These models aim to explain ion mobility in high-temperature molten salts and deep eutectic solvents and are based on the observation that the volume of these liquids increases after melting.
Crucially, however, this approach is suitable for predicting the behaviour of previously well-understood substances of known composition in which the density, viscosity and other basic properties have been experimentally determined [42].
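To make the temperature modelling concrete, the sketch below fits the VFT form η(T) = η0·exp(B/(T − T0)) to viscosity data with scipy. The data points are invented for illustration and are not measurements from the cited studies.

import numpy as np
from scipy.optimize import curve_fit

def vft(T, eta0, B, T0):
    # Vogel-Fulcher-Tammann viscosity model.
    return eta0 * np.exp(B / (T - T0))

T = np.array([283.0, 293.0, 303.0, 313.0, 323.0, 333.0])   # temperature, K
eta = np.array([180.0, 90.0, 52.0, 33.0, 22.0, 16.0])      # viscosity, cP (invented)

popt, _ = curve_fit(vft, T, eta, p0=(0.1, 800.0, 150.0), maxfev=10000)
print("eta0 = %.3g cP, B = %.3g K, T0 = %.3g K" % tuple(popt))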
Density provides information about intermolecular interactions in NDESs. As with viscosity, NDES mixtures typically exhibit a density higher than that of water (e.g., ethaline has a density of 1.14 g cm−3 and glyceline has a density of 1.19 g cm−3 at 20 °C). Reducing the density is possible by modifying the composition of the NDES, but this also changes its other physicochemical properties. Basaiahgari et al. measured the densities of DESs with ethylene, diethylene and triethylene glycol and glycerol as HBDs and benzylammonium chloride salt as the HBA. Their results showed that ethylene glycol (EG)-based DESs had a lower density compared with glycerol-based DESs. The increase in density after replacing ethylene glycol with diethylene glycol, triethylene glycol and glycerol implies that increasing the number of -OH functional groups on the HBD results in the formation of more H-bonds, which presumably reduces the available volume [43]. Modification of the density or viscosity of NDESs can occur not only by replacing one component with another but also by changing the molar ratio of HBAs to HBDs. Shafie et al. presented the densities of ChCl to citric-acid-based DESs at different molar ratios. They found that as the ratio of ChCl to citric acid increases, the density decreases [44].
pH of Natural Deep Eutectic Solvents
NDESs, which predominantly belong to DES type III, consist of a mixture of hydrogen bond acceptors and donors. This translates into the variable pH values that NDESs can adopt. The composition of an NDES, namely the proportions and types of its components, is key to changes in pH. The acidity of the mixture is relevant particularly in the preparation of nanoparticles, including for the kinetics of the chemical reactions taking place [17]. Metal ions form complexes with the components of NDESs. In addition to the influence of H+ and OH− ions, the complex forms that arise during the processes are of dominant importance for processes in NDES systems. The study of metal ion speciation in DESs is still a subject of research. The complexity of the problem, which is due to the presence of various Lewis anions with different alkalinity, as well as the varying compositions of DESs, makes it challenging to understand in detail. However, understanding speciation in DESs is extremely important when trying to determine the mechanisms of nucleation and growth of inorganic nanoparticles [16].
Refractive Index
The refractive index (RI), as a dimensionless property of a material, is particularly important in the context of obtaining suspensions of metallic and non-metallic nanoparticles. The refractive index determines how much the speed of light changes when passing through a medium relative to the speed of light in a vacuum. The RI, which is specific to each solvent, must be taken into account when determining the size of nanoparticles using dynamic light scattering (DLS). It is therefore a useful tool that complements measurements of physical properties. Its values vary depending on the type of components used, as well as their molar ratios in the NDES [17]. The refractive index ranges from approx. 1.33 to approx. 1.59, depending on the composition and water content [45,46]. For example, the RI values for ethaline (choline chloride with ethylene glycol, 1:2) and glyceline (choline chloride with glycerol, 1:2) at 298.15 K are 1.468 and 1.487, respectively [47].
Reducing Properties
In contrast to organic solvents or water, NDESs, due to their composition, often exhibit strong redox properties. This property can be particularly useful in processes used to obtain nanoparticles. The redox potential of such materials can be determined using cyclic voltammetry (CV), among other techniques [20,48]. CV is a widely used electrochemical technique in which cyclic voltammograms with two characteristic peaks are recorded [49]. The diagrams represent a reversible redox process (Figure 5). The anodic current peak corresponds to the anodic oxidation of the analyte, and the cathodic current peak is associated with the reduction of the oxidation product, observed in the return cycle after a change in electrode polarity [50]. The estimation of the standard electrochemical potentials of oxidation and reduction is one of the most widespread applications of CV, making the technique popular in NDES analysis [51].
Figure 5. A schematic diagram of a cyclic voltammogram according to the IUPAC convention: peak cathodic potential (Epc), peak anodic potential (Epa), difference between the cathodic current and the resting current (ipc) and difference between the anodic current and the resting current (ipa) [52].
Alnashef et al. investigated the electrochemical behaviour of iron(III) acetylacetonate in six different deep eutectic solvents formed via hydrogen bonding between ammonium and phosphonium salts and glycerol, ethylene glycol or triethylene glycol. The authors used cyclic voltammetry to determine the kinetic and mass transport properties of the electrolytes, including the diffusion coefficient of the iron salts and the electron transfer rate constants, which allow the redox properties of the components to be described for both oxidation and reduction processes [53]. Based on cyclic voltammetry curves, Xu et al. showed that the diffusion coefficient of iron ions in the ethaline DES system was higher than that in the reline DES system. These results may be useful in evaluating the reducing potential of natural deep eutectic solvents in the reduction of metal ions to nanoparticles [54]. Based on cyclic voltammetry and electrochemical processes, Baby et al. investigated the effect of different deep eutectic solvents on the synthesis of MgFe2O4 nanoparticles, which were then used for the electrochemical determination of nitrofurantoin and 4-nitrophenol. Depending on the NDES composition used, different reduction peaks were obtained, corresponding to different redox properties [55].
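For context, diffusion coefficients such as those discussed in [53,54] are most often extracted from CV peak currents via the Randles-Sevcik equation, which for a reversible couple at 25 °C reads i_p = 2.69×10⁵ n^(3/2) A C (Dv)^(1/2). The following Python sketch is a generic illustration with assumed inputs (electrode area, concentration, scan rate, peak current), not a reproduction of the cited measurements:

# Randles-Sevcik for a reversible couple at 25 degrees C:
#   i_p = 2.69e5 * n^(3/2) * A * C * sqrt(D * v)
# with i_p in A, A in cm^2, C in mol/cm^3, D in cm^2/s and v in V/s.

def diffusion_coefficient(i_peak, n_electrons, area_cm2, conc_mol_cm3, scan_rate):
    """Solve the Randles-Sevcik equation for D (cm^2/s)."""
    k = 2.69e5 * n_electrons ** 1.5 * area_cm2 * conc_mol_cm3
    return (i_peak / k) ** 2 / scan_rate

# Assumed example: 10 mM Fe(III) probe, 3 mm glassy-carbon disc, 50 mV/s.
i_p = 2.0e-5   # A, anodic peak current read from the voltammogram
A = 0.0707     # cm^2, electrode area of a 3 mm diameter disc
C = 1.0e-5     # mol/cm^3 (10 mM)
v = 0.05       # V/s
print(f"D = {diffusion_coefficient(i_p, 1, A, C, v):.2e} cm^2/s")  # ~2e-7

A value of the order of 10⁻⁷ cm² s⁻¹ is typical of viscous DES media, and the faster iron-ion diffusion in ethaline than in reline reported in [54] is consistent with ethaline's lower viscosity.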
Complexing and Stabilising Properties
Surface tension determines the tendency of a material towards a minimum surface area. Its effects are most pronounced in liquids, where they arise from intermolecular interactions; in nanoparticle suspensions, this translates into stability and reduced agglomeration. Surface tension measurements can show which compounds act as surfactants and reduce cohesion forces, allowing NDESs to impart stabilising properties to the forming nanoparticles [17]. Gajardo-Parra et al. measured the surface tension of ChCl-based DESs with levulinic acid, phenol and ethylene glycol. The authors found that the surface tension of ethaline was 45.66 mN/m at 25 °C, lower than that of pure ethylene glycol (48.90 mN/m), confirming that ChCl acts as a surfactant and reduces the cohesion forces at the surface of ethaline [56].
An alternative approach is to select the composition of the NDES in such a way that the content of compounds exhibiting stabilising properties is maximised. However, it should be borne in mind that the stabilisation of nanoparticles can proceed along several pathways, including steric stabilisation, using large, highly extended stabilisers, and charge stabilisation via the deposition of compounds with selected surface groups, e.g., strongly hydrophilic/hydrophobic or positively/negatively charged ones [57]. In the preparation of inorganic nanoparticles, compounds that are used as stabilisers and can also serve as components of NDESs include carboxylic acids (citric acid, maleic acid, ascorbic acid) and sugars (glucose, sucrose, galactose, mannose).
Thermal Properties
Knowing the eutectic point allows an NDES to be prepared at the composition with the lowest melting point. This enables operation at lower temperatures and over a wider temperature range, affecting the viscosity of the NDES and its other physicochemical parameters. For example, the eutectic point of reline occurs at a choline chloride-urea molar ratio of 1:2. Unfortunately, the eutectic composition of individual DESs must be determined experimentally, and thus the number of papers describing this issue is limited [17].
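A rough theoretical starting point nevertheless exists: assuming ideal behaviour, each liquidus branch follows the Schröder-van Laar equation, ln x_i = −(ΔH_fus,i/R)(1/T − 1/T_m,i), and the eutectic lies where the two branches cross. The Python sketch below is a minimal illustration with assumed melting data for a ChCl/urea-like pair; because DESs deviate strongly from ideality (the very origin of the "deep" melting-point depression), such an estimate only brackets the composition and markedly overestimates the eutectic temperature:

import math

R = 8.314  # J/(mol K)

def liquidus_T(x, T_m, dH_fus):
    """Ideal (Schroeder-van Laar) liquidus temperature at mole fraction x."""
    return 1.0 / (1.0 / T_m - R * math.log(x) / dH_fus)

# Assumed pure-component data: (melting point K, enthalpy of fusion J/mol).
chcl = (575.0, 4300.0)   # choline chloride (decomposes near melting; estimate)
urea = (406.0, 14600.0)  # urea

# Scan composition; the eutectic is where the two liquidus branches meet.
x_eu, T_eu = min(
    ((x, liquidus_T(x, *chcl)) for x in (i / 1000 for i in range(1, 1000))),
    key=lambda p: abs(p[1] - liquidus_T(1 - p[0], *urea)),
)
print(f"ideal eutectic estimate: x_ChCl = {x_eu:.2f}, T = {T_eu:.0f} K")

The experimental reline eutectic (1:2 ChCl:urea, i.e., x_ChCl ≈ 0.33, melting near room temperature) sits far below this ideal estimate, illustrating why the eutectic compositions of DESs still have to be mapped experimentally.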
Thermogravimetric analysis (TGA) enables the detection of deviations and changes in behaviour due to events such as phase changes and the addition of further reactants, as well as the analysis of product stability over time. TGA is used with DESs to obtain fundamental information about their thermal behaviour in both the crystalline and glass transition states. Knowing the behaviour of NDES systems as a function of temperature and time provides insight into the many physicochemical processes that occur during absorption, desorption, sublimation, evaporation, decomposition, oxidation and reduction, dissolution, etc. [20,48]. An example of the use of thermogravimetric analysis is obtaining thermal decomposition profiles of synthesised solvents, which provide evidence of interactions between precursors and of changes occurring in processes carried out at temperatures above 150 °C, as occurs, for example, in the preparation of metal oxides.
A complementary thermal method is differential scanning calorimetry (DSC), a thermoanalytical technique that measures the amount of heat required to produce an observed temperature change in a sample. This analysis is used to determine melting points, enthalpies of formation and melting, heat capacity and thermal stability. Changes in these parameters can be compared between the NDES components and the resulting mixtures. The ability to capture phase transitions makes DSC particularly useful for detecting anomalies in DESs and for determining the post-process behaviour of NDESs, including the possibility of recycling these compounds back into the process [17].
Natural Deep Eutectic Solvents Applications in Nanotechnology
Since 2003, i.e., since Abbott's presentation of the new material group of deep eutectic solvents, DESs have become a convenient and environmentally friendly alternative to conventional aqueous solvents or ionic liquids [6]. Through the additional properties that NDESs possess, their applications are constantly expanding. Currently, the main research efforts are concentrated in areas such as biomedicine, metal treatment and the extraction of a series of compounds, metallurgy, electrodeposition, separations, gas capture and biocatalysis [15,20,58-60]. Nanotechnology is a new field in which NDESs may find an application (Figure 6). Five years after the announcement that DESs exist, Sun et al. presented the first study describing a method for obtaining NDES-assisted gold nanoparticles based on choline chloride and urea [7]. DESs have so far found applications as reaction media for the synthesis of nanomaterials, for the electrodeposition of nanomaterials, as dispersing agents, as nanoparticles or modifiers affecting the morphology or chemical composition of nanoparticles, and as substances affecting the nucleation and growth of nanostructures. Among the inorganic materials obtained in the presence of NDESs, metal oxides and metals, especially gold nanoparticles, are most commonly mentioned. Descriptions of methods for obtaining other metal nanoparticles can also be found, but these processes are more problematic to carry out.
Reaction Media for the Synthesis of Nanomaterials
Natural deep eutectic solvents, as substances of organic origin that remain liquid at room temperature, are excellent media in which reactions can be carried out. Their main advantage is their liquid form without the use of water or simple organic solvents [20,44,56,58]. This property is typically exploited in processes to obtain suspensions of metal nanoparticles. Additionally, the choice of NDES allows specific parameters (viscosity, pH, refractive index, etc.) to be tuned, which can be crucial depending on the future applications of the material. Especially at the beginning of their application in nanotechnology, NDESs were used as reaction media in electrochemical processes to obtain a range of nanomaterials [60,61].
One of the main properties attributed to NDESs is their classification as green solvents. NDESs are often used as solvents and extraction media, but in some cases, they can also be used as reactants to produce the intended nanoparticles [58]. Among the applications of NDESs is their use as reaction media for the preparation of nanoparticles, e.g., calcium phosphate, hydroxyapatite and fluorapatite. Particularly in the development of biomedical materials, conducting reactions in NDESs is well suited to the purpose. Based on a choline chloride/urea mixture, the preparation of calcium phosphate or hydroxyapatite, among others, enabled control of particle size and yielded elementally and structurally highly pure crystalline products with good biocompatibility and mineralisation ability. An important benefit of using NDESs is that they can be recovered and reused in the process; after synthesis, the DESs were recovered and reused for the synthesis of hydroxyapatite nanoparticles [62]. In another study, Anicai et al. used ChCl/EG and ChCl/urea as the reaction medium to obtain TiO2 nanoparticles; with such a reaction system, a TiO2 nanopowder could be obtained through electrochemical synthesis.
NDESs have also found applications in the recently developed ionothermal synthesis, in which the DES or ionic liquid is the solvent and template, i.e., the structure-directing agent. The possibility of using these compounds stems from their low vapour pressure, which enables the ionothermal process to be carried out in a low-pressure environment, i.e., at ambient pressure. Eliminating the need for autoclaves increases safety by simplifying the synthesis process. Moreover, because their ionic character can be tuned, NDESs can act in a dual role, both as a reaction environment and as a surface modifier of nanomaterials, serving as the source of the required functional groups. This enables an 'in situ' modified material to be obtained with the required surface and desired functional properties. The number of methods based on the ionothermal process using NDESs to produce nanoparticles is increasing every year. In their work, Xiong et al. proposed an ionothermal method using ChCl/urea as the reaction medium to prepare Fe2O3 nanoparticles, which confirmed the effectiveness of using NDESs as a reaction medium [63].
Reducers and Stabilisers for Nanomaterials
Depending on the compounds included in their composition, NDESs can direct the morphology of the nanomaterials produced. Influencing the size and shape of the nanoparticles allows materials with the required properties to be obtained and increases their applicability. Changes can occur both during the production reaction and afterwards, during dispersion in colloidal applications, through contact between the forming nanoparticles and specific functional groups of the NDES compounds. NDES parameters, such as viscosity and surface tension, prevent aggregation and agglomeration of the nanoparticles, so that their stability in the NDES is maintained [64].
Oh et al. used DESs composed of ChCl and malonic acid both as a reaction medium and as a structure-directing agent for the synthesis of gold nanoparticles. The composition of the NDES allowed nanoparticles with a well-defined diameter to be obtained. The synthesis was not supported by any surfactants or polymers, suggesting that the DES plays an essential role as a structure-directing reagent and particle stabiliser [65].
Biocompatibility of NDESs-Applications in Medicine
As the name suggests, NDESs are based on compounds of natural origin, mainly primary plant metabolites, and can be used in fields such as medicine, cosmetology and food production [59,66]. Even when prepared using industrial methods, they show high biocompatibility and are biodegradable and non-toxic. As a result, they can be successful green replacements for compounds that are currently used, e.g., stabilisers such as PVP and PVA, reductants such as sodium borohydride or hydrazine, and certain organic solvents.
In 2015, the concept of therapeutic deep eutectic solvents (THEDESs) was introduced; these are compounds that can be successfully used as auxiliary materials in the delivery of drugs and active substances [67]. For example, Makkliang et al. used choline chloride-propylene glycol (1:2) at 56.1 °C for a cellulolytic enzyme reaction in which daidzein and genistein were extracted and completely converted to their aglycones. Biocompatible DESs enhance the activity of the cellulolytic enzyme, producing compounds with higher bioavailability [68]. Zhang prepared magnetic molecularly imprinted nanoparticles with a deep eutectic solvent for medical applications, to separate transferrin in human serum. The authors confirmed that the use of the DES provides an efficient and biocompatible method for protein isolation and purification [69].
Examples of Preparation of Nanoparticles Using NDESs
Since the first description of natural deep eutectic solvents, the number of their applications in nanotechnology has increased year by year. These materials can be used to prepare well-defined nanoparticles of controlled shape and size in various forms, including films and coatings, colloidal suspensions and powder materials.
On the one hand, the increased viscosity of NDESs compared with water or standard organic solvents is a disadvantage. On the other hand, it promotes the formation of nanoparticle dispersions, preventing rapid growth to a macrocrystalline form while maintaining their stability. The mechanisms of nucleation and growth of nanoparticles are strongly dependent on the composition of the DES. The choice of components influences the production of a material with the desired reducing potential, and the presence of selected groups and compounds in the NDES system enables the preferential growth of crystals [11].
The literature describes processes for the preparation and modification of inorganic nanoparticles, in which NDESs are the raw materials, reaction medium, reducing agent, stabiliser or surface modifier. Methods for the synthesis of metal nanoparticles, metal oxides, sulphides and salts that can successfully compete with their counterparts obtained by conventional methods are described.
Examples of Obtaining Metal Nanoparticles
Methods for the preparation of gold, silver, copper, nickel and platinum nanoparticles are described in the literature. In these preparation processes, NDESs act as, among others, the reaction medium, stabilising agent, reductant or surface modifier of the nanoparticles. Table 2 lists nanoparticles of selected metals obtained in natural deep eutectic solvents, along with the roles played by the NDESs.
Gold nanoparticles were the earliest nanoparticles to be obtained using NDESs. In 2008, Liao et al. reported the simple synthesis of Au NPs using a ChCl/urea DES as a stabilising agent. Importantly, the addition of water made it possible to obtain nanoparticles with variable shapes: the authors obtained star-shaped Au nanoparticles at a water content of 5000 ppm, Au nanorods at a water content of 10,000 ppm and snowflake-shaped particles in an anhydrous medium [7]. In contrast, Chirea et al. compared two different DESs, i.e., choline chloride-ethylene glycol (1:2) and choline chloride-urea (1:2), as stabilising agents for gold nanotubes formed by the direct reduction of HAuCl4 using NaBH4 [70]. Crescenzo et al. used NDESs based on betaine (N,N,N-trimethylglycine) and oxalic acid as both stabilisers and reductants in the synthesis of Au NPs; it was only necessary to heat the system from 30 to 80 °C. In contrast, with other carboxylic acids (glycolic and phenylacetic acids), an additional reductant was necessary, but no elevated temperatures were required. This demonstrates that both the composition and the process conditions can be controlled to obtain the desired nanoparticle structures [71]. The significant effect of temperature was also confirmed by Oh et al., who described how gold nanoparticles were prepared using choline chloride and malonic acid, which acted as a reaction medium, a stabilising agent and a growth-directing agent for the structure: spherical Au NPs with a size of nearly 100 nm were synthesised at 70 °C, while lattice-like nanostructures were observed when the synthesis temperature was 90 °C [65].
Silver nanoparticles were initially difficult to obtain owing to the widespread presence of chloride in NDESs. At present, however, they are successfully synthesised using NDESs. An example is the preparation of silver nanoparticles with a narrow size distribution of about 4.5 nm, prepared even in reline using laser ablation of a metallic silver wafer [72]. Adhikari et al. developed an unusual synthesis of Ag NPs using AgCl as the silver precursor: nanoporous Ag films on copper alloy substrates were obtained via a galvanic exchange reaction from a ChCl-EG (1:2) solvent containing AgCl, and the morphology of the nanoporous films was characterised [73]. In another study, Adhikari et al. proved the feasibility of obtaining Ag and Au metal nanoparticles at high metal concentrations (400 and 1000 mM, respectively) [3].
Examples of Obtaining Metal Oxide Nanoparticles
The conventional preparation of inorganic materials, including metal oxide nanoparticles, is often carried out in water or organic solvents, in many cases with a thermal step required for crystallisation and nanostructure formation, such as hydrothermal, ionothermal or calcination processes [80]. It is possible to obtain nanoparticles of ZnO [81], SiO2 [82], Fe3O4 [83], Mn3O4 [84], SnO2 [85] and many others. Table 3 summarises the preparation of selected metal oxide nanoparticles in the presence of NDESs.
Among the methods for obtaining metal oxide nanoparticles is the antisolvent approach: after dissolving macrometric oxides in an NDES, a new solvent is added to precipitate new nanometric metal oxide forms. Dong et al. developed a method in which they dissolved ZnO in the choline chloride/urea system and subsequently precipitated ZnO nanostructures of various shapes by adding water or ethanol [86]. A different method was presented by Chen et al., who, using choline chloride and succinic acid (1:1), applied anodic dissolution to titanium and obtained titanium oxide nanoparticles [87]. Using a choline dihydrogen citrate-oxalic acid system, titanium oxide nanoparticles were also obtained, but with a different structure [88].
An alternative approach is a method in which nanoparticle precursors are precipitated in an NDES environment and then calcined. Söldner et al. developed a method for the solid-state preparation of phase-pure magnesium ferrite nanoparticles at 500 °C, verifying the effect of five different DESs on the formation, structure and composition of the magnesium ferrite nanoparticles [89]. The process involves the formation of magnesium ferrite structures in selected DESs, followed by calcination, which allows regular structures to form. The preparation of ferrite nanoparticles was also described by Das et al., who obtained MgFe2O4, ZnFe2O4, CoFe2O4 and NiFe2O4 nanoparticles. In the first step, iron oxides were dissolved in ChCl/maleic acid together with the oxides of the other metal. After 2 h of stirring, calcination (at 400, 500 or 600 °C) yielded final nanoparticles with sizes of 120-480 nm, depending on the process conditions [11].
A large proportion of the methods in which metal oxide nanoparticles are obtained using NDESs are based on ionothermal processes, including those for MnOx [90], FeCo LDH [29], Fe2O3 [63] and Fe3O4 [91]. Hammond et al. observed that, in the ionothermal process, reline acts as a catalyst combining reactive components in the presence of water, i.e., Ce(NO3)3·6H2O or CeCl3, enabling the preparation of CeO2 nanoparticles [92].
Using an NDES system with an added reductant, it is possible to obtain nanoparticles, including metal oxide nanoparticles. Using choline chloride-ethylene glycol as the solvent and hydrazine hydrate as the reducing agent, Balaji et al. obtained nanoparticles of ZrOx, MnOx and CuO, among others. Importantly, the use of a reducing agent made it possible to limit the use of high temperatures, as metal oxide nanoparticles were obtained at temperatures as low as 50 °C; however, it was necessary to extend the reaction time to 2 days [93].

Table 3. Metal oxide nanoparticles obtained using NDESs, with a description of the preparation method and the functions the NDESs perform in the system.
Natural deep eutectic solvents can also be used to obtain nanoparticles of metal sulphides and salts (Table 4). An example of the preparation of sulphides was presented by Mohammadpour et al., who successfully obtained MoS2 nanosheets. For this purpose, they used different NDES mixtures of sugars (glucose, fructose, sucrose), choline chloride and water [102]. Moreover, sulphide nanoparticles can be obtained via the ionothermal method, as confirmed by Chen et al. [87], who prepared thin films based on lead sulphide (PbS) nanotubes using a ChCl/urea DES. A study by Zhang et al. presented a versatile method for obtaining a range of sulphides (Sb2S3, Bi2S3, PbS, CuS, Ag2S, ZnS and CdS) using a mixture of choline chloride and thioacetamide acting as solvent, reactant and stabiliser. In the two-step process, a metal-DES complex was formed by adding suitable metal salts to the DES solution, and the complex was then decomposed into metal sulphides upon heating [103].
Phosphates are examples of salt nanoparticles that can be obtained in an NDES environment; using NDESs further increases their biocompatibility, which is particularly beneficial. Liu et al. developed a method to obtain zinc phosphate nanoparticles in the presence of choline chloride with imidazolidone (1:2) [104]. Based on the ionothermal method, Liu et al. also obtained zirconium phosphate; the DES consisted of tetramethylammonium chloride with urea or oxalic acid dihydrate [105]. Hydroxyapatite structures with enhanced biocompatibility were obtained using choline chloride-urea as a solvent [106]. The eutectic choline chloride-urea mixtures used as solvents controlled the particle size and led to elementally and structurally highly pure crystalline products with good biocompatibility, while maintaining solvent recoverability.
Current Limitations of Application of NDESs in Nanotechnology
This article presents applications in which natural deep eutectic solvents have performed well. Over the 20 years since these materials were first described, their importance and applicability have increased significantly [64]. However, there are still many issues that researchers will need to solve in the near future. Despite their great potential, natural deep eutectic solvents have some limitations that, if left unresolved, will prevent their widespread use. The main disadvantages of NDESs include the difficulty of significantly scaling up their preparation, the viscosity and density of the materials, and the difficulty of predicting their properties using theoretical methods [112].
Some difficulties arise when transferring the preparation of NDESs from the laboratory scale to the technical and industrial scale. Because solid compounds must be mixed, the mass and energy transport correlations do not scale proportionally. Owing to the bonding processes that take place between hydrogen bond donors and acceptors, a high degree of mixing and efficient energy exchange must be achieved in the system. Possible solutions include streamlining the process or using alternative sources of heating and mixing for the reaction systems. Adhikari et al. presented a flow-through method for obtaining gold and silver nanoparticles using NDESs based on dimethylammonium nitrate and a polyol. The authors simultaneously addressed two challenges, the preparation of NDESs and the preparation of metal nanoparticles at high concentrations: they obtained Au and Ag suspensions with metal concentrations of 400 and 1000 mM, respectively, and then converted this approach into an automated millifluidic continuous flow reaction format [3].
Despite relying on basic, widely available ingredients, the cost of producing NDESs is relatively high compared with water or alternative organic solvents. It is therefore necessary to verify the recyclability of NDESs after nanoparticle preparation processes so that they can be reused. Several studies have reported successfully recycling DESs after their use as dispersion media and in the production of nanoparticles and nanocomposites [113,114]. Yan et al. used an NDES consisting of choline chloride and oxalic acid to pretreat maize cob. With oxalic acid regeneration and lignin removal, even after ten recycling cycles, the DES pretreatment did not significantly reduce glucan digestibility and glucose recovery (66.23% and 64.43%, respectively) compared with the original DES pretreatment (72.83% and 68.83%, respectively) [115].
Another barrier to the widespread use of NDESs is the practically unlimited number of possible solvent compositions. The possibility of combining compounds with such different properties has many advantages, but determining at what molar ratio a deep eutectic solvent can be obtained is currently only possible by experimental means. It is therefore necessary to develop models that can predict the behaviour of mixtures and verify whether such systems can be combined. The development of predictive models is the key to realising the full potential of NDESs. According to Hansen et al., this can be achieved through parallel efforts, including the following [17]:
(a) Experiments that explore potential interrelationships between the properties of NDES components;
(b) Carefully collecting, cataloguing and publishing all possible/practical physicochemical properties, especially for commonly studied compounds and constituents of DESs (using reproducible synthesis protocols, carefully controlled storage of samples, detailed treatment and pretreatment methods, etc.);
(c) Processing such aggregated data and applying advanced computational techniques to find new correlations or empirical fits;
(d) Undertaking much more in-depth studies on the coupling of liquid-phase dynamics to physical properties;
(e) Taking a more detailed approach to understanding the nature and behaviour of esoteric hydrogen bond types/networks, particularly regarding the unusual behaviour of DESs.
Conclusions
In this review, the potential role of natural deep eutectic solvents in nanotechnology is discussed, particularly in preparation processes for inorganic nanoparticles. Owing to the demand for more environmentally friendly technologies, developing alternative methods of nanomaterial synthesis that reduce energy consumption and replace conventional reactants with less toxic ones has become a priority goal for the scientific and production communities. Natural deep eutectic solvents can become a valuable alternative, as they are easy to prepare, can have reducing and stabilising properties, are capable of modifying the properties of the nanoparticles obtained, and provide a reaction medium. There has been a steady increase in publications describing the use of deep eutectic solvents, including in nanotechnology. This article has presented the physicochemical properties of NDESs that are exploited to obtain functional nanomaterials, including metals, metal oxides and salts, and has described examples of NDES applications and the functions they perform in obtaining nanoparticles. An overview is also included of the issues and challenges that remain unresolved.
Funding: This work is the result of the research project no. 2021/05/X/ST5/00541/Miniatura5 funded by the National Science Centre.
Conflicts of Interest:
The author reports no conflicts of interest.
Thematic Content Analysis of Postgraduate Dissertations on Technological Pedagogical Content Knowledge: The Case of Turkey
Design/Methodology/Approach: The study was conducted with the thematic content analysis method. The data were obtained from postgraduate dissertations published between 2009 and 2019 through a review of the National Thesis Center website of the Higher Education Board (YÖK). The review yielded a total of 101 postgraduate dissertations on TPACK, 26 of which are doctoral and 75 of which are master's theses. The dissertations were analyzed using a matrix. Descriptive and content analysis methods were applied to reveal the aim, subject area, method, sample, data collection tools, results and recommendations of each dissertation.
INTRODUCTION
With the advancement of technology, TPACK has become a focus of study for teacher educators and researchers in many countries in recent years (American Association of Colleges for Teacher Education [AACTE], 2008). TPACK is defined as a teacher's knowledge of integrating technology with pedagogical techniques in teaching a topic and of the effectiveness of presentations made with technological tools on students' learning (Graham, Burgoyne, Cantrell, Smith, St. Clair & Harris, 2009). The TPACK framework was defined by Koehler and Mishra (2005) and expanded by incorporating Technological Knowledge (TK) into the concept of Pedagogical Content Knowledge (PCK) introduced by Shulman (1987) for teacher competencies. PCK is considered to be a unique feature that characterizes the teaching profession: teachers can integrate appropriate pedagogical approaches into their content knowledge, and students can thereby better understand the topic in question (Voogt, Fisser, Pareja Roblin, Tondeur, & van Braak, 2013). Shulman (1987) stated that teacher competencies should include the titles of "content knowledge, pedagogical knowledge, pedagogical content knowledge, curriculum knowledge, learner characteristics knowledge, educational context knowledge, educational outcomes, goals, values, philosophical and historical foundations". Koehler (2012) argued that Shulman did not emphasize technology in his PCK model and did not associate technology with content knowledge (CK) and pedagogical knowledge (PK) because of the limited technological materials in classrooms at the time, such as blackboards, overhead projectors, typewriters, models and periodic tables; today, however, the integration of technology into classrooms is a natural process thanks to equipment such as computers, projectors, large digital screens and software (Wang, Schmidt-Crawford & Jin, 2018).
As far as the existing literature is concerned, Koehler and Mishra (2005) cannot be said to be the first to use the term TPACK. Rather, it was first used by Pierson (2001) to describe the integration of technology into a teacher's classroom. Other researchers also used similar terms, such as "PCK related to Information and Communication Technology (ICT)" (Angeli & Valanides, 2005) or "Technology-Enhanced PCK" (Niess, 2005) (Voogt et al., 2013; Yigit, 2014). In addition, these researchers examined the development of the technological, pedagogical and content knowledge of teachers and teacher candidates in both in-service and pre-service education, using a framework similar to the TPACK framework (Yiğit, 2014). TPACK is a model that embraces both the relationships and the interactions of the content knowledge, pedagogical knowledge and technological knowledge that teachers are supposed to have (Abbitt, 2011). TPACK and the types of knowledge it interacts with are shown in Figure 1.
Figure 1. TPACK components
As seen in Figure 1, TPACK involves presenting new concepts through different teaching styles enabled by technology, rather than simply adding technology to the field of teaching. With respect to the teacher, it can be defined as having technological knowledge, using educational technologies and integrating these technologies into the classroom environment (Koehler & Mishra, 2008).
As can be understood from the explanations above, teachers must first have an effective TPACK in order to be able to integrate technology into their lessons. Due to this necessity, it can be said that studies carried out within the TPACK framework have gained significant momentum in recent years. In Table 1, studies related to TPACK are briefly summarized.

Table 1. Summary of studies related to TPACK
Studies on teacher candidates' TPACK: Ayvaz, 2019; Bulut, 2012; Canbolat, 2011; Gündüz, 2018; Janssen & Lazonder, 2015; Kabakçı, 2011; Karakaya, 2012; Kaya, 2010; Keser, Karaoğlan Yılmaz & Yılmaz, 2015; Kılıç, 2011; Kılıç, 2015; Kocakaya, 2015; Öztürk, 2013; Tokmak, Yelken & Konakman, 2013; Savaş, 2011
Studies on teachers' TPACK: Ay, 2015; Archambault & Crippen, 2009; Kılıçkeser, 2019
Studies on instructors' TPACK: Şimşek, Demir, Bağçeci & Kinay, 2013
Studies on the impact of experimental applications on TPACK development: Baran, Canbazoğlu Bilici, Albayrak Sarı & Tondeur, 2019; Chai, Koh & Tsai, 2011; Çelik, Hebebci & Şahin, 2014; Ersoy, Yurdakul & Ceylan, 2016; Koh & Chai, 2014; Niess, 2005
Content analysis studies: Abbitt, 2011; Baran & Bilici, 2015; Chai, Koh & Tsai, 2013; Gür & Karamete, 2015; Kaleli-Yılmaz, 2015; Korucu, Usta & Atun, 2017; Dikmen & Demirer, 2016; Rahmawatib, Budiyantoa & Basori, 2019; Setiawan, Phillipson, Sudarmin & Isnaeni, 2019; Yigit, 2014; Voogt, Fisser, Pareja Roblin, Tondeur & van Braak, 2013; Wang, Schmidt-Crawford & Jin, 2018; Willermark, 2018

As can be seen from Table 1, many studies have been conducted on the contribution of TPACK to teacher competencies for the integration of technology into teaching, and analysis studies examining those studies also exist. In particular, exploring different dimensions of the data revealed by scientific studies and performing content analyses help educators to identify potential areas of development. What is more, content analyses are considered important for a holistic look at the matter under consideration, for making sense of trends and for understanding various aspects of the studies (Göktaş et al., 2014). From this point of view, the data obtained from content analysis studies on TPACK in Turkey are quite useful, as they hint at the types and disciplines of further studies needed in the relevant literature by providing a broad perspective on the matter. In other words, such documents indicate the missing parts of the TPACK literature and the topics to be dealt with by related researchers, consequently providing a more holistic picture. Content studies on TPACK in Turkey are presented in Table 2: the number of studies increased gradually over the years; the most common study aim targeted the relationship between various variables and TPACK; the most common research methods and data collection tools were quantitative methods and questionnaires, respectively; the most common sample group was composed of conveniently sampled teacher candidates; and the most common implementation areas of TPACK were science and mathematics. Table 2 shows that the studies on TPACK in Turkey increased yearly, often looked into the TPACK development of teachers/teacher candidates and the integration of technology into education, most frequently used quantitative methods such as survey and experimental designs, and most commonly collected data with scales/questionnaires. Besides, sampling targeted teacher candidates with the highest frequency. In short, it can be said that certain types of studies on TPACK have been replicated in the context of Turkey for a while.
It is thought that analyzing popular and frequently studied topics, especially TPACK, will contribute substantially to the literature. Therefore, studies carried out on this subject are valuable. On the other hand, the picture above might have appeared because only the TPACK studies published in journals (a total of 175 papers, see Table 2) were included, and postgraduate dissertations are represented in such publications to a lesser extent. Performing content analysis across more than one specific field may also hinder analyzing the subject thoroughly. TPACK tendencies have not been clearly revealed separately for papers and dissertations. The thesis center database of the Higher Education Board (YÖK) shows that TPACK was the research topic of 25 postgraduate dissertations, 20 of which are master's and 5 doctoral dissertations, between 2009 and 2013. However, the number has shown a significant increase lately: the same topic was studied in 76 postgraduate dissertations, 55 at master's and 21 at doctoral level, from 2013 to 2019 (YÖK, 2013; YÖK, 2019). Apart from these, the most recent content analysis was carried out by Korucu, Usta, and Atun (2017), analyzing the studies published between 2010 and 2016. Another motive for the current study is that nearly 58 postgraduate dissertations were posted on the National Thesis Center database of YÖK in the short period between 2016 and 2019. Departing from these facts, this study was planned with a broader scope, attempting to discuss postgraduate dissertations on TPACK written between 2009 and 2019 in Turkey against a set of variables. Answers were sought to the following questions:
1. How are the postgraduate dissertations on TPACK distributed by type, university and department of implementation, and year of publication?
2. What theme was discussed most frequently as the aim of the published postgraduate dissertations about TPACK?
3. How are the postgraduate dissertations on TPACK distributed by research method, sample type and data collection tools?
4. What subjects/fields did the postgraduate dissertations on TPACK target?
5. What theme was implied most frequently in the results of the published postgraduate dissertations about TPACK?
6. What themes were implied most frequently in the recommendations of the published postgraduate dissertations about TPACK?
Significance and Value of the Study for the Literature
The main purpose of this content analysis study is to interpret the postgraduate studies on "Technological Pedagogical Content Knowledge (TPACK)" carried out by educational researchers so far in relation to selected criteria. In this scope, the studies were examined in terms of research pattern (qualitative, quantitative, etc.), participants (teacher candidates, senior teachers, etc.), data collection tools (interview, scale, multiple-choice test, observation, etc.), field of implementation and educational background (science, mathematics, classroom teaching, preschool teaching, etc.), and the results and recommendations put forth. The study is also intended to provide a different perspective on the postgraduate studies on TPACK in Turkey and to identify studies needed in the future in consideration of the current literature. In other terms, it will provide a more holistic picture by showing the missing parts of the TPACK literature to researchers who will write a postgraduate dissertation in this field and give them advice for well-directed new research in this area. It would not be too harsh to say that Turkish researchers of TPACK from various disciplines have been repetitive for a while, ending up with few authentic products. Therefore, this study is needed in order to determine the lagging sides of the related literature (how TPACK develops, the impact of pre-service/in-service training on the development of TPACK, devising a teacher training TPACK model unique to the Turkish culture, etc.) so that future postgraduate studies can be directed to close this gap. Lastly, recommendations will be offered to increase the quality of new postgraduate dissertations on TPACK.
Limitations of the Study
This study intends to analyze postgraduate dissertations on TPACK. The majority of the existing analysis studies in the literature are aimed at revealing trends in research papers. Unlike those, the current study included theses at the postgraduate level to scrutinize a sufficient number of dissertations on TPACK and to reach reliable results in this segment. However, an exhaustive analysis of subject and discipline content might be impeded by thematic content analysis of more than one field. With this concern, the scope of this study was narrowed down to postgraduate studies on TPACK. The inclusion of only theses at the postgraduate level can be seen as a limitation of this study. Since this study aimed to reveal the latest research trends, the studies published from 2009 to 2019 were taken into consideration. The range of publication years can be considered another limitation of the study. As the final limitation, some master's and doctoral dissertations on TPACK may have been overlooked or not uploaded to the system despite careful screening.
METHOD
In this study, "thematic content analysis" was chosen from among the content analysis techniques, as it involves the critical examination of themes and templates created to expose the trends and results of studies in a selected field (Çalık & Sözbilir, 2014). In this way, this technique provides a comprehensive resource to researchers with limited access to adequate research in their fields (Ültay & Çalık, 2012). In general, the content analysis method is the summarizing, classifying, comparing and presenting of research content in numerical terms with the aid of scientific applications (Cohen, Manion & Morrison, 2007). Applicable content analysis techniques include frequency analysis, relationship analysis, categorical analysis, evaluative analysis, closure indicators, vocabulary richness, readability indicators, thematic content analysis, descriptive content analysis, structural content analysis, emotional analysis, semantic content analysis and intent-motive inferences. The current study aims at interpreting the study data based on certain concepts and themes, besides summarizing, classifying and comparing the contents and implications of the postgraduate dissertations. The thematic content analysis technique was preferred here since the aim was to examine postgraduate-level TPACK studies in Turkey in order to identify common tendencies.
Data Collection
In this study, the YÖK National Thesis Center database was scanned using certain keywords in both Turkish and English in order to access all postgraduate dissertations on TPACK across Turkey and to be able to describe TPACK in Turkey and the world. The following Turkish and English key phrases were used for the search:
- "Teknolojik Pedagojik Alan Bilgisi" or "TPAB"
- "Teknolojik Pedagojik İçerik Bilgisi" or "TPİB"
- "Technological Pedagogical Content Knowledge" or "TPACK"
- "Technological Pedagogical Content Knowledge" or "TPCK"
As a result of the search, 75 master's and 26 doctoral dissertations were found to address TPACK, and they were all included in the study. The other inclusion criterion was publication of the dissertations by the deadline of April 2020.
Document Analysis
The theses collected were subjected to thematic review using the thematic analysis matrix developed by Ormancı, Çepni, Deveci and Aydın (2015). The matrix consists of two sections: general features and content features. The general features cover the type of publication, the university and department of implementation, and the year of publication, whereas the other part deals with the aim and method of the studies, the population-sample/study group type and size, the grade level of the participants, the data collection tools, the subjects/fields, and the results and recommendations (Table 3).
Data Analysis
The matrix in Table 3 was used for reviewing the dissertation studies reached through YÖK's thesis database. As the first step, codes relevant to each category were elicited. For example, each study was categorized according to the year it was published and the university and department in which it was implemented. Then, codes concerning the study aims were extracted, and studies sharing the same aims were placed under the same code. Studies with a similar goal were clustered under relevant codes and synthesized under a representative theme. The same procedure was followed for grouping the other codes and themes.
As mentioned earlier, the second part of the matrix used in this study exhibits content-related data about the reviewed items, such as the aim, method, size and grade level of the population-sample/study group, data collection tools, subjects/fields, results and recommendations. To analyze the data about the method and subject area of the dissertations, descriptive analysis was performed, while the other types of data (i.e., aim, results and recommendations) were analyzed with the content analysis method. During the content analysis, the research data were first converted to codes, and then connected codes were brought together to generate themes. Lastly, frequency and percentage values were calculated for the derived codes and themes, and they were tabulated as can be seen in the following section.
Validity and Reliability: In the first stage of the classification, the researcher labelled the common elements in the reviewed studies with a common theme. In the following stage, the themes and other elements used were compared with the coding made by a TPACK researcher who is an expert in science teaching, and the disagreements were determined. For this purpose, prior to the classification of the publications, a consistency check was performed on the themes derived by the researcher and the expert. Coders' agreement was checked using the formula reliability = agreement / (agreement + disagreement) (Miles & Huberman, 1994). There was a high level (93%) of agreement between the two coders. The remaining codes and themes, which were the subject of disagreement, were rechecked by the researcher. Finally, the researcher's codes and themes were verified by the expert. As a result, the internal and external validity and the reliability of the study were ensured.
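For readers who wish to reproduce the agreement check, a minimal Python sketch of the Miles and Huberman (1994) computation is given below. The code lists are hypothetical and serve only to illustrate the arithmetic; the study itself reached 93% agreement.

# Miles & Huberman (1994): reliability = agreements / (agreements + disagreements).

def intercoder_reliability(codes_a, codes_b):
    """Percent agreement between two coders over the same units of analysis."""
    agreements = sum(a == b for a, b in zip(codes_a, codes_b))
    return agreements / len(codes_a)

# Hypothetical theme assignments for five dissertations by the two coders.
researcher = ["competency", "relationship", "competency", "training", "scale"]
expert     = ["competency", "relationship", "competency", "relationship", "scale"]
print(f"agreement = {intercoder_reliability(researcher, expert):.0%}")  # 80%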
FINDINGS
The findings obtained through the data collection tools developed in the study are presented under 6 separate headings in parallel to the sub-problems of the study.
Distribution of Postgraduate Dissertations on TPACK by Type, University of Implementation, Department of Implementation, and Publication Year
The distribution of the postgraduate dissertations by level is given in Table 4, which shows that 75% of the studies are master's theses and 25% are doctoral theses.
The distribution of the postgraduate dissertations on TPACK by implementing university is given in Table 5. According to Table 5, 11% of the dissertations were conducted at Middle East Technical University, 10% at Gazi University, and 8% at Fırat and Necmettin Erbakan Universities. Another 5% of the dissertations took place at Marmara and Sakarya Universities, and 4% at Atatürk University. There are two universities with 3% of the theses, 14 universities with 2% and 16 universities with 1%.
The distribution of postgraduate dissertations on TPACK by implementing department is shown in Table 6, which displays that 27% of the dissertations were implemented in the Science Teaching department and 23% in Mathematics Teaching. Another 12% of the dissertations were related to Computer and Instructional Technologies Teaching, and 8% to Arts in Teaching. The smallest portion of the studies, 1% each, was found to belong to the departments of Turkish Language and Literature Teaching, Physics Teaching, Chemistry Teaching, Pre-school Teaching, Basic Education, Physical Education and Sports, and Business Administration.
The distribution of the dissertations on TPACK by the year they were published is shown in Table 7, which indicates that 2 of the dissertations were published in 2009, 12 in 2015 and 13 in 2019. The highest number of publications was recorded in 2017 (16 studies). It can be said that the number of dissertations increased gradually over the years.
Aims of Reviewed Postgraduate Dissertations on TPACK
The distribution of the dissertations studies on TPACK by study aim is given in Table 8. As can be seen in the table above; 38.6% of the studies were conducted to measure TPACK competency levels of teachers/ teacher candidates/instructors, 57.0% researched the relationship between TPACK and gender/grade level/seniority year, etc, leaving the last portion for examinint the impact of the developed classes/training courses on TPACK development of teachers/teacher candidates (3.5%) and scale development (0.3%). As a note, it is seen that TPACK knowledge of teachers/teacher candidates was addressed frequently (f=107) whereas the impact of training courses/classes on TPACK development of teachers/teacher candidates (f=10) and TPACK scale development (f=1) was not studied so often.
Research Methods, Sample Sizes and Data Collection Tools of Reviewed Postgraduate Dissertations on TPACK
The research approaches and methods adopted in the reviewed postgraduate dissertations about TPACK are listed in Table 9, which shows that 56.4% of the studies used quantitative research methods, 21.8% used qualitative research methods and another 21.8% used mixed methods. The most widespread quantitative method was the screening model (45.5%), while experimental designs (7.9%) and the correlational research model (2.9%) were employed relatively less frequently. The other most popular research patterns were the embedded pattern (15.8%) and the multiple case study (13.8%).
The distribution of postgraduate dissertations on TPACK by sample/study group type is displayed in Table 10, which shows that 53.3% of the dissertations about TPACK were carried out on teacher candidates while 46.7% were on teachers. Within the group of teacher candidates, the most frequent sub-group was composed of science teacher candidates (f=24), followed by elementary school mathematics teacher candidates (f=9), secondary school mathematics teacher candidates (f=7) and social studies teacher candidates (f=6), respectively. Among the teachers, elementary school mathematics teachers (f=14) constituted the most frequent study group of all the dissertations. The second most addressed sample was composed of science teachers (f=13) and the third of English language teachers (f=7).
The distribution of postgraduate dissertations about TPACK by sample/study group size is shown in Table 11 below. Table 11 shows that 43.6% of the postgraduate dissertations on TPACK were conducted with more than 201 participants and 19.8% were implemented with 101-200 people. A smaller percentage, 14.8% of the studies, was conducted with 0-10 people and 8.9% with 31-50 people.
As another review criterion of the current study, the measurement tools were analyzed, with the results exhibited in Table 12. It can be seen from Table 12 that a large variety of data collection tools such as scales, observation, interviews, document analysis, tests and questionnaires were used in the postgraduate dissertations on TPACK examined here. The breakdown of the tools was as follows: scales account for 43.4%, observation for 17.1%, document analysis for 9.1%, questionnaires/forms for 8.7% and tests for 5.1% of all data collection tool uses. Additionally, a considerable number of studies employed more than one tool, so these percentages refer to tool uses rather than to studies; a small sketch of this tabulation follows.
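To make the tabulation concrete, here is a minimal Python sketch of how such a percentage breakdown over tool uses can be computed; the tool lists below are hypothetical placeholders, not the actual Table 12 data:

from collections import Counter

# Tools reported per dissertation; hypothetical examples only, since many
# dissertations in the review employed more than one data collection tool.
dissertations = [
    ["scale"],
    ["scale", "interview", "observation"],
    ["questionnaire/form", "document analysis"],
    ["scale", "test"],
]

# Count every tool use; the denominator is all tool uses, not the number
# of studies, which is why the percentages refer to tool uses.
tool_uses = Counter(tool for study in dissertations for tool in study)
total_uses = sum(tool_uses.values())

for tool, count in tool_uses.most_common():
    print(f"{tool}: {count / total_uses:.1%} of all tool uses")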
Subjects/Fields of Reviewed Postgraduate Dissertations on TPACK
The postgraduate dissertations related to TPACK are broken down by their study subjects/fields in Table 13 below. It was found that only 28 of the studies focused on a specific subject area, while the others were conducted to figure out opinions/perceptions/competencies etc. related to TPACK as a generic matter of consideration. Interestingly, 60.7% of the subjects covered in the studies fall under the sciences, 32.1% under mathematics and the remaining 7.1% relate to sub-fields of the social sciences.
Results of Reviewed Postgraduate Dissertations on TPACK
As regards the results obtained in the postgraduate dissertations on TPACK, the findings are given in Table 14. Table 14 shows that 38.26% of the dissertations obtained results related to the TPACK and TK levels of teachers/teacher candidates/instructors, 45.3% reached findings about the relationship between the TPACK knowledge of teachers/teacher candidates/instructors and several variables, and 11.07% of them obtained results on the impact of the developed training courses on the TPACK level of teachers/teacher candidates. Moreover, the results concentrated visibly on the TPACK and TK levels of teachers/teacher candidates/instructors (f=114), the relationship between the TPACK of teachers/teacher candidates and gender (f=45), and the relationship between the TPACK levels of teachers and their seniority years (f=23).
Basic Recommendations Brought in Reviewed Postgraduate Dissertations on TPACK
As the last component of this review study, the recommendations offered in the postgraduate dissertations on TPACK were analyzed and summarized in Table 15 below. According to the table, 57% of the recommendations in the postgraduate dissertations targeted the restructuring of education faculties for the TPACK development of teacher candidates, and 25.2% were mainly about restructuring in-service trainings to improve teachers' TPACK levels. Additionally, some dissertations recommended the provision of technological hardware for restructuring learning environments (17.8%).
Other prominent recommendations included building the curricula of education faculties on TPACK for the TPACK development of teacher candidates (f=27), running in-service training courses specific to teachers' branches and seniority years for their TPACK development (f=13) and giving senior teachers priority to participate in such trainings (f=9).
DISCUSSION AND CONCLUSIONS
This part of the paper is devoted to relating the study findings to each other, comparing them with findings from similar national and international research, and discussing the extent to which the sub-problems could be resolved. The findings elaborated in the foregoing part are discussed under relevant headings corresponding to the sub-problems.
Aims of Reviewed Postgraduate Dissertations on TPACK
According to our findings, the majority of the postgraduate dissertations on TPACK aimed at describing TPACK competencies and examining the relationship between TPACK and certain variables such as gender/grade level/seniority, etc. (Table 8). Few studies handled the impact of special training courses or classes on TPACK. Similar findings were also reported by other content analysis studies in the literature (Baran & Canbazoğlu Bilici, 2015; Dikmen & Demirer, 2016; Kaleli-Yılmaz, 2015; Setiawan et al., 2019; Voogt et al., 2013; Willermark, 2018). In their content analysis study on TPACK in science education, Setiawan et al. (2019) found that the largest part of such studies aimed at determining the TPACK competencies of pre-service/in-service science teachers, while the rest concerned the relationship between TPACK and other elements of technology integration, teacher candidates' TPACK development strategies, how teachers apply TPACK, and developing a tool for TPACK. Researching the TPACK of teachers or teacher candidates and measuring their levels is an important topic. In addition, ways of helping teacher candidates and teachers to improve their technology knowledge and integrate technology into their lessons should be sought. Rahmawati, Budiyanto and Basori (2019) also conducted a content analysis of research on blended learning within the framework of TPACK. They found that teachers lag behind the TPACK levels required for successfully integrating educational technology, and they recommended that training courses or classes applying diverse models be organized to elevate teachers' TPACK levels, and that the outcomes be made public. Looking at the in-service trainings carried out within the framework of the FATIH project implemented in Turkey, such initiatives seem to have an important effect on teachers' technology knowledge development and TPACK awareness, but not on the integration of technology into teaching, namely TPACK skills (Sezer, 2015). Chai et al. (2013) argued that since TPACK is a practice-dependent research area, training courses based on certain models (Situated Technology Integration (SiTI) Model, TPACK-Comprehension, Observation, Practice and Reflection (TPACK-COPR) Model, Technology Mapping (TM) Model, etc.) could increase the capacity of teachers to integrate ICT into the lesson, and suggested that such learning environments should be further developed and researched in consideration of TPACK. Furthermore, increasing the number of longitudinal pre-service/in-service studies designed within the framework of TPACK would be quite beneficial for clearly depicting what should be done to improve the TPACK of both teachers and teacher candidates, which models should be preferred, and how course contents should be designed in our country (Kaleli-Yılmaz, 2015). For instance, while the Du-TE model provides a technology-based course with concrete real-life experiences for individuals to acquire the necessary TK knowledge and experience, similar training is offered in the TPACK-COPR and TM models through in-class activities. In the TPACK-COPR model, the learning setting or context is given more importance than in the other models for TPACK development (Kaya & Yılayaz, 2013).
In this context, long-term postgraduate studies on course development within the TPACK framework, high in both quality and quantity, will shed light on which models are effective in teacher education.
Research Methods, Sample Sizes and Data Collection Tools of Reviewed Postgraduate Dissertations on TPACK
In this study, examination of the dissertations from the perspective of research approach demonstrates that the quantitative approach was used more frequently than other research conceptions, while mixed method and qualitative studies were equal in number (Table 9). In a similar vein, other researchers concluded that the majority of TPACK studies were carried out with a quantitative research approach (Baran & Canbazoğlu Bilici, 2015; Dikmen & Demirer, 2016; Kaleli-Yılmaz, 2015; Korucu, Usta & Atun, 2017). This finding is in congruence with the results of Sözbilir, Kutu, Yaşar, and Arpacik (2010), who looked into the general trends in chemistry education research in Turkey and in the world and found a large number of studies based on the quantitative research approach. Ekiz (2013) explained this with the advantages of the quantitative approach: fast, easy and convenient sampling as well as easier and faster data collection and interpretation. It must be said that there is a greater need for mixed method studies on TPACK in which quantitative and qualitative approaches are blended. Such studies are likely not only to offer sounder results about the TPACK levels of the participants but also to pave the way for other studies on TPACK. Researchers (Tondeur et al., 2012) stated that the use of mixed research designs using qualitative data to support quantitative data in TPACK research will promote understanding and evaluation of the theoretical structure of TPACK and thus eliminate much of the concern in this regard. Researchers in the Turkish context, as in other countries, should take these recommendations into consideration. International TPACK review studies did not report standardized results. For example, the review of Chai et al. (2013) found that qualitative research methods and practical studies were heavily employed. Willermark (2018) found that quantitative and mixed research methods were the most preferred approaches in TPACK studies. The TPACK review study by Wang, Schmidt-Crawford and Jin (2018) found that mixed methods were the most broadly used methodology for the sake of data triangulation, validity and reliability. The discrepancy between the national and international findings on this aspect might be attributed to the fact that the examples in our country are still far from longitudinal qualitative applications, because TPACK research gained momentum in Turkey only after 2014 (Table 7).
When the studies in the present review were checked regarding research methods, the screening model was in the lead, yet embedded designs and multiple case designs showed considerable occurrence (Korucu, Usta & Atun, 2017). On the other hand, Chai et al. (2013) reported a far higher number of case studies in similar studies. The disagreement between the local and international literature might be due to the fact that quantitative research approaches are more popular in Turkey, whereas qualitative and mixed research methods are adopted much more in research carried out in other countries. The point of the screening model is to describe the person with their surrounding conditions without intervention (Karasar, 2010). Most of the studies carried out in Turkey are of a quantitative type designed for scale development/application or the appraisal of a given situation, which may explain why the screening model was applied so often in the Turkish context. Several reasons can be counted for the lower popularity of other research methods compared to the screening model: experimental studies are usually implemented with experimental and control groups; their data collection and analysis processes are more complex and laborious for the researcher than non-experimental studies; those methods require a longer period of time; and likewise, case studies, correlational studies and descriptive studies also extend over a long period of time.
When it comes to the participants of the TPACK theses in Turkey, the samples of the studies were largely composed of education faculty students or teachers, and only a small number of academic staff were picked for such studies (Table 10). By the same token, content analysis studies of TPACK research indicated similar characteristics of sample groups in Turkey (Baran & Canbazoğlu Bilici, 2015; Dikmen & Demirer, 2016; Kaleli-Yılmaz, 2015; Korucu, Usta & Atun, 2017) and other countries (Wu, 2013; Setiawan et al., 2019; Wang, Schmidt-Crawford & Jin, 2018; Willermark, 2018). This could be explained by the position of teachers and teacher candidates as focus groups in the education field and researchers' preference for easily accessible participants. Further examination of the study participants shows that teacher candidates appeared in more studies than teachers. To give an example, Setiawan et al. (2019) stated that most TPACK research was implemented with teacher candidates, only one third of the studies were conducted with teachers, and the remainder were done with mixed study participants seeking to compare the TPACK of teachers and teacher candidates. In another example, Dikmen and Demirer (2016) pointed out that the majority of the TPACK study participants were teacher candidates, some were teachers and a very small number were academic staff. Kaleli-Yılmaz (2015) claims that teachers in our country generally abstain from volunteering in academic research, thinking that it will put extra time and burden onto them with no benefit in return and that their weaknesses will be disclosed. They add that the majority of the participating teachers feel pushed to fill out questionnaires or scales and pretend to be knowledgeable and well-trained; therefore, the researcher has to put in considerable effort to convince the teachers to take part and to be truthful while responding to questions.
As for the branches of the participant teacher candidates in the studies, they predominantly come from the fields of science, primary school mathematics, secondary school mathematics and social studies, whereas the teachers most often teach primary school mathematics, science and English (Table 10). Although the number of studies carried out with teachers other than mathematics, science and English language teachers is low, studies were conducted with teachers and teacher candidates from almost every branch, including physics, chemistry, biology, geography, and physical education (Baran & Canbazoğlu Bilici, 2015; Dikmen & Demirer, 2016; Kaleli-Yılmaz, 2015; Korucu, Usta & Atun, 2017). Also, the scanned studies were substantially done with teachers/teacher candidates at the secondary school level, while the primary and high school levels did not receive the same level of attention. In other words, no attempt has yet been undertaken to discover the TPACK levels of teachers from various branches working in primary and high schools and what they do to better teach the subjects/topics to their students. Likewise, TPACK analysis studies in our country (Dikmen & Demirer, 2016; Kaleli-Yılmaz, 2015; Korucu, Usta & Atun, 2017) revealed that there are no TPACK studies with branch teachers at the secondary education level. However, equivalent foreign studies (Chai et al., 2013; Setiawan et al., 2019; Willermark, 2018) show that nearly all branches have been touched upon within the scope of TPACK studies. This difference might arise from the fact that most of the researchers working on TPACK in our country specialize in science and mathematics.
A wide range of data collection tools such as scales, observation, interviews, document analysis, tests and questionnaires were used in the postgraduate dissertations on TPACK (Table 12). The use of more than one tool was not an exception; rather, it was recurrent in the studies scanned here. The use of multiple tools is considered important both for the authenticity and usefulness of the studies and for strengthening them in terms of validity and reliability. Another finding reveals that scales, questionnaires/forms and tests were preferred more often than other data collection tools like observation and document analysis. Consistent with this, the bulk of the postgraduate dissertations on TPACK were conducted with large numbers of participants (201+ people and 101-200 people) (Table 11). The number of studies employing methods such as observation, interviews and document analysis (lesson plans, diaries, etc.) that reveal change throughout a process seems to be low. The majority of TPACK review studies in the literature (Baran & Canbazoğlu Bilici, 2015; Dikmen & Demirer, 2016; Kaleli-Yılmaz, 2015; Korucu, Usta & Atun, 2017; Wang, Schmidt-Crawford & Jin, 2018; Willermark, 2018) concluded that the scale was the most common data collection tool. It is probable that scales were the preferred data collection tool as a tendency following from the quantitative research approach and the use of large samples. Ekiz (2013) believes that the frequent use of scales in studies is due to the fact that they are easily accessible, low-cost, and more labor- and time-saving compared to other data collection tools, and that they minimize bias arising from prejudice and personal disposition. The researcher adds that describing the existing situation in the literature by developing scales is preferred by researchers since such studies have clear-cut boundaries in terms of analysis, findings and results. Koehler, Shin and Mishra (2012) stated that studies examining teachers' TPACK development rarely used open-ended questionnaires, performance evaluation questionnaires, interviews and observation, since the data coding and other operations needed to analyze the data obtained from these tools constitute a complex process. Another reason might be the existence of TPACK scales created to make the TPACK structure operational. The literature accommodates several TPACK scales: "Survey of Preservice Teachers' Knowledge of Teaching and Technology" (Schmidt, Baran, Thompson, Koehler, Mishra & Shin, 2009), "PT-TPACK" (Lux, Bangert & Whittier, 2011), "IWB-based TPACK" (Jang & Tsai, 2012), "TPACK" (Chai et al., 2013), "Web Pedagogical Content Knowledge" (Kavanoz, Yüksel & Özcan, 2015), "TPCK-SRL" (Kohen & Kramarski, 2012) and "TPACK-EFL" (Baser, Kopcha, & Ozden, 2016). Regarding the demand for these scales, Willermark (2018) found in a TPACK content analysis study that the "Survey of Preservice Teachers' Knowledge of Teaching and Technology" of Schmidt et al. (2009) was used with the highest frequency. Since the diversity of the scales allows researchers to describe the problem situation in different ways, this can be counted as another reason for the intense demand for scales as a data collection tool.
Nevertheless, Voogt et al. (2013) argue that the data obtained with TPACK scales are more likely to reveal the knowledge level teachers think they have within the framework of TPACK rather than the real TPACK levels of teachers/teacher candidates. The researchers defend the joint use of multiple data collection tools, such as interviews and lesson plans, to expose the actual TPACK of individuals. Another finding worth noting is that no meta-synthesis or meta-analysis studies on TPACK were found in the literature review. Conducting studies with these methods and identifying trends in the field of TPACK holds the potential to fill an important gap. However, such studies, also called analyses of analyses, require a high level of analysis and synthesis skills; a minimal numerical sketch of the pooling idea is given below. These recommendations should also be taken into account before carrying out new studies in Turkey.
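To make the "analysis of analyses" idea concrete, the following is a minimal fixed-effect (inverse-variance) pooling sketch in Python; the effect sizes and standard errors are hypothetical placeholders, not values taken from the reviewed dissertations:

import numpy as np

# Hypothetical per-study effect sizes (e.g., a gender difference in TPACK
# expressed as Cohen's d) and their standard errors; placeholders only.
effects = np.array([0.20, 0.35, 0.10, 0.28])
se = np.array([0.08, 0.12, 0.10, 0.09])

weights = 1.0 / se**2                        # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# 95% confidence interval under the fixed-effect model
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect: {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")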
Subjects/Fields of Reviewed Postgraduate Dissertations on TPACK
A small part of the postgraduate studies focused on a specific subject, while the rest attempted to figure out opinions/perceptions/competencies etc. related to TPACK in a more general sense (Kaleli-Yılmaz, 2015). It is notable that the subjects covered in the dissertations are largely linked to science, followed by mathematics and the social sciences, respectively (Table 13). In particular, secondary school physics was handled, while chemistry, biology and astronomy remained the least discussed fields. As for mathematics, studies at the high school level were more apparent, covering topics such as derivatives, polygons, geometry and mathematical functions. This finding is in agreement with the literature (Chai et al., 2013; Setiawan et al., 2019; Wu, 2013). In the study of Chai et al. (2013), it was concluded that the majority of TPACK studies examined TPACK independently of subject areas. Wu (2013) also found in a literature review that TPACK was examined independent of subject areas in most cases, while science and mathematics were dominant in field-specific studies. Setiawan et al. (2019) pointed out that the majority of the studies were in the context of science as an umbrella discipline, but there were few studies on specific fields of science such as biology, chemistry, and physics. Given that TPACK is a field-based knowledge structure, there is a need for studies defining TPACK in various fields as well as studies examining field-specific technologies (Baran & Canbazoğlu Bilici, 2015; Voogt et al., 2013; Kaleli-Yılmaz, 2015). It can be suggested that education is still full of gaps to be closed in guiding deeper change within the framework of TPACK; therefore, further development and exploration of especially field-specific technological environments is required. It is also recommended that researchers create different data collection templates, questionnaires or process evaluations suitable for the nature of these fields.
Results of Reviewed Postgraduate Dissertations on TPACK
Most of the results of the studies were found to relate to the TPACK and TK (Technological Knowledge) of teachers and teacher candidates and to the relationship between this knowledge and various variables. Only a small number of them reached results on the impact of the developed classes/training courses on the TPACK knowledge of teachers and teacher candidates (Table 14). While the TPACK of teacher candidates and of half of the teachers was at a sufficient level, their TK was nearly insufficient. It was unclear whether there was a significant relationship between teacher candidates' TPACK and gender, but there was a significant relationship between teachers' TPACK and gender in favor of males. There was no significant relationship between teacher candidates' TPACK and grade level; however, the relationship between teachers' TPACK and seniority years was significant, with TPACK levels found to be lower among teachers with more seniority. Again, a significant relationship was found between teacher candidates' and teachers' TPACK and their ownership/use of technology, and there was also a significant relationship between teachers' TPACK and student success. As another subcomponent, the classes/training courses developed within the framework of TPACK had a positive impact on the TPACK development of teacher candidates and teachers. In the TPACK content analysis study conducted by Kaleli-Yılmaz (2015), it was likewise concluded that most teachers and teacher candidates had sufficient TPACK but insufficient TK. On the whole, the results of the studies matched their respective aims, and, in line with expectations for the most studied subject, the participants' TPACK and TK levels were good. One recommendation in this respect would be to perform meta-analysis studies on variables that have been studied extensively, such as TPACK and TK. Secondly, the outcomes of TPACK training courses and subsequent implementations can be made public to give insight into the impact of training attempts.
Basic Recommendations Brought in Reviewed Postgraduate Dissertations on TPACK
According to the findings above, the recommendations in more than half of the dissertations were oriented towards the restructuring of education faculties for the TPACK development of teacher candidates, and the rest implied restructuring in-service training for the development of teachers' TPACK levels. There were also recommendations for the provision of technological equipment for building active learning environments. In particular, there were recommendations for redesigning the curriculum based on TPACK for the TPACK development of teacher candidates, teaching teacher candidates the knowledge and skills necessary for technology-supported applications as a part of subject field education, developing technological software specific to field education and teaching how to use it, and restructuring certain courses, mainly Teaching Practice, for the application of the acquired TPACK and skills. It is crucial to integrate and apply new technologies in subject field education courses during the pre-service period, because teacher candidates with sufficient TPACK will be more successful in integrating technology into their lessons when they start work (Rahmawati, Budiyanto, & Basori, 2019). Kaya and Yılayaz (2013) stated that it is of vital importance to reconsider the content, duration and teaching of the "Special Teaching Methods", "School Experience" and "Teaching Practice" courses in the light of TPACK, since those courses are offered at education faculties in Turkey to show how to teach a specific field (mathematics, science, social studies, etc.) (PCK). It was also emphasized in the studies that in-service trainings organized for the development of teachers' TPACK should be arranged to fit the teachers' branches/years of experience, and senior teachers should be given priority in participation. In addition, it was recommended to address the TPACK components one by one in pre-service/in-service training. To summarize, the postgraduate dissertations reviewed here contained recommendations about teacher education, and researchers put forward recommendations under several themes.
RECOMMENDATIONS
In this study, a total of 101 postgraduate dissertations dealing with TPACK were analyzed, and it was found that the number of postgraduate dissertations increased gradually after 2009. In this respect, it is unquestionable that the dissertations on TPACK are important. What is even more important is to produce authentic studies, as required by the nature of science, instead of replicating existing kinds of studies. In the dissertations published on TPACK, teacher candidates and teachers took part most often as study participants. It is critical to study the TPACK of teachers or teacher candidates and to identify their levels, but that would be incomplete without looking for alternative ways by which teachers and teacher candidates can integrate technology into lessons. There should be more classes during the pre-service period to help teacher candidates learn how to integrate technology into lessons in their subject field and how to improve their TK. Such classes or courses should be taught by instructors who are competent in the relevant field and in TPACK. At the same time, course contents in education faculties should be rearranged within the framework of TPACK, and the updates required by field-specific ICTs should be performed. In order to achieve the targeted results in the FATIH project carried out in our country, in-service trainings based on different TPACK models could be developed, and a teacher training TPACK model suitable for Turkish culture could be created based on the findings obtained. It is recommended that future research include students, as the way the teacher integrates technology into the lesson affects students' success, attitudes and behaviors towards the lesson. For example, research can be done on how the teacher's TPACK level affects students. Moreover, science and mathematics lessons and secondary school teachers were mostly chosen for the reviewed dissertations; primary and high school teachers should be preferred in future research, and more research weight should be placed on verbal courses such as Turkish Language, Geography and History. It is also recommended to focus on qualitative and mixed methods as well as quantitative methods in future TPACK studies for a greater contribution to the literature. To conclude, examining studies on TPACK in the light of these recommendations is expected to enrich the relevant literature and shed light on future studies.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Statements of publication ethics
I hereby declare that the study has no unethical issues and that research and publication ethics have been observed carefully.
"year": 2022,
"sha1": "6df601b0ebd1fe9b0cbf46c3685e89ddac37dc1d",
"oa_license": "CCBY",
"oa_url": "https://dergipark.org.tr/en/download/article-file/1375447",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "db797a7d539c992b69e67b50da49fd16739d578a",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
Dysferlin-deficient immortalized human myoblasts and myotubes as a useful tool to study dysferlinopathy
Dysferlin gene mutations causing LGMD2B are associated with defects in muscle membrane repair. Four stable cell lines have been established from primary human dysferlin-deficient myoblasts harbouring different mutations in the dysferlin gene. We have compared immortalized human myoblasts and myotubes carrying disease-causing mutations in dysferlin to their wild-type counterparts. Fusion of myoblasts into myotubes and expression of muscle-specific differentiation markers were investigated with special emphasis on dysferlin protein expression, subcellular localization and function in membrane repair. We found that the immortalized myoblasts and myotubes were virtually indistinguishable from their parental cell line for all of the criteria we investigated. They therefore will provide a very useful tool to further investigate dysferlin function and pathophysiology as well as to test therapeutic strategies at the cellular level.
Introduction
Muscular dystrophies comprise clinically and genetically heterogeneous disorders characterized by progressive weakness and wasting of the skeletal muscle accompanied by an increase in muscle connective tissue [1]. Dysferlin gene mutations cause limb girdle muscular dystrophy 2B (LGMD2B) and Miyoshi myopathy, allelic autosomal recessive diseases characterized by limb girdle or distal weakness of early adult onset [2] [3]. Dysferlin (MIM*603009) is a 230 kDa transmembrane protein comprising calcium-binding C2 domains that is highly expressed in skeletal muscle [4] [5]. Dysferlin localizes to the sarcolemma and is involved in membrane repair, membrane trafficking and muscle regeneration [6] [7] [8]. Various mutations associated with LGMD2B have been identified in dysferlin. These mutations lead either to a reduced expression of dysferlin at the sarcolemma, an intracellular accumulation of dysferlin, the formation of amyloid-like deposits, or to the complete absence of dysferlin protein, finally resulting in impaired muscle membrane repair [9] [10] [11].
The access to primary human myoblasts from biopsies of patients with disease-causing dysferlin mutations is limited. Due to excessive fibrosis, these muscle biopsies often contain only very few myogenic cells and are highly intermingled with connective tissue cells like fibroblasts and adipocytes. Additionally, primary human myoblasts in culture show a limited proliferative potential and undergo changes that are linked to replicative senescence [12]. To circumvent these limitations, immortalized human myoblast lines were generated by retroviral transduction of primary human myoblasts harbouring different disease-causing mutations with telomerase (hTERT) and cyclin-dependent kinase 4 (CDK-4). The expression of hTERT overcomes the progressive erosion of telomeres occurring due to cell division, and the overexpression of CDK-4 blocks the induction of the p16-mediated cellular stress pathway [13]. After their immortalization these cell lines show a prolonged proliferation and differentiation capacity compared to primary human myoblasts in vitro, and they can be transplanted into regenerating muscle in vivo [13].
Here we describe a detailed analysis of immortalized human myoblast lines harbouring different dysferlin mutations. These cell lines maintain the characteristics of primary dysferlin-deficient human cell strains with respect to myogenic differentiation, dysferlin expression and membrane repair, and represent a useful tool to further investigate all aspects of dysferlin function and dysfunction.
Patients
The Charité internal review board approved the study and written informed consent was obtained from all patients. Skeletal muscle biopsies (M. vastus lateralis) were obtained from four patients with dysferlinopathy and four healthy controls. The patients with LGMD2B were affected by the following mutations in dysferlin: homozygous c.4022T>C (DYSF1), compound heterozygous c.855+1delG/c.895G>A (DYSF2), compound heterozygous c.1448C>A/c.*107T>A (DYSF3) and homozygous c.2810+2T>A (DYSF4) (see Table 1), resulting either in complete loss or intracellular aggregation of dysferlin. At the time the biopsies were taken, the patients were 57, 37, 36 and 25 years old, respectively.
Purification and differentiation of primary human myoblasts
Primary myoblasts were isolated by protease digestion from fresh muscle biopsies and expanded at 37 °C in a humidified atmosphere at 5% CO2 in skeletal muscle growth medium (PromoCell, Heidelberg, Germany) supplemented with 10% FCS, glutamine (3 mM) and gentamycin (40 µg/ml) (Gibco, Paisley, UK). All cultures were enriched in myoblasts by immuno-magnetic cell sorting using anti-CD56/NCAM antibody-coated magnetic beads (Miltenyi Biotech, Bergisch Gladbach, Germany). Purity of the myoblast preparation was verified by staining with an anti-desmin antibody (DAKO), revealing more than 95% desmin-positive cells. Differentiation of myoblasts into myotubes was initiated at approximately 90% confluence by cultivation in differentiation medium (DMEM, 2% horse serum) for 7 days.
Immortalization of primary human myoblasts and their differentiation into myotubes
Primary human dysferlin-deficient and control myoblast lines were transduced with pBABE retroviral vectors carrying Cdk4 and hTERT. Puromycin and neomycin were used as the respective selection markers, and isolation of individual myogenic clones was carried out as described by Mamchaoui et al. [13]. The immortalized dysferlin-deficient and control human myoblast lines were cultured in growth medium consisting of 1 vol 199 Medium (Invitrogen, Carlsbad, CA)/4 vol DMEM (Invitrogen) supplemented with 20% foetal calf serum (Invitrogen), 2.5 ng/ml HGF (Invitrogen), 0.1 µM dexamethasone (Sigma-Aldrich, St. Louis, MO) and 50 µg/ml gentamycin (Invitrogen). Differentiation into myotubes was initiated at approximately 90% confluence by cultivation in differentiation medium (DMEM, 2% horse serum) for 7 days.
Immunocytochemistry
Myotubes were fixed for 10 min with 4% formaldehyde, permeabilized for 15 min with 0.2% Triton X-100 and blocked with 1% BSA diluted in PBS. The following primary antibodies were used for immunostaining: mouse monoclonal antibodies to MHCs and dysferlin HAMLET (both from Novocastra, Newcastle upon Tyne, UK), desmin (Dako), caveolin3 (Santa Cruz Biotechnology), MHCf and α-actinin (both from Sigma-Aldrich), in combination with secondary Alexa 488- or Alexa 568-conjugated anti-mouse IgG antibodies (Invitrogen). Nuclei were counterstained with Hoechst 33258. Images were collected using a Leica DMI6000 fluorescence microscope or a Zeiss LSM 510 META confocal microscope.
Laser mediated membrane wounding
Shortly before membrane wounding, myotubes were washed once in Tyrode solution (140 mM NaCl, 5 mM KCl, 2 mM MgCl2, 2.5 mM CaCl2 and 10 mM HEPES, pH 7.2). The wounding experiment was performed in Tyrode solution supplemented with FM1-43 dye (2.5 µM; Molecular Probes, Invitrogen, Paisley, UK). Myotubes were wounded by irradiation of a 2.5 x 2.5 µm surface area for 58 s at 50% laser power (30 mW argon laser) employing a Zeiss LSM 510 META confocal microscope. Images were taken with a 63x oil immersion objective every 20 s for 280 s after wounding and were processed using the Zeiss LSM Image Browser software. Changes of fluorescence intensity were calculated using ImageJ; a sketch of this quantification step follows.
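As a rough illustration of that last quantification step, the Python sketch below computes the relative change in FM1-43 fluorescence (dF/F0) from ROI mean intensities exported by ImageJ; the file name and column label are assumptions for illustration, not part of the original protocol:

import csv

# Assumed input: a CSV exported from ImageJ with one row per frame and a
# "Mean" column holding the mean FM1-43 intensity of the ROI at the wound
# site; the file and column names are hypothetical.
with open("roi_intensities.csv", newline="") as f:
    mean_intensity = [float(row["Mean"]) for row in csv.DictReader(f)]

f0 = mean_intensity[0]          # pre-wound baseline frame
frame_interval_s = 20           # images were taken every 20 s

for i, f_t in enumerate(mean_intensity):
    delta_f = (f_t - f0) / f0   # relative fluorescence change at this frame
    print(f"t = {i * frame_interval_s:3d} s, dF/F0 = {delta_f:.3f}")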
Generation of dysferlin-deficient immortalized human myoblast lines
Four stable cell lines were established from primary human dysferlin-deficient myoblasts harbouring different mutations in dysferlin, resulting in either the intracellular aggregation of dysferlin (DYSF1 and DYSF2) or the complete absence of dysferlin protein (DYSF3 and DYSF4), as indicated in Table 1.
After cloning, all immortalized myoblast lines were 100% myogenic, as shown by the expression of both desmin and CD56/NCAM by immunocytochemistry, with a total absence of connective tissue cells such as fibroblasts or adipocytes. The reference sequence used is GenBank ID NM_003494. The nomenclature uses "c", "r" and "p" when referring to cDNA, mRNA and protein, respectively, and "i" when referring to intronic sequence.
Comparison of dysferlin-deficient immortalized human myoblasts and myotubes to their parental cell lines
The effect of immortalization on myoblast differentiation into myotubes was analyzed by Western blot, assessing the expression of myogenic differentiation markers. All immortalized human myoblast lines showed a strong expression of desmin and a weak expression of caveolin3 (Fig. 1B), similar to their parental myoblast lines (Fig. 1A). Primary and immortalized (IM) wild-type human myoblasts showed a low expression of dysferlin, which was reduced in IM DYSF1 and DYSF2/IM DYSF2 and completely absent in DYSF3/IM DYSF3 and DYSF4/IM DYSF4. We were not able to analyze primary human DYSF1 myoblasts and their corresponding myotubes due to a massive presence of non-muscle cell types and a very poor myogenicity, further enhancing the interest in generating immortalized clones from patients' primary cultures.
After 7 days of differentiation, myotube formation was associated with a strong increase in the expression of myosin heavy chain (MHC) and caveolin3 in all primary and immortalized cell lines investigated (Fig. 1A and 1B) except DYSF1, regardless of the presence of dysferlin. Desmin was still expressed in myotubes; α-tubulin served as a loading control. Myotubes derived from primary and immortalized wild-type myoblasts showed a strong increase in dysferlin expression with differentiation (Fig. 1A and 1B). In primary and immortalized wild-type myotubes and in IM DYSF1 and DYSF2/IM DYSF2 myotubes, dysferlin was expressed only as a single 230 kDa protein band, as revealed by the HAMLET antibody directed against the C-terminal juxtamembrane region of dysferlin and by a polyclonal antibody to the N-terminal part of dysferlin [10] (data not shown). This suggests that dysferlin protein with disease-causing point mutations is expressed as the full-length protein.
Altogether, dysferlin-deficient immortalized human myoblasts showed a differentiation pattern comparable to their untransformed counterparts. Furthermore, the differentiation of myoblasts with disease-causing mutations in dysferlin into myotubes is similar to that of wild-type myoblasts with respect to the expression of myogenic differentiation markers such as desmin, MHC and caveolin3.
Differentiation pattern of dysferlin-deficient and wild-type immortalized human myoblasts
Myogenic differentiation of dysferlin-deficient and wild-type immortalized human myoblasts into myotubes was followed by immunocytochemistry to reveal the expression of early and late myogenic differentiation markers, including the myogenic regulatory transcription factors MyoD and myogenin, the muscle structural proteins MHC, α-actinin and desmin, and the differentiation-regulated sarcolemmal proteins dysferlin and caveolin3. Representative examples are shown in Figure 2. After 7 days of differentiation, multinucleated myotubes were formed from dysferlin-deficient and from wild-type immortalized human myoblasts, as demonstrated by the expression of MHC, α-actinin and caveolin3 (Fig. 2). Dysferlin was present in wild-type myotubes, showed a reduced expression in IM DYSF1 and IM DYSF2 and was completely absent in IM DYSF3 and IM DYSF4 (Fig. 2). The subcellular localization of muscle differentiation-associated proteins was analyzed by high-resolution laser scanning confocal microscopy. After 7 days of differentiation, a striated staining pattern for MHC and α-actinin, labelling Z-lines, was observed in myotubes derived from wild-type and dysferlin-deficient immortalized myoblasts (Fig. 3). This is indicative of the correct formation of sarcomeres in these myotubes, independent of the presence or absence of dysferlin. Accordingly, spontaneous contractions of myotubes were occasionally observed.
In wild-type myotubes, dysferlin was distributed in the plasma membrane and in reticulate structures throughout the cell. Myotubes with disease-causing mutations in dysferlin showed an intracellular aggregation (IM DYSF1 and IM DYSF2) or a complete absence of dysferlin protein (IM DYSF3 and IM DYSF4) (Fig. 3).
Dysferlin-deficiency and membrane repair
Since dysferlin is involved in plasma membrane resealing after injury, laser wounding assays were performed. Myotubes derived from dysferlin-deficient and wild-type immortalized human myoblasts were compared with respect to membrane repair. Membrane wounding was conducted employing a laser confocal microscope, and the fluorescent membrane dye FM1-43 was used as a readout of membrane repair, as described by Cai et al. [14]. In myotubes derived from dysferlin-deficient immortalized myoblasts, we observed an opening of the targeted membrane frontier and a staining of intracellular membrane compartments by the influx of the fluorescent dye FM1-43 (Fig. 4B and 4C). Membrane integrity was not restored during a 280 sec observation period. In myotubes from wild-type immortalized human myoblasts, we observed resealing of the targeted membrane frontier after laser wounding and no influx of FM1-43 during the remaining 280 sec (Fig. 4A). These observations were reinforced by the quantification of the increase in fluorescence intensity in dysferlin-deficient compared to wild-type myotubes (Fig. 4D).
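The Fig. 4D readout reduces to tracking dye intensity near the wound site over time. Below is a minimal sketch of how such an FM1-43 influx trace could be quantified, assuming time-lapse image stacks and a region of interest chosen around the wound; the variable names, ROI coordinates and frame timing are illustrative assumptions, not details of the protocol of Cai et al. [14].

```python
import numpy as np

def fm143_influx(frames, wound_frame, roi):
    """Mean FM1-43 intensity inside a region of interest over time,
    normalized to the pre-wounding baseline (deltaF/F0)."""
    ys, xs = roi                                  # slices around the wound site
    trace = np.array([f[ys, xs].mean() for f in frames])
    f0 = trace[:wound_frame].mean()               # baseline before the laser pulse
    return (trace - f0) / f0                      # relative fluorescence increase

# Hypothetical usage with (t, y, x) image stacks acquired over ~280 sec:
# roi = (slice(100, 140), slice(100, 140))
# wt = fm143_influx(wt_stack, wound_frame=3, roi=roi)    # plateaus after resealing
# ko = fm143_influx(dysf_stack, wound_frame=3, roi=roi)  # keeps rising (no repair)
```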
Discussion
Primary human myoblasts are an indispensable tool to study cellular processes in muscle disease. However, the material is very precious and available in very small quantities because of the limited size of biopsy procedures that can be performed and the reduced proliferative potential of human myoblasts, which is further reduced in muscular dystrophies due to the cycles of degeneration and regeneration. Furthermore, primary cultures from dystrophic muscle are generally highly intermingled with fibroblasts that cannot always be sorted completely. In the past, we observed that cultures derived from different LGMD2B patients consisted mainly of fibroblasts and adipocytes and we were therefore unable to generate pure primary myoblast lines from them, as exemplified for DYSF1 in this report. However, in the present study it was possible to obtain a 100% myogenic population of IM DYSF1 after immortalization with hTERT and CDK-4 and subsequent cloning, highlighting the great potential of the immortalization procedure. We thoroughly characterized four immortalized LGMD2B patient cell lines harbouring different dysferlin mutations not only for myogenic differentiation but also functionally, and found no significant differences to their parental cell lines (Fig. 1 and data not shown). Therefore, the immortalization of primary human myoblasts represents a major advantage and may overcome the limitations mentioned above. In addition to LGMD2B, immortalized human myoblast lines have been successfully established from patients with other muscle diseases including facioscapulohumeral MD, Duchenne MD, congenital MD and oculo-pharyngeal MD [13] [15].
Human cell lines with unlimited proliferative capacity are useful tools in cell biology and for translational research. Up to now, only murine myoblast lines deficient for dysferlin have been available. GREG cells were derived from A/J mice lacking dysferlin expression [16], and a C2C12 cell clone with a stable dysferlin knock-down by shRNA has been established that expresses about 10% of residual dysferlin [17].
There are discrepancies in the literature about the potential of dysferlin-deficient myoblasts to fuse and to differentiate into myotubes. Dysferlin-deficient myoblasts have been described to differentiate fully, without any impairment compared to wild-type myoblasts with respect to myoblast fusion and activation of myogenic pathways. This strongly suggests that dysferlin does not play a role in early stages of myotube formation and subsequent maturation [7] [18] [19] [20] [21]. Instead, myoferlin, another member of the ferlin protein family, has been described to be essential for myoblast-myoblast and myoblast-myotube fusion [18] [22]. Myoferlin is expressed early during myoblast differentiation, whereas dysferlin is expressed only after the formation of multinucleated myotubes [18] [22]. By contrast, a reduced or delayed differentiation of dysferlin-deficient myoblasts into myotubes has been attributed to dysferlin deficiency by other groups [17] [23] [24]. These discrepancies might be caused by the use of different cellular models, species, culture conditions, in vitro cultivation time, experimental design or developmental differences (e.g. age of the donor patient) that finally result in a different myogenic potential and differentiation kinetics. For instance, it has been shown that differentiation kinetics of immortalized myoblast lines slow down with time in culture, probably due to constant selection for proliferation [15].
Being able to assess new therapeutic approaches is of great significance and requires proving the proper function of the restored protein. This can be achieved partially by analysis of the correct intracellular localisation and the size of the protein using immunochemical approaches. In the case of dysferlin, the assumption that dysferlin is indispensable in sarcolemmal repair opens the possibility for a direct functional assay by laser-mediated membrane wounding in cultured myotubes and myofibers.
We show here that myotubes derived from the immortalized dysferlin-deficient myoblast lines, e.g. IM DYSF1 and IM DYSF2, can be employed as a read-out tool of dysferlin functionality by laser-mediated wounding of the sarcolemma. Our results are in accordance with the earlier observed dysfunction of the membrane resealing process in the absence of dysferlin in myotubes and myofibers [6] [8] [14]. We conclude that the human immortalized dysferlin-deficient myoblast lines represent innovative tools to assess dysferlin functionality after application of pharmacological and genetic approaches to restore dysferlin.
Although we did not analyze cellular metabolism and regulation of cell cycle progression, we expect metabolic changes in immortalized myoblasts due to their high proliferative potential. However, this seems to have no influence on myogenic differentiation and dysferlin function in immortalized myoblasts and their corresponding myotubes, as demonstrated in this report.
In summary, the immortalized myoblast cell lines display properties highly similar to their parental cell lines with respect to myogenic differentiation, formation of multinucleated myotubes, development of a correct myofibrillar architecture and dysferlin protein expression. Dysferlin reveals unaltered subcellular localization and function in membrane repair in control cell lines, while it is perturbed in cell lines derived from LGMD2B patients. In addition, dysferlin-deficient myoblasts have been described to differentiate fully, suggesting that dysferlin does not play a role in early stages of myotube formation. Therefore, immortalized human myoblast lines harbouring different mutations in dysferlin represent a very useful tool to further investigate dysferlin function, to study the pathophysiological mechanisms involved in dysferlinopathy and, more importantly, to assess therapeutic strategies to correct dysferlinopathies with a reliable readout.
Competing interests
The authors have declared that no competing interests exist.
Address for Correspondence
Simone Spuler, MD, ECRC, Charité Campus Buch, Lindenberger Weg 80, 13125 Berlin, Germany
Fig. 1 :
Fig. 1: Comparison of immortalized human myoblasts and myotubes to their parental counterparts. The expression of myogenic differentiation markers was analysed by Western blot in dysferlin-deficient and wild-type human myoblasts and myotubes in primary cells (A) and after immortalization (B). A similar increase in MHC (200 kDa) and caveolin3 (23 kDa) was observed in primary (A) and immortalized (IM) (B) cell lines after differentiation into myotubes. Mutations in dysferlin result in a reduced expression (IM DYSF1 and DYSF2/IM DYSF2) or in a complete absence of dysferlin protein (230 kDa) (DYSF3/IM DYSF3 and DYSF4/IM DYSF4) in both myoblasts and myotubes. α-tubulin (50 kDa) has been used as a loading control.
Fig. 2 :
Fig. 2: Differentiation state of myotubes derived from human immortalized dysferlin-deficient and wild-type myoblasts. Immunofluorescence stainings show the formation of multinucleated myotubes expressing the myogenic differentiation markers desmin, α-actinin, MHC and caveolin3 in both wild-type and dysferlin-deficient myotubes. Disease-causing mutations in dysferlin result in a reduced expression (IM DYSF1 and IM DYSF2) or in a complete absence of dysferlin (IM DYSF3 and IM DYSF4). (bar, 50 µm)
Fig. 3 :
Fig. 3: Subcellular localization of muscle differentiation-specific proteins in myotubes derived from human immortalized dysferlin-deficient and wild-type clones of myoblasts. Immunofluorescence staining with either anti-MHC or with anti-α-actinin antibody revealed correct organization of the sarcomere and Z-lines, respectively, in all cell lines. Dysferlin was distributed to the plasma membrane in wild-type immortalized myotubes (IM WT2). Disease-causing mutations in dysferlin result in an intracellular aggregation (IM DYSF1 and IM DYSF2) or a complete absence of dysferlin protein (IM DYSF3 and IM DYSF4). (bar 5 µm, left panels; bar 20 µm, right panels)
| 2017-04-28T19:26:24.620Z | 2012-02-02T00:00:00.000 | {
"year": 2012,
"sha1": "6df99b2f3039456bca2146766b251973cc6a0f53",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc3274833",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "6df99b2f3039456bca2146766b251973cc6a0f53",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
229493203 | pes2o/s2orc | v3-fos-license | Study of bonding zone composite reinforced structures
A bonding zone between elements of a layered composite concrete structure was investigated. Results on the stress-strain state, nonlinear deformation and crack resistance of this zone are presented. The bonding zone was simulated using a developed methodology based on the finite element method. The problem was solved in a nonlinear formulation using elements with nonlinear material properties and one-sided contact elements. A fragment of a layered structure made of two concretes of different strength and subjected to shear forces was considered. Based on the available experimental data, the shear stiffness of the bonding zone was calculated. The deformation process of the composite structure was simulated taking into account its nonlinear deformation and the cracking process, up to the moment of its complete failure. The obtained results are compared with experimental data. The necessity of using the actual shear strength and deformation characteristics of the bonding zone when modeling the deformation process and evaluating the ultimate breaking load is shown.
Introduction
The use of reinforced concrete multilayer composite structures has increased substantially over the past two or three decades. This is a result of the use of three-layer building envelopes for thermal protection, and especially of the ever increasing number of reconstructions and the need to strengthen individual structural elements by their building-up or growing, including the use of rigid reinforcement. At the same time, the specifics of the deformation of such structures, despite a large amount of research in this field (see, for example, [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15]), are not taken into account strictly enough. The current codes for reinforced concrete structures [16,17] do not consider the specifics of nonlinear deformation with crack formation and sequential disconnection in the bonding zone (also known as the interelement concentration zone [18]); they provide only approximate estimates of deformability and strength. Moreover, there are no standardized deformation curves for the bonding zones between elements of composite concrete and reinforced concrete structures.
The goals of this work were to develop a numerical model of nonlinear deformation of the bonding zone between elements of layered and composite reinforced concrete structures made of concretes of different strengths (for example, a composite beam or a layered wall panel), to determine the deformation properties of bonding zones, and to compare the results with available experimental data.
Materials and methods
The object of our study is a fragment of a plane-stressed layered composite structure consisting of two concrete prisms connected by a bonding zone of thickness t and formed by layer-by-layer concreting (figure 1). The upper prism is made of heavy-weight concrete of class B20 (according to Russian building codes) and has a cross-section of 100 × 100 mm and a length of 590 mm. The lower prism is made of lightweight concrete of class B5 with a density of 1200 kg/m³ and has a cross-section of 200 × 100 mm and a length of 600 mm.
The choice of the object of study was determined by the fact that an experiment had been conducted on such a structure [19]. The deformation process of the structure was simulated with the finite element method in the Lira-SAPR software package. Elements with nonlinear material properties and nonlinear elements representing contact and sliding between nodes were used to build the model of the composite structure.
A regular mesh of rectangular plane-stress finite elements was used to create the design model of the structure. Orthotropic linear elements were used to simulate the bonding zone; for these, the shear modulus can be specified independently of the moduli of linear deformation.
Multilinear stress-strain diagrams were used to describe the plasticity of the concretes. The parameters of the diagrams were taken from the Russian building codes [16]. The boundary conditions of free support of the structure on the lower surface were modeled by one-sided contact elements (see figure 1) so that they corresponded to the experimental scheme as closely as possible. The necessity of using such elements is demonstrated by the results of the calculations performed (see figure 2), which show that the position of the contact zone between the structure and the base shifts significantly in the process of nonlinear deformation.
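As an illustration of what a multilinear material description amounts to, the sketch below evaluates a piecewise-linear stress-strain diagram by interpolating between break points. The break-point values are placeholders chosen for the example, not the normative parameters of [16].

```python
import numpy as np

# Illustrative break points of a multilinear concrete diagram in compression
# (placeholder values, not the normative parameters of [16]).
eps = np.array([0.0, 0.0005, 0.0010, 0.0020, 0.0035])  # strain
sig = np.array([0.0, 7.0, 11.0, 14.5, 15.0])           # stress, MPa

def stress(strain):
    """Piecewise-linear interpolation between break points; beyond the last
    point the stress is held at its final value (a plateau)."""
    return np.interp(strain, eps, sig)

print(stress(0.0008))  # -> 9.4 MPa, on the second linear branch
```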
The structure was loaded with a continuous load p on the area shown in figure 1. The nodes of this area were coupled to impose equal displacements in the direction of the x-axis. This allowed us to simulate the application of the load through a rigid stamp, just as in the experiments [19]. During loading, the cracking process was simulated. For this purpose, at each loading step, the material strength was checked based on the stress values calculated at the center of each finite element. The criterion for the appearance of a crack in an element was the shear stresses reaching their limiting value.
After the appearance of a crack, the element was removed, and a special one-sided contact finite element (a gap element) with Coulomb friction in the tangential direction was installed between the edges of the crack. This element transmits forces in the direction across the crack in the case of compression, and friction between the crack edges when they are in contact. Under tensile forces across the crack, there is no interaction between the crack edges owing to the one-sided behaviour of this element. The results of the calculations performed with these elements confirm the rationality of using them in the model: during loading of the cracked structure, both areas where the crack edges were in contact with each other and areas where they had no contact were found (see figure 2).
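The step-by-step procedure just described can be summarized as a load-increment loop. The following structural sketch uses our own hypothetical `model` object and method names; it is not the actual API of the Lira-SAPR package.

```python
# Schematic of the step-by-step loading with crack simulation (hypothetical
# model interface, shown only to fix the logic of the procedure).
def load_to_failure(model, p_step, tau_lim, max_steps=200):
    for step in range(1, max_steps + 1):
        model.apply_load(p_step * step)
        model.solve_nonlinear()
        # crack criterion: shear stress at the element centre reaches tau_lim
        for elem in model.bonding_zone_elements():
            if abs(elem.centre_shear_stress()) >= tau_lim:
                edges = model.remove_element(elem)
                # gap element: carries compression across the crack and
                # Coulomb friction tangentially, but nothing in tension
                model.insert_gap_element(edges, friction="coulomb")
        if model.has_through_crack() or not model.can_transfer_load():
            break  # termination criteria described in the text above
    return model.current_load()   # the ultimate (failure) load
```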
Step-by-step loading was carried out until a through crack appeared or until significant areas of destroyed elements formed, making it impossible to transfer forces to the structure or for the structure to interact with its constraints.
Results and discussion
It is known [18] that the bonding zone of a layered composite structure has a deformation behaviour different from that of the materials of the connected elements. Therefore, the value of the shear modulus of this zone was calculated at the first stage of the research. For this purpose, an iterative calculation was performed, during which the value of the shear modulus of the finite elements located in the bonding zone was adjusted. At the same time, the linear modulus of deformation was not changed; it was taken equal to the linear modulus of deformation of the materials from which the elements of the composite structure are made.
In the process of changing the shear modulus, the values of shear strain γ in the middle of the bonding zone (in section 2 of figure 3) were calculated and compared with the values obtained in the experiment [19]. When the displacement values converged, the achieved value of the shear modulus was taken as the deformation characteristic of the bonding zone. The shear modulus was determined under relatively small load values (2.6 kN), at which the deformation process is close to linear. As a result, a shear modulus of 400 MPa was obtained for the simulated structure. This value is much less than the shear moduli of the concretes from which the considered composite structure was made: 3167 MPa for B5 and 11460 MPa for B20. At the second stage of the research, the process of deformation of the considered composite structure was simulated taking the cracking process into account. During loading, up to the moment of ultimate failure, the values of shear strains were checked in three sections along the length of the bonding zone (figure 3). The value of the ultimate shear strength was taken according to the formula given in [20], where R_b is the compressive strength of concrete and R_bt is the tensile strength of concrete.
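The iterative adjustment of the shear modulus amounts to a one-dimensional search. A minimal sketch is given below, assuming that the computed shear strain decreases monotonically as the modulus grows; `run_fem` is a hypothetical wrapper that re-runs the finite element model with a given bonding-zone modulus and returns γ in section 2 under the 2.6 kN load.

```python
def calibrate_shear_modulus(run_fem, gamma_exp, lo=100.0, hi=5000.0, tol=1.0):
    """Bisection on the bonding-zone shear modulus G (MPa) until the computed
    shear strain matches the experimental value gamma_exp."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if run_fem(mid) > gamma_exp:   # zone too compliant -> stiffen it
            lo = mid
        else:                          # zone too stiff -> soften it
            hi = mid
    return 0.5 * (lo + hi)             # ~400 MPa for the structure considered
```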
The values of shear strain obtained as a result of the calculations are shown in the diagrams in figure 5(a), together with the experimental data [19].
The analysis of these data shows that the calculated load-bearing capacity of the composite structure (an ultimate failure load of about 20 kN) exceeds the values obtained in the experiment (an ultimate failure load of 13-14 kN).
In addition, the type of structural failure obtained in the numerical simulation differs significantly from that observed in the experiment.
Thus, as a result of step-by-step loading, it was found that the structure fails as a result of concrete crushing near the area of application of the load. Moreover, a horizontal crack of limited length forms along the bonding zone on the side of the concrete with lower strength (B5). A through crack does not form along the junction of the elements of the composite structure. This indicates that equating the strength characteristics of the bonding zone to those of the connected elements is an erroneous approach.
At the last stage of the research, the average values of stresses along the contact zone at the time of failure, obtained in the experiment [19], were taken as the shear strength of the simulated bonding zone of the composite structure. They averaged 0.235 MPa, which is significantly lower than the shear strength for concrete B5 (0.8 MPa).
At this stage, the results of the numerical simulation of the deformation process, taking the cracking process into account, showed better agreement with the results of the experiment.
Thus, the obtained deformation curves are closer to the experimental ones (see figure 5(b)). At the same time, the remaining difference in the strain values is primarily a result of the constancy of the shear modulus of the bonding zone assumed in the model, while it should progressively decrease, as the experimental diagram shows. The type of cracking and failure process obtained from the calculation of the simulated structure fully matches that observed in the experiment. Thus, the failure of the composite structure occurs as a result of the formation of a through longitudinal crack passing through the bonding zone between the elements. In this case, no breakage occurs in the connected elements. The calculated value of the failure load (10.2 kN) is slightly lower than that recorded in the experiment (14 kN). This is probably the effect of a number of errors related to the applied strength criterion, the numerical method of calculation, as well as the difference between the actual deformation characteristics of the concrete and the standard values we adopted [16].
At the same time, the failure load obtained from the calculation is slightly lower than the experimental one and thus provides a safety margin, which allows us to conclude that the proposed methodology can be used for estimating structural strength.
Conclusions
A technique was developed for finite element modelling of the stress-strain state and the cracking process of the bonding zone in composite reinforced structures.
The shear stiffness of the bonding zone is numerically determined taking into account the stress pattern in the structure.
The necessity of using reduced shear strength characteristics and a reduced shear modulus for the bonding zone, instead of the characteristics of the contacting elements, when simulating the deformation process and assessing the ultimate load is shown.
The results obtained with the developed calculation methodology demonstrate adequate agreement with the experimental data. This allows us to recommend it for nonlinear calculations of layered composite concrete structures. | 2020-11-19T09:13:56.545Z | 2020-11-18T00:00:00.000 | {
"year": 2020,
"sha1": "81a0abe0ba426608b0895298d05a74fb1a55cf5d",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/962/2/022065",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "662d6474cdb703e2606c8ef7286fa11eb70665db",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
210835515 | pes2o/s2orc | v3-fos-license | Socially Responsible Investment and Market Performance: The Case of Energy and Resource Companies
Do financial markets reward the energy and resource companies for adopting socially responsible practices? In this study, we investigate the stock market performance of major international energy and resource firms, classified within the socially responsible investment (SRI) category, from 2005 to 2016. We simulate investments in the portfolios of the SRI energy and resource companies stocks during this 11-year period and we further assess their risk-adjusted performance. The returns of the energy and resource SRI portfolio as a whole were neither consistently superior nor inferior to those of the benchmark indices. However, there exist substantial differences across the individual sub-sectors. The overall results show that markets do not reward or penalize the energy and resource firms for their SRI attitudes. We also find that the crude oil price consistently had a significant influence on the stock returns of the SRI energy and resource companies.
INTRODUCTION
The global and national energy and environmental policy debates are increasingly shaped by the need to balance the competing objectives of economic efficiency, sustainability and affordability. Many energy and resource firms have noted the social and political changes in their environment and view the pursuit of profit for shareholders combined with social and environmental responsibility as part of their long-term corporate strategies. The recent developments in global climate agreements (e.g. COP21) and the emergence of the notion of 'stranded carbon' are examples of such contextual changes. The new operating environment represents a major departure from the "business-as-usual" conduct of business as these firms move from a production function of only private goods towards joint production of private and public goods.
From the theoretical point of view, firms undertake sustainable investments to improve their image, secure comparative advantage and maximise profits for their shareholders. This is particularly the case for energy and resource firms since they increasingly find themselves at the centre of the sustainability debate. However, empirical evidence about their performance in financial markets remains scarce. It is, therefore, worthwhile to examine whether the market rewards or penalizes this departure from the conventional profit maximisation model.
In order to achieve sustainable energy economy objectives, it is important to decouple energy use and its related emissions and environmental impacts from economic activity. Therefore, not only governments but also energy and resource firms have a crucial role to play through their actions and investments (see, e.g., IEA, 2014). In recent years, many major companies have adopted Socially Responsible Investment (SRI) principles as a strategic tool and self-regulation mechanism for improving corporate image and gaining competitive advantage. SRI has grown drastically over the past three decades. The US Forum for Sustainable and Responsible Investment reports that the assets invested in SRI companies in the US increased by over 1260% to $8.72 trillion between 1995 and 2016 (a compound annual growth of 13.25%), representing nearly 22% of the $40.3 trillion total assets under management (USSIF, 2017). The number and value of SRI funds have also increased significantly, which has led to the creation of SRI indices such as the Calvert Social Index, the Domini 400 Social Index, the FTSE4GOOD Social Index and the MSCI ESG Social Indices.
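The growth figures are internally consistent, as a quick check shows: a 1260% increase over the 21 years from 1995 to 2016 implies roughly the quoted 13.25% compound annual growth.

```python
# Sanity check of the USSIF growth figures.
final, increase, years = 8.72, 12.60, 21       # $ trillion, fold increase, years
initial = final / (1 + increase)               # about $0.64 trillion in 1995
cagr = (final / initial) ** (1 / years) - 1
print(f"1995 base: ${initial:.2f}T, CAGR: {cagr:.2%}")  # ~13.2% per annum
```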
However, it is not clear from the literature whether investments according to the SRI principles provide higher, lower or similar returns in comparison with conventional stocks (see, e.g., the review studies by Margolis and Walsh (2003), Orlitzky et al. (2003) and more recently by Revelli and Viviani (2013)). In particular, the literature about the effect of SRI on performance of energy and resource firms is remarkably scarce (see, e.g., Jenkins and Yakovleva (2006), Frynas (2009) and Zhao (2015) for rare exceptions) and the available findings are inconclusive.
Our paper contributes to the literature on SRI investments and firms financial performance in general and in the case of energy and resource firms in particular. To the best of our knowledge, this paper is the first such study to analyse SRI investments in energy and resource companies on a global scale using international data from several markets in different geographical regions covering all six continents. We present novel empirical findings on the performance of international energy and resource SRI stocks. The findings are important for energy market and financial market researchers. In particular, they are of relevance for energy policymakers and for the investors in energy and resource firms.
We analyse the performance of energy and resource SRI companies on the international stock market and we simulate an investment in portfolios of such firms. We calculate raw returns of the energy and resource SRI stocks portfolios and analyse their performance using the Fama-French (1992) and Carhart (1997) multi-factor models. Furthermore, we control for changes in the oil price by including the crude oil price returns in our Fama-French and Carhart estimations. We also measure the performance of the portfolio using risk-adjusted techniques, such as the Modified Sharpe Ratio (MSR) and the Certainty Equivalent (CEQ) returns. Moreover, by evaluating the profitability of stocks portfolios in the variants with and without dividends, we can extract the effect of dividends on their total returns. Finally, we analyse the performance in individual sub-sectors and examine the relation between the investigated stocks returns and the changes in the levels of the crude oil price.
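To make the evaluation toolkit explicit, a minimal sketch of the three ingredients follows: a Carhart four-factor regression augmented with crude-oil returns, the modified Sharpe ratio based on Cornish-Fisher VaR, and the certainty-equivalent return of a mean-variance investor. Variable names, the 5% VaR level and the risk-aversion coefficient are our illustrative choices, not parameters taken from the paper.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm, skew, kurtosis

def carhart_oil_alpha(r_p, rf, mkt_rf, smb, hml, mom, r_oil):
    """Carhart four-factor model augmented with a crude-oil return factor;
    returns the estimated alpha and the full OLS results."""
    y = np.asarray(r_p) - np.asarray(rf)
    X = sm.add_constant(np.column_stack([mkt_rf, smb, hml, mom, r_oil]))
    res = sm.OLS(y, X).fit()
    return res.params[0], res

def modified_sharpe(r_p, rf, level=0.05):
    """Sharpe ratio with Cornish-Fisher modified VaR in the denominator."""
    x = np.asarray(r_p) - np.asarray(rf)
    z = norm.ppf(level)
    s, k = skew(x), kurtosis(x)        # kurtosis() returns excess kurtosis
    z_cf = (z + (z**2 - 1) * s / 6 + (z**3 - 3 * z) * k / 24
            - (2 * z**3 - 5 * z) * s**2 / 36)
    mvar = -(x.mean() + z_cf * x.std(ddof=1))   # modified value-at-risk
    return x.mean() / mvar

def ceq(r_p, rf, gamma=1.0):
    """Certainty-equivalent excess return of a mean-variance investor."""
    x = np.asarray(r_p) - np.asarray(rf)
    return x.mean() - 0.5 * gamma * x.var(ddof=1)
```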
The performance of energy and resource SRI stocks portfolios is assessed by comparisons with major global benchmarks, including broad market indices as well as the energy market, the SRI market and the alternative energy market sector indices (S&P Global 1200, MSCI World Energy, FTSE4GOOD and FTSE ET50 Index). The study encapsulates bull and bear market phases and allows the assessment of the impact of those market conditions on the profitability of energy and resource SRI stocks portfolios. We identify bull and bear market periods using the concept of non-overlapping "bull" and "bear" phases based on major peaks and troughs in the stock market indices, presented in Gooding and O'Malley (1977) and in Woodward and Anderson (2009), i.e. based on the price variability of indices and their long-term trends. Our sample is composed of global energy and resource stocks, hence we rely on the examination of bull and bear market phases of the S&P Global Index and the MSCI World Energy Index.
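A simple way to operationalize the peak/trough phase definition is a running-extreme rule: a new phase is confirmed once the index retraces a fixed fraction from its last extreme. The sketch below follows that logic; the 20% threshold is our illustrative choice rather than the exact rule of Gooding and O'Malley (1977) or Woodward and Anderson (2009).

```python
import numpy as np

def bull_bear_phases(prices, min_swing=0.20):
    """Label each observation of a price index as 'bull' or 'bear'.
    A reversal is confirmed once the index moves `min_swing` away from the
    running extreme; the phase boundary is placed at that extreme."""
    prices = np.asarray(prices, dtype=float)
    labels = np.empty(len(prices), dtype=object)
    up, ext, ext_i, start = True, prices[0], 0, 0
    for i, p in enumerate(prices):
        if (p > ext) == up and p != ext:          # new running max/min
            ext, ext_i = p, i
        retrace = (ext - p) / ext if up else (p - ext) / ext
        if retrace >= min_swing:                  # reversal confirmed
            labels[start:ext_i + 1] = "bull" if up else "bear"
            up, start = not up, ext_i + 1
            ext, ext_i = p, i
    labels[start:] = "bull" if up else "bear"
    return labels
```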
The next section presents the conceptual framework of our study. Section 3 reviews the relevant literature, which relates mainly to market performance of stocks and portfolios within the context of social responsibility. Section 4 presents a theoretical discussion of the effect of limiting the size of stocks portfolio due to imposition of the stocks exclusion criteria. Section 5 provides an overview of the data and the methodology. Section 6 presents and discusses the empirical results. Section 7 concludes the paper.
CONCEPTUAL FRAMEWORK
The conceptual framework of our analysis relies on two competing theoretical views about the profitability of investments in the SRI stocks.
The literature pointing towards a negative relationship between SRI and stock returns proposes two possible explanations. First, the cost of social responsibility is an extra expense for firms that reduces profitability. However, the SRI supporters argue that, over time, this extra cost is traded off by gains in reputation. Second, focusing on SRI companies as a subset of stocks reduces benefits of diversification (e.g., when stocks of tobacco companies are excluded from portfolios), which may result in lower risk-adjusted returns. On the other hand, the proponents of SRI argue that the excluded companies are engaged in unsustainable products or services that will make them less profitable anyway over time. These arguments are supported by many empirical studies that do not find meaningful differences between the performance of SRI and non-SRI stocks (see, for example, the results in Revelli and Viviani, 2013).
There is also a stream of literature that advocates a positive relationship between SRI and stock returns. The conceptual argumentation in this case is related predominantly to the instrumental stakeholder theory and the slack resources theory. Instrumental stakeholder theory postulates that companies aim to satisfy various stakeholder groups and that the resulting stakeholder-management relationships serve as monitoring and enforcement mechanisms leading to various positive side-effects, such as the increased efficiency of the firm's adaptation to external demands as well as better overall financial performance (Freeman and Evan, 1990;Hill and Jones, 1992;Jones, 1995;Clarkson, 1995).
Slack resources theory argues, in turn, that positive financial performance allows companies to become more socially responsible because it provides them with additional resources necessary to engage in corporate social responsibility, which usually requires availability of substantial excess funds (see Ullmann, 1985;McGuire et al., 1988;Waddock and Graves, 1997).
Other theoretical and conceptual views postulate that socially responsible companies are likely to benefit from different "mediating effects", such as improvement of reputation, better relations with financial institutions and investors as well as easier access to capital or even lower cost of capital (Spicer, 1978;Fombrun and Shanley, 1990). Further positive consequences of reputational effects, such as increase in employees' goodwill, may lead to improvement of the firm's financial performance (Davis, 1973;McGuire et al., 1988;Waddock and Graves, 1997).
There exist different channels through which the financial performance of SRI companies can be improved, for example through higher sales, better profitability or a lower cost of capital. Moreover, in the theoretical literature there are two other themes related to these conceptual discussions whose respective arguments have been tested empirically, i.e. the existing studies have also attempted to verify: (1) whether socially responsible and ethical attitudes increase the costs of firms' operations, which leads to a negative impact on their financial performance and (2) whether social responsibility can be afforded by firms that already have good financial performance, which leads to feedback effects and further improvement of their financial situation.
In this study, we examine the main theoretical conjectures discussed in the above section. We empirically analyse the performance of portfolios composed of the SRI energy and resource companies stocks relative to benchmark portfolios using the data from major international financial markets.
RELEVANT EMPIRICAL LITERATURE
Theories and concepts of SRI have been evolving over time. In a review of social responsibility research, Lee (2008) found that the SRI debates have moved from the macro level to the micro (organisational) level over the last six decades. The literature in the 1950s and 1960s viewed social problems as a matter for politicians and civil society only. In the 1970s and 1980s, however, the literature began to investigate the relationship between the social responsibility of firms and their financial performance. The practice of SRI-oriented financial investment has since evolved and triggered more research. In a 2010 survey of 107 money managers on socially responsible investment, at least half of the respondents saw social responsibility as a way to manage portfolio risk or to improve long-term performance (Voorhes and Humphreys, 2011).
The early research on the relationship between SRI and financial performance includes the seminal studies by Moskowitz (1972) and Vance (1975). While Moskowitz (1972) found a positive relationship between social responsibility and financial performance, Vance (1975) reported a negative relationship between them. However, neither study included an analysis of risk-adjusted returns; this was later carried out by Alexander and Buchholz (1978), who used the social responsibility ranking data from Vance (1975) and applied CAPM models to capture the market risk factor, yet did not find a statistically significant relationship between social responsibility and stock market performance.
Our paper compares the performance of portfolios which are possible to construct by a private investor (i.e. stocks meeting certain screening criteria related to socially responsible investment). Thus, we next focus on the literature on market return and performance of stocks and portfolios within the context of socially responsible business. Margolis and Walsh (2003) and Orlitzky et al. (2003) reviewed the studies about the performance of SRI stocks and portfolios. They found that 54 papers showed a positive relationship with financial performance, while 28 others did not evidence any statistically significant relationship. A further 20 papers showed mixed findings, whereas seven papers found a negative relationship. Orlitzky et al. (2003) used a meta-analysis of 52 studies yielding a sample size of 33,878 observations and found a positive correlation between social responsibility and financial performance, although the evidence appeared stronger for accounting based financial performance indicators compared to market based indicators. Derwall et al. (2005) used eco-efficient screening criteria of creating more goods and services using fewer resources and yielding less waste and pollution. Their study, covering US data from 1995 to 2003, found that the high eco-efficiency portfolio provided substantially higher average returns than the low eco-efficiency portfolio. Differences in market sensitivity, investment style or industry-specific factors could not explain the performance differential and the results remained significant for transaction costs up to 200 bps. Derwall et al. (2005) suggested that the superior performance of a portfolio, constructed using environmental considerations as a key factor, could be a case of the market mispricing information on the ecological performance of companies. Kempf and Osthoff (2007) presented a trading strategy in which they simulated trades relying on buying stocks with higher ratings for social responsibility and selling those with lower ratings. They found an alpha of 8.7% per annum for investors employing the "best-in-class" screening approach. The increased performance continued even after taking into account reasonable transaction costs. Likewise, Statman and Glushkov (2009) found that stock portfolios with high ratings on a broad range of social responsibility characteristics outperformed those with low ratings. Their study showed community, employee relations and the environment to be among the key screening factors that influenced performance. Ambec and Lanoie (2007) examined several studies in which portfolio analysis was applied to examine whether SRI funds (or indices) exhibit different performance from funds in a more general investment context. A majority of them (11 out of 16 papers) did not find statistically significant differences between the performance of the SRI funds and conventional ones, while in five studies the SRI funds outperformed. Ambec and Lanoie (2008) found companies benefitting from environmental performance. They showed positive links between environmental and economic performance, citing examples of better opportunities for cutting costs and increasing revenues by environmentally friendly companies. Humphrey et al. (2012) investigated whether corporate social performance ratings have a systematic effect on the market based financial performance and risk of firms. Their tests covered UK companies over the period 2002-2011.
They found no difference in the risk-adjusted performance of portfolios among firms which had high and low corporate social performance ratings. Galema et al. (2012) concluded that when considering the entire efficient frontier and not imposing any short sales restrictions, socially responsible US investors are generally worse off in mean-variance terms. However, they suffer only due to foregone risk reduction opportunities and not because of foregone returns. In addition, when short sale constraints are introduced, investors are no longer worse off by engaging in socially responsible investing activities. Brzeszczyński and McIntosh (2014) analysed the performance of British SRI stocks in the period 2000-2010. Using the "Global 100 Most Sustainable Corporations in the World" list to select sustainable companies, they found the average returns of SRI firms to be higher than those of market indices. The positive performance was also evidenced by risk-adjusted measures (certainty equivalent returns and the modified Sharpe ratio) and a simple trading strategy beat the market indices even after the inclusion of different levels of transaction costs.
In a recent meta-analysis of 85 studies and 190 experiments, Revelli and Viviani (2013) investigated whether the inclusion of CSR and ethical criteria in the portfolio construction process is more profitable than conventional investment policies. They found that, compared with conventional investments, the consideration of CSR in stock market portfolios is neither a weakness nor a strength.
The analysis of the SRI samples used in the existing literature further highlights that previous studies have applied data for stocks from different industries, which is likely to have an impact on the results. Kempf and Osthoff (2007) and Statman and Glushkov (2009) exploited data for stocks from KLD ratings, which consist of firms from different industry sectors. Kempf and Osthoff (2007) divided the companies into 10 industries for their best-in-class approach of positive screening policy. Similarly, in Humphrey et al. (2012) the sample includes firms from 19 industries and Brzeszczyński and McIntosh (2014) also investigated stocks from more than 15 industry sectors. 1 Recent empirical studies from international markets, including those which analysed the performance of the SRI funds, the SRI stocks or the portfolios composed of SRI stocks (see, e.g., Lean et al., 2015; Auer, 2016; Auer and Schuhmacher, 2016; Syed, 2017; or Wu et al., 2017), also show mixed results.
Moreover, a recent paper by Riedl and Smeets (2017), based on surveys and incentivized experiments, found that both social preferences and social signalling effects can explain the SRI decisions, whereas financial motives play a less important role. Socially responsible investors expected to earn lower returns on SRI funds than on conventional funds and also to pay higher management fees. Hence, the results from Riedl and Smeets (2017) suggest that investors are willing to sacrifice some financial performance if they invest according to their social preferences. These findings support some of the theoretical considerations which we discussed above, and they are also consistent with much of the empirical evidence available in the literature from different international markets.
In summary, the review of the empirical SRI studies shows that the findings about the performance of SRI investments are inconclusive. Some of the existing evidence points towards superior performance of SRI investments (e.g. Derwall et al., 2005; Kempf and Osthoff, 2007; Statman and Glushkov, 2009), while many other available results differ (e.g. in Humphrey et al. (2012) a superior risk-adjusted performance could not be supported based on a range of market performance models) and those other papers do not confirm consistent outperformance.
THE EFFECT OF LIMITING THE PORTFOLIO SIZE DUE TO EXCLUSION CRITERIA
The effect of the exclusion of stocks from the pool of all the stocks available in any given market is a priori uncertain and, in fact, such a decision can lead to either lower returns or higher returns of the constructed portfolio (or it can result in no change at all). Below we provide a theoretical discussion of this issue and we also demonstrate what happens if the exclusion criteria are imposed on a group of stocks (in the case which is the subject of investigation in our study, on the non-SRI energy and resource companies stocks) under different possible scenarios.
Let R_i denote the return for stock i in a market composed of a total number of I stocks, where i = 1, 2, 3, ..., I. The stocks are classified into two groups: SRI energy and resource companies 1 stocks, denoted by j = 1, 2, 3, ..., J, and all other stocks which do not meet the SRI energy and resource companies stocks selection criteria, denoted by k = 1, 2, 3, ..., K. These two sets are mutually exclusive and fully complementary, i.e. the relation J + K = I must always hold.

1. Methodologically, it is not clear how the effect of performance of stocks from different industries (which may, again, have different degrees of social responsibility etc.) is captured by the commonly applied tools, such as the estimations of multi-factor models. We simplify this problem by using only companies that are focused on the production and supply of energy and related resources (e.g., oil, gas, water and minerals), all of which are characterised by substantial social and environmental responsibility and have been screened as SRI. This sample selection allows us to observe the performance of large and well-established SRI firms, making our study novel and different from others in the existing literature.
The return achieved from the market portfolio R_p composed of all stocks I available in any given market is:

R_p = ∑_{i=1}^{I} w_i R_i    (1)

where R_i are the returns of stocks i, R_j are the returns of the SRI energy and resource companies stocks j and R_k are the returns of the non-SRI energy and resource companies stocks k. The respective weights are: w_i (for i = 1, 2, 3, ..., I), w_j (for j = 1, 2, 3, ..., J) and w_k (for k = 1, 2, 3, ..., K) and they must always sum up to 1 within each group, i.e.: ∑_{i=1}^{I} w_i = 1, ∑_{j=1}^{J} w_j = 1 and ∑_{k=1}^{K} w_k = 1. The effect of the exclusion of one of the two groups of the distinguished stocks, in this case the removal of the stocks which do not meet the SRI energy and resource companies stocks selection criteria (k = 1, 2, 3, ..., K), on the portfolio return R_p is as follows:

If ∑_{j=1}^{J} w_j R_j > ∑_{i=1}^{I} w_i R_i, then the exclusion of the non-SRI energy and resource companies stocks is beneficial for the portfolio performance, because R_p increases after the removal of stocks k = 1, 2, 3, ..., K.

If ∑_{j=1}^{J} w_j R_j < ∑_{i=1}^{I} w_i R_i, then the exclusion of the non-SRI energy and resource companies stocks is detrimental to the portfolio performance, because R_p decreases after the removal of stocks k = 1, 2, 3, ..., K.

If ∑_{j=1}^{J} w_j R_j = ∑_{i=1}^{I} w_i R_i, the exclusion of the non-SRI energy and resource companies stocks does not have any effect on the return R_p of the portfolio.
Note that the above relationship holds under all possible combinations of returns, including the cases when the group returns ∑_{j=1}^{J} w_j R_j and ∑_{k=1}^{K} w_k R_k are negative. Moreover, this relation is also true regardless of the size of the groups, i.e. for any possible values of I, J and K. For R_j < R_k for all j and k, the exclusion of the non-SRI energy and resource companies stocks (all stocks k = 1, 2, 3, ..., K) is detrimental to the portfolio performance, because R_p decreases in all these cases. For R_j > R_k for all j and k, the exclusion of the non-SRI energy and resource companies stocks (all stocks k = 1, 2, 3, ..., K) is beneficial for the portfolio performance, because then R_p increases in all these cases.

However, an important practical issue to consider here that also determines the relation between ∑_{j=1}^{J} w_j R_j and ∑_{i=1}^{I} w_i R_i is the method of weighting stocks, i.e. how the weights w_i (for i = 1, 2, 3, ..., I), w_j (for j = 1, 2, 3, ..., J) and w_k (for k = 1, 2, 3, ..., K) are assigned. For example, the assumption of equal weights will inevitably lead to different values of ∑_{j=1}^{J} w_j R_j and ∑_{k=1}^{K} w_k R_k, and consequently to different R_p values, than the assumption of unequal weights. The portfolios constructed in practice by stock market investors may have either equal weights or weights different from an equal structure, i.e. they can be allocated through optimization procedures (e.g. based on the mean-variance relationship) or they can be determined by the size of the stocks (usually measured by their market capitalization) or some other criteria (e.g., the value of such indicators as P/E, P/BV or D/Y ratios etc.).
Hence, we can conclude that the effect of the exclusion of any one of the two groups of stocks from the broad market portfolio (in the case discussed in this paper the removal of the stocks which do not meet the SRI energy and resource companies stocks criteria) depends predominantly on the sign of the difference ∑_{j=1}^{J} w_j R_j − ∑_{i=1}^{I} w_i R_i, but also on the method of weighting stocks and the assumptions regarding the weights, which ultimately determine the return R_p as well as the values of ∑_{j=1}^{J} w_j R_j and ∑_{k=1}^{K} w_k R_k.
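A small numerical sketch may help to illustrate the above relations. The returns, weights and group membership below are entirely hypothetical; the code simply evaluates R_p before and after the exclusion of the non-SRI group, with the remaining weights re-normalised so that they again sum to 1.

```python
import numpy as np

# Hypothetical returns and weights for a 5-stock market: stocks 0-2 are the
# SRI energy and resource stocks (group j), stocks 3-4 are the non-SRI
# stocks (group k). All numbers are invented purely for illustration.
R = np.array([0.08, 0.03, 0.05, -0.02, 0.10])
w = np.array([0.25, 0.15, 0.20, 0.10, 0.30])   # weights sum to 1 over all i
sri = np.array([True, True, True, False, False])

R_market = np.sum(w * R)                        # R_p of the full market portfolio

# After excluding the non-SRI group, the remaining weights are re-normalised
# so that they again sum to 1 within the SRI group, as required in the text.
w_sri = w[sri] / w[sri].sum()
R_sri_only = np.sum(w_sri * R[sri])

# The sign of (R_sri_only - R_market) determines whether the exclusion
# was beneficial, detrimental or neutral for portfolio performance.
print(f"market return: {R_market:.4f}, SRI-only return: {R_sri_only:.4f}")
```

With these particular numbers the exclusion is detrimental (0.0575 against 0.0625), but swapping the hypothetical returns of the two groups would make it beneficial, in line with the sign condition above.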
Data
The sample selection process required us to analyse first the scope of business activity of all 344 companies from the Global-100 list, which appeared on it during the first 11 years since its launch in 2005. As the focus of this study is on energy and resource SRI stocks, from the Global-100 list we chose companies that: (a) produce energy, minerals and water, (b) produce energy related materials for consumption in energy or transport industry and (c) supply energy, minerals and water. 2 This selection led to the identification of the following industry groups: (1) Alternative Energy, (2) Electric Utilities, (3) Electricity, (4) Energy Equipment and Services, (5) Gas, Water and Multiutilities, (6) Industrial Engineering, (7) Mining, (8) Oil Equipment, Services and Distribution and (9) Oil and Gas Producers.
We used the energy and resource SRI stocks data from the compilation published by Corporate Knights based in Toronto, Canada, which annually prepares the "Global 100 Most Sustainable Corporations in the World" list. The basis for it is mainly quantitative, as it provides scores against 12 different KPIs 3 for all global publicly traded companies with a market capitalisation of at least US$ 2 billion. For example, the screening requires all the companies to pass nine different Piotroski F-Score (Piotroski, 2000) tests, which confirm the sound financial position of the firms. Similarly, under the sustainability disclosure principle, companies that fail to disclose at least 75% of the priority indicators for their respective GICS Industry Group are eliminated. As such, the list follows a rules-based construction methodology that ensures the selected firms are both financially sound and sustainable. The listing criteria also include screening against product category (e.g. tobacco is not included) and against sustainability-related fines, penalties or settlements.
We filtered all SRI companies based on the above categories and this procedure identified 56 SRI energy and resource companies in the 11-year period between February 2005 and January 2016. Table 1 presents the constituents of the SRI energy and resource companies stocks portfolio (henceforth referred to as: 'SRI E&RC stocks portfolio') used in this study. It also provides information about the country of origin, area of operation, number of employees and year of establishment.
2. The purpose of our study was to analyse the performance of stocks which simultaneously meet the SRI and energy/resource industry membership criteria. Therefore, we deliberately chose stocks which are at the intersection of the energy/resource and SRI selection rules. We do not test the performance of separate portfolios of energy/resource stocks and SRI stocks, however we investigate it indirectly by comparing the results of our portfolios with pure SRI and pure energy sector indices.
3. These key performance indicators (KPIs) are: Energy Productivity, Carbon Productivity, Water Productivity, Waste Productivity, Innovation Capacity, Percentage Tax Paid, CEO to Average Worker Pay, Pension Fund Status, Safety Performance, Employee Turnover, Leadership Diversity and Clean Capital Pay Link. More details are available at: www.corporateknights.com.

Source for Table 1: data collated by authors from companies' websites, annual reports and from Bloomberg.
As shown in Table 1, the list of the constituent companies in our SRI E&RC stocks portfolio consists of long established large firms. For example, BP Plc, Lonmin Plc, PG & E Corp, Teck Resources, Tokyo Gas, Umicore and Wartsila OYJ are more than a century old. There are a few companies that were founded more recently but have a long history. For example, the newest company on the list, Cenovus Energy Inc., formed in 2008, is a split from Encana, which descends from the 19th century Canadian Pacific Railway. Similarly, BHP Billiton was incorporated in 2001 but it was a merger of Billiton and BHP, which were established in 1860 and 1885, respectively. Likewise, Alumina Limited, established in 2002, is a demerger from WMC Resources, which had a history dating back to the 1950s.
Many of these companies have grown large over time and they have a presence in many countries (e.g., BP has operations in 80 markets). These firms contribute to the national economies and provide employment in communities. They produce gas, oil, minerals and electricity with a range of local and global environmental impacts. Therefore, these firms are widely believed to bear important social, economic and environmental responsibilities. The companies in our sample have more than 25,000 employees on average. Those firms with relatively few employees, such as Cairn Energy from the United Kingdom, which according to its annual report officially had 156 employees as of year-end 2016, also engaged several hundred contractors in 2016.
In terms of geographical distribution, the 56 stocks in our database come from 19 countries, of which the highest number of firms is from the UK (11 companies) followed by Canada (9 companies). There are 7 companies from the US and 4 each from Finland and Spain. Further, Australia and Brazil have 3 companies each. France, Japan and Norway are represented by 2 companies each and the remaining 9 countries have 1 company each. Given that most countries in the world have at least one energy company, the concentration of the Global-100 ranking in less than 10% of all countries worldwide is an indication that in many countries the SRI-related criteria do not seem to be met to a sufficient extent by energy companies. Figure A1 in the Appendix shows the countries and numbers of SRI energy and resource companies in the SRI E&RC stocks portfolio. Table 2 presents the constituent companies in the Global-100 list broken into numbers for each year.
The source of all the stock price and dividend data for the constituents of the analysed SRI E&RC stocks portfolio is Bloomberg.
We used the ticker symbol of the respective stock exchange, so the price was at first obtained in the currency of the country of the exchange and then we used the Bloomberg currency converting function to change both the stock price and dividends data into US dollars in order to maintain uniformity and consistency for the calculation purposes. Where stock prices and dividends were not quoted in the full currency unit (e.g., pound sterling quoted in pence), we converted them into the respective major unit of currency (e.g., to pound sterling) before applying the USD conversion.
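As a simple illustration of this conversion step, the sketch below converts a local-currency price (or dividend) series into US dollars, including the minor-unit adjustment for pence-quoted UK stocks. The function and its arguments are our own illustration of the logic, not the actual Bloomberg workflow.

```python
import pandas as pd

def to_usd(price: pd.Series, fx_usd_per_local: pd.Series,
           quoted_in_minor_unit: bool = False) -> pd.Series:
    """Convert a local-currency price (or dividend) series into US dollars.

    `fx_usd_per_local` is the USD value of one unit of the local currency,
    aligned on the same dates as `price`. If a stock is quoted in a minor
    unit (e.g. pence rather than pounds sterling), it is first rescaled
    to the major unit before the USD conversion is applied.
    """
    if quoted_in_minor_unit:
        price = price / 100.0     # e.g. pence -> pounds
    return price * fx_usd_per_local
```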
Similar to the approach used by Brzeszczyński and McIntosh (2014), the returns of the SRI portfolios were compared with the returns of various stock market indices. However, we extend this type of analysis by utilizing a larger number of comparable benchmarks. We employ four benchmark indices as opposed to only two (FTSE100 as the broad market and FTSE4GOOD as the SRI index) in Brzeszczyński and McIntosh (2014). Our selection of benchmarks captures stocks globally and covers the broad market as well as the energy market, SRI and alternative energy market sectors, which creates a broader perspective for the comparison purposes.

Note to Table 2: 'x' means that the respective company appeared on the Global-100 list in the indicated year(s) and, therefore, it is accordingly included in the sample for the analysis.
(1) Broad Market
As the broad market index, we employ the S&P Global 1200, which is a composite index comprising seven regional and country indices: S&P 500, S&P Europe 350, S&P/TOPIX 150 (Japan), S&P TSX 60 (Canada), S&P/ASX 50 (Australia), S&P Asia 50 and S&P Latin America 40. The S&P Global 1200 is calculated in US dollars. The index captures 70% of the global market capitalisation, covering 30 countries, which include all the countries of origin of the stocks in our SRI E&RC stocks portfolio except India and South Africa. The main selection criterion for the S&P Global 1200 is company size measured by stock market capitalisation. Hence, it contains predominantly large blue-chip firms. An additional selection criterion is stock liquidity, which is reviewed at a monthly frequency based on such indicators as the stock's annual value traded, its float turnover and the number of days traded. The S&P Global 1200 index also takes into account sectoral classifications and ensures balance between the 10 main broad economy sectors with respect to the Global Industry Classification Standard (GICS).
(2) Energy Market Sector
We include the MSCI World Energy Index as a benchmark for the energy sector. It is designed to capture the large and mid-cap segments across 23 Developed Markets (DM) countries (16 of which are common with the countries of origin of our SRI energy and resource companies stocks). Moreover, it maintains sectoral classifications among seven energy categories that are again common to our SRI E&RC stocks portfolio. The selection criteria are based on an index construction approach with a strong emphasis on index liquidity, investability and replicability, which allows for cross-regional comparisons across all market capitalisation sizes, sectors and style segments. Similar to the S&P Global 1200 index, securities in the MSCI World Energy Index are classified in the energy sector following the Global Industry Classification Standard (GICS).
(3) SRI Market Sector
In the SRI category, we use the FTSE4GOOD Global 100 (referred to, henceforth, as: FTSE4GOOD) index as a benchmark. It includes companies with high environmental, social and governance (ESG) ratings. The FTSE4GOOD index is designed to measure the performance of companies that meet globally recognised corporate responsibility standards. The selection criteria are revised on a regular basis to meet market expectations and reflect new developments in CSR practice. They rely on an extensive market consultation process and are approved by an independent committee of experts. The FTSE4GOOD inclusion criteria are split into five areas: (i) environment, (ii) human and labour rights, (iii) supply chain labour standards, (iv) countering bribery and (v) climate change. Each of them is further divided into three categories: (i) policy, (ii) management and (iii) reporting. Subsequently, there are indicators assigned to each of the policy, management and reporting subdivisions. The number of the indicators that a company must meet depends on whether it is classed as having high, medium or low impact in a particular area. Moreover, the FTSE4GOOD index excludes companies with business interests in some industries, such as tobacco producers or weapons manufacturers.
(4) Alternative Energy Market Sector
In the case of the alternative energy market sector, we employ the FTSE ET50 index, which is composed of global companies that are involved in clean energy related businesses. It is designed for the creation of index-tracking funds and derivatives, and is used as a performance benchmark. The selection criteria of the index lead to a diversified mix of clean energy production and clean energy technology and equipment provider companies. Therefore, during the selection process the stocks are screened and weighted to ensure that the index is investable and also sufficiently liquid for trading purposes. The FTSE ET50 index consists of companies from 17 countries (9 of which are common with the countries of domicile of our SRI energy and resource companies stocks). Furthermore, it maintains sectoral classifications among 8 industries, including oil and gas, materials and utilities, that are, again, common with the industry types of the companies in our SRI E&RC stocks portfolio.
We evaluate the performance of our portfolios against the four indices described above at both the price and the total return level.
First, we compare the results of the investment in the SRI E&RC stocks portfolio with the 'price index' (PI) versions of the four indices mentioned above. However, the SRI E&RC stocks portfolio includes dividend payments, which constitute income to the investors holding these stocks. Therefore, we also analyse the returns of the SRI E&RC stocks portfolio against the 'total return index' (TRI) versions of the four indices (i.e. the versions of the indices which include dividend payments) in order to ensure that the comparison is conducted on equal ground. On the other hand, the 'total return' versions of the indices are not commonly used by investors as conventional benchmarks. Hence, we also perform a direct comparison between the 'price index' versions of the indices and the SRI portfolios without dividends.
Methodology
The Global-100 list was used to construct portfolios of global socially responsible energy companies over the period from February 2005 to January 2016 (11 annual sub-periods) and their returns were compared to the returns of the respective benchmark indices. Since the Global-100 list is announced at the end of January each year, right before the meeting of the World Economic Forum (WEF) in Davos, we assumed that the first portfolio was constructed on the 1 st February 2005. The portfolios were then rebalanced each year on the last working day of January.
The selection procedure of stocks entering the portfolios was as follows. The companies identified on the Global-100 list entered the portfolio in the first year and the portfolio was held until the next Global-100 list was announced a year later. Stocks that no longer appeared on the Global-100 were removed from the portfolio and the companies new to the Global-100 list were included. Effectively, this means that we simulate the trades relying on buying stocks that appeared on the list and selling those that were removed from it. This procedure was repeated every year until the last year in our sample period.
As the Global-100 was an unranked list for a number of years (a ranking has only been provided since 2010) rather than an index, it had to be assumed that each stock has an equal weighting in the SRI portfolios. This means that a stock which remains in the portfolio from one year to the next when the total number of stocks in the portfolio changes requires an adjustment (either additional purchases or sales) in order to maintain the same equal weighting.
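A minimal sketch of this annually rebalanced, equal-weighted construction is given below, assuming hypothetical inputs: a DataFrame of each stock's simple holding-period return per portfolio year and a mapping from portfolio year to that year's constituent list. The function name and data layout are our own illustration.

```python
import pandas as pd

def simulate_sri_portfolio(annual_returns: pd.DataFrame,
                           constituents: dict[int, list[str]]) -> pd.Series:
    """Simulate the annually rebalanced, equal-weighted SRI portfolio.

    annual_returns: rows indexed by portfolio year (February to January),
                    columns are tickers, values are simple holding-period
                    returns in USD for that year.
    constituents:   mapping from portfolio year to that year's Global-100
                    energy and resource constituent tickers.

    Stocks dropped from the list are sold, new entrants are bought, and
    surviving stocks are topped up or trimmed back to weight 1/N, which is
    why each year's portfolio return is simply the cross-sectional mean
    of its members' returns.
    """
    out = {}
    for year, tickers in constituents.items():
        out[year] = annual_returns.loc[year, tickers].mean()
    return pd.Series(out, name="sri_portfolio_return")
```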
When a company was taken over and disappeared from the stock market during the life of our portfolios, we assumed that the proceeds were kept in a non-interest-bearing account until the portfolio was rebalanced. The reason for this assumption is that private investors are less likely to insist on reinvesting the proceeds and may keep them in their current account until the portfolios are re-shuffled.
The stock price data and dividend payments data were collected and included in the analysis of the SRI E&RC stocks portfolio performance. Data on prices and dividends was imported from Bloomberg.
As mentioned above, similarly to Kempf and Osthoff (2007) and Brzeszczyński and McIntosh (2014), the returns of the SRI portfolios are compared to the returns of market indices. The annual simple holding period returns for the SRI portfolios in two versions (with dividends and without dividends) as well as for the following indices: S&P Global 1200 (price index), S&P Global 1200 (total return index), MSCI World Energy (price index), MSCI World Energy (total return index), FTSE4GOOD (price index), FTSE4GOOD (total return index), FTSE ET50 (price index) and FTSE ET50 (total return index) were calculated for all 11 individual years, and average annual returns were computed for five-year sub-periods and for the overall 11-year period. In addition, we analyse returns in both bull and bear market periods.
The results in these sub-periods allow us to conduct a deeper analysis of the profitability of SRI portfolios and to perform further robustness checks. The annual return was determined as a simple holding period return with any dividends added. For the one-, five- and 11-year periods, average annual returns were calculated using the annual data. For other sub-periods, returns were calculated using monthly data and then annualised to make them comparable with other periods. Whether the differences between the returns of the SRI E&RC stocks portfolio and the benchmark indices were statistically significant was assessed by a t-statistic.
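The annualisation of monthly sub-period returns and the significance test can be sketched as follows. The paper does not spell out the exact test variant, so a one-sample t-test on the monthly return differences is shown here as one plausible reading.

```python
import numpy as np
from scipy import stats

def annualised_return(monthly_returns: np.ndarray) -> float:
    """Compound monthly simple returns and annualise the result."""
    total_growth = np.prod(1.0 + monthly_returns)
    return total_growth ** (12.0 / len(monthly_returns)) - 1.0

def return_difference_tstat(portfolio: np.ndarray, benchmark: np.ndarray):
    """t-test of whether the mean monthly return difference is zero."""
    diff = portfolio - benchmark
    return stats.ttest_1samp(diff, popmean=0.0)
```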
We also analyse the performance of the SRI E&RC stocks portfolio by using the most important risk-adjusted measures, such as the modified Sharpe ratio of Israelsen (2005) and the Certainty Equivalent returns (see, e.g., DeMiguel et al. (2009)), which were calculated for both versions of the SRI E&RC stocks portfolio (with and without dividends) and both versions of all four indices (total return indices with dividends and price indices without dividends).
The Sharpe ratio (Sharpe, 1966 and 1994) measures excess return per unit of total risk. However, the classical definition of the Sharpe ratio suffers from inaccuracy errors and incorrect assessment of risk when returns are negative in some sub-periods, so we calculated the modified Sharpe ratio (MSR) of Israelsen (2005):

MSR = ER / SD^(ER/|ER|)    (2)

where ER is the excess return defined as the mean monthly difference between the portfolio (or index) return and the risk-free return computed for n equal to 12, 60 or 132 months, respectively, and SD is the sample standard deviation of the monthly differences of returns. MSR is a commonly used measure to address the problem of negative returns and alleviates the problems with the traditional Sharpe ratio.
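A direct implementation of this formula (under our reading of the notes to Table 4) might look as follows; the function name and the zero-excess-return guard are our own additions.

```python
import numpy as np

def modified_sharpe_ratio(portfolio_returns: np.ndarray,
                          risk_free_returns: np.ndarray) -> float:
    """Modified Sharpe ratio of Israelsen (2005): MSR = ER / SD**(ER/|ER|).

    For ER >= 0 the exponent equals 1, so MSR reduces to the classical
    Sharpe ratio ER/SD; for ER < 0 the exponent flips to -1 (MSR = ER*SD),
    so that among loss-making portfolios the riskier one is ranked lower.
    """
    excess = portfolio_returns - risk_free_returns
    er = excess.mean()            # mean monthly excess return
    sd = excess.std(ddof=1)       # sample standard deviation
    if er == 0.0:
        return 0.0                # guard added for completeness
    return er / sd ** (er / abs(er))
```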
Certainty Equivalent (CEQ) returns are defined as:

CEQ_k = μ_k − (γ/2) σ²_k    (3)

where μ_k and σ²_k are the mean and variance of excess returns of a given portfolio or an index k and γ is the risk aversion parameter. The formulation of CEQ in (3) assumes a multi-period investor with quadratic utility. The 'normal' level of risk aversion is at the level γ = 1, while higher (lower) values of γ indicate higher (lower) levels of risk aversion.
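The CEQ computation in equation (3) is a one-liner; a sketch, with illustrative parameter names:

```python
def certainty_equivalent(mean_excess: float, var_excess: float,
                         gamma: float = 1.0) -> float:
    """Certainty Equivalent return of equation (3): CEQ = mu - (gamma/2)*sigma^2.

    mean_excess and var_excess are the mean and variance of the excess
    returns of a portfolio or index; gamma is the risk aversion parameter
    (1 = 'normal', 0.5 = lower, 2 = higher, as in the three variants used here).
    """
    return mean_excess - 0.5 * gamma * var_excess
```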
Finally, we estimate the parameters of the Fama-French three-factor model (Fama and French, 1992 and 1993):

R_pt − R_ft = α_p + β_1p RMRF_t + β_2p SMB_t + β_3p HML_t + ε_pt    (4a)

and the Carhart (1997) four-factor model:

R_pt − R_ft = α_p + β_1p RMRF_t + β_2p SMB_t + β_3p HML_t + β_4p MOMENTUM_t + ε_pt    (4b)

where R_pt is the return of the SRI portfolio in period t; R_ft is the risk-free return in period t; R_mt is the return of the world stock market index in period t and RMRF_t = R_mt − R_ft; SMB_t is the difference in return between small-cap and large-cap portfolios in period t; HML_t is the difference in return between high book-to-market stocks (i.e. value stocks) and low book-to-market stocks (i.e. growth stocks) in period t; MOMENTUM_t is the difference in return between the portfolio of stocks classified as having strong momentum and the portfolio of stocks classified as having weak momentum and ε_pt is the error term. The data for the explanatory variables used in models (4a) and (4b), i.e. for R_ft, R_mt, RMRF_t, SMB_t, HML_t and MOMENTUM_t, were obtained directly from the Ken French database. 4 Defined as Fama/French Global Factors and Portfolios, the factors data is constructed from the portfolios of stocks of 23 different countries. We adopted the factor data from the Fama/French Global Factors because 16 of the 19 countries of origin of the stocks in our portfolio are included in the Fama/French Global Factors country list.
Market factor is defined as the return of a region's value-weighted market portfolio minus the US one month T-bill rate. SMB is the equally weighted average of the returns of the three small stock portfolios for the region minus the average of the returns of the three big stock portfolios:
SMB = 1/3 (Small Value + Small Neutral + Small Growth) − 1/3 (Big Value + Big Neutral + Big Growth)    (5)
HML is the equally weighted average of the returns for the two high book-to-market (B/M) portfolios for a given region minus the average of the returns for the two low B/M portfolios:
HML = 1/2 (Small Value + Big Value) − 1/2 (Small Growth + Big Growth)    (6)
MOMENTUM is the equally weighted average of the returns for the two winner portfolios for a given region minus the average of the returns for the two loser portfolios:
MOMENTUM = 1/2 (Small High + Big High) − 1/2 (Small Low + Big Low)    (7)
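Given monthly return series for the underlying regional component portfolios, equations (5)-(7) translate directly into code. The column names below are hypothetical labels for the six size/book-to-market portfolios and the four momentum portfolios, not the exact identifiers used in the Ken French files.

```python
import pandas as pd

def construct_factors(p: pd.DataFrame) -> pd.DataFrame:
    """Build SMB, HML and MOMENTUM from the regional component portfolios,
    following equations (5)-(7). Each mean over columns reproduces the
    equally weighted averages 1/3 (...) and 1/2 (...) in the text."""
    smb = (p[["SmallValue", "SmallNeutral", "SmallGrowth"]].mean(axis=1)
           - p[["BigValue", "BigNeutral", "BigGrowth"]].mean(axis=1))
    hml = (p[["SmallValue", "BigValue"]].mean(axis=1)
           - p[["SmallGrowth", "BigGrowth"]].mean(axis=1))
    mom = (p[["SmallHigh", "BigHigh"]].mean(axis=1)
           - p[["SmallLow", "BigLow"]].mean(axis=1))
    return pd.DataFrame({"SMB": smb, "HML": hml, "MOMENTUM": mom})
```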
We also perform estimations of the Carhart (1997) model with crude oil returns as an additional control variable based on the following model:

R_pt − R_ft = α_p + β_1p RMRF_t + β_2p SMB_t + β_3p HML_t + β_4p MOMENTUM_t + β_5p OIL_t + ε_pt    (8)

where OIL_t is the return of the Brent oil price. Finally, we explore the impact of the crude oil price on the portfolio returns and we estimate the parameters of the following model:

R_pt = α_p + β_p OIL_t + ε_pt    (9)

as well as the model where the dependent variable is defined as the excess return of the SRI portfolio relative to the international stock market benchmark:

ER_pt = α_p + β_p OIL_t + ε_pt    (10)

where ER_pt is the excess return defined as the difference ER_pt = R_pt − R_mt and R_mt is the return of the world stock market index in period t.
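A baseline estimation of the extended Carhart specification in equation (8) can be sketched with ordinary least squares, assuming a monthly DataFrame with the hypothetical column names used below. As described later in the paper, the estimates are re-fitted with ARMA terms or GARCH errors when diagnostic tests flag autocorrelation or heteroscedasticity; plain OLS is shown only as the starting point.

```python
import pandas as pd
import statsmodels.api as sm

def estimate_carhart_with_oil(df: pd.DataFrame):
    """OLS estimation of equation (8): Carhart factors plus Brent oil returns.

    `df` is assumed to hold monthly observations with columns Rp (portfolio
    return), Rf (risk-free rate), RMRF, SMB, HML, MOMENTUM and OIL.
    The fitted intercept corresponds to alpha_p in equation (8).
    """
    y = df["Rp"] - df["Rf"]                                   # excess portfolio return
    X = sm.add_constant(df[["RMRF", "SMB", "HML", "MOMENTUM", "OIL"]])
    return sm.OLS(y, X).fit()
```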
In the next section, we present the results of the analysis of the raw returns of our SRI E&RC stocks portfolio and assess its performance relative to the selected benchmark indices as well as using the risk-adjusted measures described above.
Raw Returns
The results of the preliminary analysis based on raw returns for the entire portfolio of the SRI energy and resource companies stocks show that it did not outperform the broad market index or the other energy sector, SRI and alternative energy market indices in the 11-year sample period from February 2005 to January 2016. Table 3 presents average annual returns for our whole sample period of 2005-2016 based on the simulation of investment in the energy and resource companies from the Global-100 list compared to all four benchmark indices and also reports the values of the respective t-statistics.
Panel A in Table 3 reports the returns of the SRI E&RC stocks portfolio with dividends and the returns of the benchmark indices in their price index version. Such a comparison with stock market indices is often used by financial market investors as well as the business media, however it is not entirely accurate because price indices by definition do not include dividends, while any investment in stocks (e.g., in a portfolio of SRI energy and resource companies stocks such as the one investigated in our study) will in practice benefit from the dividends paid out by companies. Nevertheless, we start with this comparison because stock market performance relative to the price versions of stock indices is often discussed in the business media, so regardless of their validity for this purpose, they are important benchmarks against which our results should first be referred.
As Panel A in Table 3 shows, there is a slight outperformance of the SRI stocks portfolio relative to all four benchmarks. Its average annual return is 1.89%, which is higher than the respective average annual returns of 1.20%, -0.24%, -0.63% and -0.42%, although the differences are not statistically significant. However, counted over the individual years, the number of annual periods in which each contender achieved the best result is broadly equal across all 5 cases: the SRI E&RC stocks portfolio outperformed the index benchmarks 2 times, while the S&P GLOBAL 1200, MSCI WORLD ENERGY, FTSE4GOOD and FTSE ET50 price indices outperformed the others 2, 3, 2 and 2 times, respectively.

Panel B in Table 3 presents the returns of the SRI E&RC stocks portfolio with dividends and the returns of the benchmark indices in their total return versions. This comparison in Panel B is the most relevant one from the practical point of view, because it allows (unlike the results in Panel A) for a direct assessment of the same type of returns (in this case: the returns including the dividends) and it reflects the actual investment outcomes (unlike the results in Panel C which do not take into account the dividends).
Panel B in Table 3 shows that there is no clear pattern of outperformance by either the SRI stocks portfolio or any of the benchmark indices. In the full period from 2005 to 2016 the SRI stocks portfolio achieved an average annual return equal to 1.89%, while the return of the S&P GLOBAL 1200 TR Index was 3.64%, the return of the MSCI WORLD ENERGY TR Index was 2.44%, the return of the FTSE4GOOD TR Index was 1.13% and the return of the FTSE ET50 TR Index was 0.43%. The average value of the returns of these four benchmarks is 1.91%, which is almost exactly the same as the return of 1.89% for the SRI stocks portfolio. Moreover, as in the case of the results in Panel A, the number of annual periods with the superior result is also very similar in all 5 cases: the SRI E&RC stocks portfolio has beaten the index benchmarks 2 times, while the S&P GLOBAL 1200 TR Index, MSCI WORLD ENERGY TR Index, FTSE4GOOD TR Index and FTSE ET50 TR Index have beaten the others 2, 3, 2 and 2 times, respectively.
Finally, Panel C in Table 3 presents the returns of the SRI E&RC stocks portfolio without dividends and the returns of the benchmark indices in their price index version. The purpose of this comparison is to examine and compare the relative performance of our stocks which, at the same time, illustrates the impact of dividends on the SRI stocks portfolio and on the indices.
In the full period from 2005 to 2016 the SRI stocks portfolio without dividends achieved a negative average annual return equal to -1.36%, while the corresponding value for the best performing benchmark, the S&P GLOBAL 1200 Price Index, was positive and equal to 1.20%. The average value of the returns of the four benchmarks is -0.02%, which shows that dividends played a relatively more important role in the performance of the investigated portfolio than in the case of the benchmark stock market indices.
Apart from the calculation of the average annual returns and the returns in the single annual periods, we also investigated the performance in other sub-samples, i.e. in the rolling 5-year long periods and in the bull and bear market phases. 5 Overall, the performance during the multiple-year periods and during the bull and bear market conditions was mixed and the differences in returns were not statistically significant, although the S&P GLOBAL 1200 Price Index beat the others most often in the 5-year long rolling samples and also in the bear market phases, while the FTSE ET50 TR Index was the best performer in the bull market phases.
Although no clear evidence of overall outperformance could be detected in our results presented in all three panels of Table 3, the variation of returns over time across the individual years shows an interesting pattern. This variation can be explained by two major events in the global market: the global financial crisis and the changes in the level of the crude oil price.
First, the performance of the analysed SRI E&RC stocks portfolio was the worst in the annual period 2008/2009, which directly follows the global financial crisis of 2007/2008. The total return of the portfolio was -42.39%, while the return excluding dividends was -44.36%. Nevertheless, it needs to be emphasized that this result was still comparable with the changes of the benchmark indices over the same period, which also suffered severe losses. Hence, this negative performance is clearly related to a broader stock markets trend after the global financial crisis. Second, the worsening performance of portfolios starting from the annual period 2011/2012 onwards coincides in time with a decline of the crude oil price, which started to slide down from its peak in 2011 (which was the second peak in our whole sample period after its previous peak in 2008). The decrease of the crude oil price was first gradual and then substantially accelerated after 2013. More importantly, the performance of portfolios has been worsening also in relative terms after 2011. We interpret this effect as the impact of the declining crude oil price on the profitability of the many companies whose business directly (or indirectly) depends on crude oil price levels and whose stocks were part of our portfolios. 6

5. Bull and bear market periods have been identified using the idea of non-overlapping 'bull' and 'bear' phases based on major peaks and troughs found in the stock market indices, presented in Gooding and O'Malley (1977) and Woodward and Anderson (2009).

Table 3: Average annual returns for the whole 11-year period (February to January) from 2005 to 2016 for the SRI E&RC stocks portfolio and for the benchmark indexes: 1) Global broad market (S&P Global 1200), 2) Global energy market (MSCI World Energy), 3) Global SRI market (FTSE4GOOD) and 4) Global alternative energy market (FTSE ET50).

Notes for Table 3: In all tables (Table 3, Table 4, Table 5, Tables 7a, 7b, 7c and 7d, Appendix Tables A1 and B1-B4) bull and bear market periods have been identified using the idea of non-overlapping 'bull' and 'bear' phases based on major peaks and troughs found in the stock market indices, presented in Gooding and O'Malley (1977) and more recently in Woodward and Anderson (2009).
The Effect of Dividends
Our calculations also allow us to extract the impact of dividends on the SRI E&RC stocks portfolio performance; this can be done directly by comparing the results for the variants of the portfolio with and without dividends contained in the respective panels of Table 3.
The average annual return of the SRI E&RC stocks portfolio in the variant where the dividends were included in the calculations is 1.89%, while in the variant where the dividends were excluded it is -1.36%. This result has a very straightforward interpretation and also practical implications for stock market investors, which are as follows.
First, the difference in the annual average return, equal to 3.25 percentage points (i.e. 1.89% minus -1.36%), is large, which indicates that dividend payments matter to investors who allocated their funds in the stocks from our SRI E&RC stocks portfolio. Second, as mentioned in the previous section, the dividends play a relatively more important role in the performance of the investigated portfolio than in the case of the benchmark stock market indices. 7 Third, in terms of the qualitative conclusions it also makes a considerable difference whether dividends are included in or excluded from the calculations, because the annual average return is either positive or negative in these two cases, hence leading to either an overall investment profit or an overall investment loss.
Therefore, dividends appear to matter in the performance of the analysed SRI E&RC stocks portfolio and its individual stocks.
Our results also mean that the SRI energy and resource companies tend to pay relatively large dividends, which is another important finding of this study.
6. This finding is clearly supported subsequently by the estimation results of the parameters of the Carhart model with crude oil price returns as a control variable (discussed later in this paper). The respective estimates of the parameter for crude oil returns are statistically significant and positive, which means that the SRI E&RC stocks portfolio returns are indeed related to oil price returns in the same direction. Therefore, the negative oil price returns starting from the year 2011 are associated with negative returns of the SRI energy and resource stocks portfolios.
7. The comparison of data for average dividend yield indicators for different industries also supports this effect. For example, in our sample period the average dividend yield for the energy industry stocks from the MSCI World Energy Sector Index and the S&P Global 1200 Energy Sector Index was 2.84% and 2.94%, respectively, while it was as a rule lower for other industries, e.g. 2.38% for the financial industry stocks from the S&P 500 Financials Sector Index, 1.33% for the information technology stocks from the S&P Global 1200 Information Technology Sector Index or 2.16% for the health care stocks from the MSCI World Health Care Index.
Risk-Adjusted Performance
In the next step we turn towards the analysis of the risk-adjusted measures, such as the modified Sharpe ratio (MSR) and Certainty Equivalent (CEQ) returns, as well as the evaluation of the portfolio performance based on the Fama-French and Carhart models.
The values of the modified Sharpe ratio (MSR) are presented in Table 4. They show a similar pattern of worsening performance of the SRI E&RC stocks portfolio over time, which is also consistent with the evolution of the crude oil price.
The values of Certainty Equivalent (CEQ) returns are presented in Table 5 for three variants representing the normal risk aversion of investors (γ = 1), lower risk aversion (γ = 0.5, i.e. half of the normal risk aversion level) and higher risk aversion (γ = 2, i.e. double the normal risk aversion level). Similarly to Tables 3 and 4, they illustrate the same pattern of results for the profitability of the SRI E&RC stocks portfolio, with superior performance in the first two 5-year sub-periods (2005-2010 and 2006-2011) and then a substantial deterioration with subsequent underperformance in the next periods.
In the next step we move to the analysis of the Fama-French three-factor model and the Carhart four-factor model, which are the most widely used multi-factor models for explaining the performance of investment funds or stock portfolios. Due to space considerations, we focus here on the presentation and discussion of the more extended specification, i.e. the Carhart four-factor model, which encompasses the Fama-French three-factor model; all the results are available upon request.
In all regressions we first tested for the presence of possible seasonality. Next we performed tests for autocorrelation and heteroscedasticity. For autocorrelation we used the Ljung-Box Q test and for heteroscedasticity we applied the ARCH test of Engle (1982). When heteroscedasticity was present in a model, it was addressed by estimating an appropriate GARCH class model. Autocorrelation was removed by adding autoregressive (AR) and/or moving average (MA) terms.

Table 6a presents the estimation results of the parameters of the Carhart four-factor model represented by equation (4b). In the whole sample only the market factor RMRF_t is statistically significant (estimate of 1.07, significant at the 1% level). In the sub-samples, the RMRF_t variable is significant in all the 5-year long sub-periods and in most of the single-year sub-periods. The SMB_t, HML_t and MOMENTUM_t factors are mostly insignificant in the sub-samples.
The estimation results of the alpha (constant) parameter presented in Table 6a show that it is negative but statistically not significant in the full period from 2005 to 2016. In the shorter 5-year sub-periods, its estimates are positive and statistically significant 8 in the first two sub-samples (2005-2010 and 2006-2011), however they become negative and statistically significant in the last three sub-samples (2009-2014, 2010-2015 and 2011-2016). This pattern is entirely consistent with the findings presented earlier in Tables 3, 4 and 5 for the raw returns and for other risk-adjusted measures. 9

Table 6b presents the results from the estimation of the Carhart model with the fifth variable, i.e. the crude oil returns, which serves as the control variable. Its estimate for the entire period is positive and equals 0.1016 (statistically significant at the 1% level). With the other estimation results broadly unchanged in comparison with Table 6a, this finding means that the crude oil price was an important factor in explaining stock returns of the companies from the SRI E&RC stocks portfolio, which is not very surprising given that many of them are directly involved in the crude oil business or their financial situation relies heavily (directly or indirectly) on the crude oil price.

8. The positive and significant estimates of the alpha may imply market inefficiency, however we found this effect only at the beginning of the whole analysed period, so the conclusions about market efficiency have to be carefully formulated. Moreover, a comprehensive investigation of market efficiency would require access to very detailed microstructural data for individual trades and this was beyond the scope of our study. Therefore, we can only conclude that our results might suggest some market inefficiency, although a clear lack of consistency in overperformance points towards the validity of the adaptive market hypothesis (AMH) proposed by Lo (2004 and 2005) (some more recent evidence on AMH is provided e.g. by McGroarty (2014 and 2016) or Manahov and Hudson (2014), among others) rather than the efficient market hypothesis (EMH). The adaptive market hypothesis incorporates the principles of evolution, such as adaptation or natural selection, to explain financial market mechanisms. It is consistent with the evolutionary model of individuals adapting to a changing environment using heuristics. In the context of our study, according to the AMH, stock prices reflect information that combines environmental conditions and their movements are the result of the interaction of different distinct groups of investors. Under the AMH, the degree of market efficiency is a function of such factors as the number and type of competitors in the market or their adaptability to the evolving market conditions. There are also important theoretical implications of the AMH in light of our research, and the results reported in this paper provide empirical support for them: (1) the relation between risk and reward is unlikely to be stable over time, (2) investment strategies perform better in certain environments and worse in others, (3) profit and utility maximization are secondary objectives for investors, whereas their primary goal is survival and (4) survival is achieved through innovation (given that the risk-reward relationship is time-varying in nature, adaptation to changing market conditions is a natural way to behave and to achieve a desired level of expected returns in financial markets).
However, an interesting effect is the pattern of results for the crude oil returns estimates across all the 5-year sub-periods, which shows a clear decline in the value of the estimated parameter over time (and a loss of significance), from 0.0884 (significant at the 5% level) to 0.0014 (not significant), although the estimates remain positive in all these sub-samples. This finding means that the crude oil price movements have been an important factor in explaining the SRI E&RC stocks portfolio returns, but their influence weakened over time.
We further explore the role of the crude oil price movements using additional models in sections 6.3 and 6.4.
In the next section 6.2, we also investigate in more detail the performance of the SRI E&RC stocks within different sub-groups.
Results for the Sub-groups within the SRI E&RC Stocks Portfolio
In the next step, we inspect more closely what actually happens inside the entire SRI E&RC stocks portfolio by investigating the performance of stocks from the individual sectors. Subsequently, we focus on the analysis of two broader groups of stocks: oil related companies and non-oil related companies.
The results across the distinguished 9 sectoral groups differed quite substantially. The best performing sectors were Alternative Energy and Gas, Water and Multiutilities, whose stocks achieved the highest returns, equal to 9.44% and 7.17%, respectively, while the worst performing sector was Mining, characterised by a negative return equal to -16.55%. All returns for the whole 11-year period (February to January) from 2005 to 2016 for the individual sub-sectors within the SRI E&RC stocks portfolio (in the variant with dividends) are presented in the Appendix in Table A1.
9. Although the Fama-French and Carhart models are time series models and they are based on time series data (in the case of this paper, data at the monthly frequency of observations), such databases can also be treated as panel data if the portfolio returns are disaggregated into individual stock returns. Therefore, as a robustness check, we created such a database in panel data format and estimated the Fama-French and Carhart models using panel data estimations. The results were qualitatively very similar to the traditional time series approach where portfolio returns were not disaggregated into individual stock returns (i.e. the estimates of the parameters of the Fama-French and Carhart variables were very similar in terms of value and statistical significance). We do not report those results due to space limitation, but they are available upon request. We would like to thank two anonymous referees for suggesting this interesting idea.
Table 4: Modified Sharpe ratios (MSR) and Standard Deviations (SD) from 2005 to 2016 for the SRI E&RC stocks portfolio (with dividends) and for the total return index versions of benchmark indices.
Notes: 1) The modified Sharpe ratio was calculated based on the formula from Israelsen (2005): MSR = ER/SD^(ER/|ER|), where ER is the excess return defined as the mean monthly difference between the portfolio (or index) return and the risk-free return computed for n equal to 12, 60 or 132 months, respectively, and SD is the sample standard deviation of the monthly differences of returns. 2) Bold numbers indicate positive MSR figures. 3) Cells highlighted in grey identify the portfolio or index with the highest MSR ratio for that period. 4) A single-year period covers 12 months from 1st February to 31st January. 5) A multiple-year period covers five consecutive single-year periods.
These sectoral differences in performance, as well as the findings from the previous section about the statistical significance of the crude oil returns variable, prompted us to further examine the performance of the SRI E&RC stocks divided into two broader groups: oil related companies and non-oil related companies.
The selection of stocks into these two groups was based on whether the companies are: (1) energy and resource stocks which are largely oil related and (2) energy and resource stocks which are not oil related. For the former group, companies from the mining industry, the oil and gas industry and the oil equipment, services and distribution industries were chosen. The latter group was formed from industries such as alternative energy, electricity and gas, water and multi-utilities.
The results depicting the performance of the oil related companies and non-oil related companies are reported in Tables 7a-7d and they reveal very interesting additional patterns.
Tables 7a and 7b present the returns for the whole 11-year period (February to January) from 2005 to 2016 for the oil related companies stocks portfolio (with dividends), for the non-oil related companies stocks portfolio (with dividends) and for the total return versions of the benchmark indexes.
A direct comparison of the average annual return for the whole sample period from 2005 to 2016 reveals a striking result: an investment in the oil related stocks portfolio would have led to an average annual loss of -4.27%, while the non-oil related stocks portfolio would have delivered an average annual profit of 4.61%. The result of the oil related stocks portfolio was consistently worse than all the benchmark indices returns, whereas the result of the non-oil related stocks portfolio was consistently better than all the benchmark indices returns (which were: 3.64%, 2.44%, 1.13% and 0.43%).
Moreover, the oil related stocks recorded only 5 positive returns out of all 11 annual sub-periods and 2 positive returns out of 7 in the 5-year long sub-periods. On the other hand, the non-oil related stocks recorded 8 positive returns out of 11 annual sub-periods and 5 positive returns out of 7 in the 5-year long sub-periods.
A similar picture emerges from Tables 7c and 7d, which present the values of the modified Sharpe ratios for the oil related companies stocks portfolio (with dividends), for the non-oil related companies stocks portfolio (with dividends) and for the total return versions of the benchmark indexes. The average annual modified Sharpe ratio for the group of oil related stocks for the entire period from 2005 to 2016 is negative and equal to -0.01, while for the group of non-oil related stocks it is positive and equal to 0.18.
The results for the variant of the SRI E&RC stocks portfolio without dividends, for the oil related companies stocks portfolio, for the non-oil related companies stocks portfolio and for the price index versions of the benchmark indexes, are presented in the Appendix in Tables B1-B4 and they show similar patterns to those discussed above in this section. 10

10. Furthermore, these results provide additional interesting evidence regarding the impact of dividends on portfolio performance. The average annual return of the oil related stocks portfolio without dividends is -7.00%, while the version with dividends achieved -4.27%, a difference of 2.73%. However, in the case of the non-oil related stocks portfolio, the average annual return without dividends is 0.57%, while the version with dividends achieved 4.61%, a much larger difference of 4.04%. This comparison shows that dividends mattered considerably more for the non-oil related companies than for the oil related companies.
Table 7a: Average annual returns for the 11-year period (February to January) from 2005 to 2016 for oil related companies stocks portfolio (with dividends) and for total return versions of the benchmark indexes: 1) Global broad market (S&P Global 1200), 2) Global energy market (MSCI World Energy), 3) Global SRI market (FTSE4GOOD) and 4) Global alternative energy market (FTSE ET50).
Notes: 1) *** means statistical significance at the 1% level, ** means statistical significance at the 5% level, * means statistical significance at the 10% level.
2) The t-statistic was calculated based on the paired difference test. 3) Bold numbers indicate positive figures. 4) Cells highlighted in grey identify the portfolio or index with the highest average annual return for the analysed period.
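For concreteness, a small sketch of the paired difference test behind these t-statistics (names are illustrative; the test pairs the monthly portfolio and benchmark returns):

```python
from scipy import stats

def paired_return_test(portfolio_returns, benchmark_returns):
    """Paired t-test on matched monthly return series; returns (t, p)."""
    t_stat, p_value = stats.ttest_rel(portfolio_returns, benchmark_returns)
    return t_stat, p_value
```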
Table 7b: Average annual returns for the 11-year period (February to January) from 2005 to 2016 for non-oil related companies stocks portfolio (with dividends) and for total return versions of the benchmark indexes: 1) Global broad market (S&P Global 1200), 2) Global energy market (MSCI World Energy), 3) Global SRI market (FTSE4GOOD) and 4) Global alternative energy market (FTSE ET50).
Notes: 1) *** means statistical significance at the 1% level, ** means statistical significance at the 5% level, * means statistical significance at the 10% level.
2) The t-statistic was calculated based on the paired difference test. 3) Bold numbers indicate positive figures. 4) Cells highlighted in grey identify the portfolio or index with the highest average annual return for the analysed period.
Table 7c: Modified Sharpe ratios (MSR) and Standard Deviations (SD) from 2005 to 2016 for oil related companies stocks portfolio (with dividends) and for the total return versions of the benchmark indexes.
Notes: 1) The modified Sharpe ratio was calculated based on the formula from Israelsen (2005): MSR = ER/SD^(ER/|ER|), where ER is the excess return defined as the mean monthly difference between the portfolio (or index) return and the risk-free return computed for n equal to 12, 60 or 132 months, respectively, and SD is the sample standard deviation of the monthly differences of returns.
Table 7d: Modified Sharpe ratios (MSR) and Standard Deviations (SD) from 2005 to 2016 for the non-oil related companies stocks portfolio (with dividends) and for the total return versions of the benchmark indexes.
Notes: 1) The modified Sharpe ratio was calculated based on the formula from Israelsen (2005): MSR = ER/SD^(ER/|ER|), where ER is the excess return defined as the mean monthly difference between the portfolio (or index) return and the risk-free return computed for n equal to 12, 60 or 132 months, respectively, and SD is the sample standard deviation of the monthly differences of returns.
Impact of Crude Oil Price on Performance of SRI E&RC Stocks Portfolio
The results presented in the previous sections clearly point towards the existence of a relationship between the financial performance of the SRI E&RC stocks portfolio and the dynamics of oil price returns. Therefore, in this section we focus specifically on this issue and investigate it in more depth by trying to answer the question of how crude oil price movements affect the returns of our SRI E&RC stocks portfolio. We also provide evidence regarding how this relation evolved over time. Table 8a presents the estimation results of the parameters from model (9) for the SRI E&RC stocks portfolio returns with the crude oil return as the explanatory variable. It shows that in the full period the estimate for the crude oil returns is positive and equal to 0.3004 (significant at the 1% level). Moreover, the estimates of this parameter are consistently significant in all 5-year sub-periods and in all single-year periods.
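A hedged sketch of how a regression like model (9) can be estimated, including its re-estimation over 5-year (60-month) windows: the DataFrame columns 'r_portfolio' and 'r_oil' are assumed names, and the plain OLS shown here stands in for whichever of the OLS/ARCH/GARCH specifications the diagnostics favour.

```python
import statsmodels.api as sm

def oil_beta(df):
    """OLS of monthly portfolio returns on crude oil returns (model (9)-style)."""
    X = sm.add_constant(df[['r_oil']])
    return sm.OLS(df['r_portfolio'], X).fit()

# Evolution across 5-year sub-periods: re-estimate on 60-month windows
# that roll forward one year (12 months) at a time.
def rolling_oil_betas(df, window=60, step=12):
    return [oil_beta(df.iloc[i:i + window]).params['r_oil']
            for i in range(0, len(df) - window + 1, step)]
```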
A closer inspection of the evolution of the estimates for crude oil returns across the 5-year sub-periods shows that they increased from 0.2053 in 2005-2010 to 0.4862 in 2009-2014 and then declined to 0.2634 in the last period, 2011-2016. This pattern confirms the results reported and discussed earlier indicating the overall importance of crude oil, and it illustrates its declining role towards the end of our sample period. Table 8b presents the estimation results of the parameters from model (10) for the SRI E&RC stocks portfolio excess returns (defined as the difference between the SRI E&RC stocks portfolio returns and the returns of the world market index), again with the crude oil return as the explanatory variable. It shows that, similarly to Table 8a, the crude oil returns are also statistically significant and positive in the entire sample period as well as in most sub-periods. Furthermore, Table 8b reveals the same effect as previously detected and discussed, i.e. that the importance of the crude oil price weakened at the end of the analysed sample period. The estimates of the crude oil return variable were quite stable in the first six 5-year long sub-periods from 2005-2010 to 2010-2015, at a level between 0.09 and 0.13 (estimates significant in all these cases), but subsequently they dropped to 0.05 (estimate not significant) in the last sub-period, 2011-2016.
This finding for the SRI E&RC stocks portfolio excess returns means that the crude oil price directly affects not only the returns of the stocks of SRI energy and resource companies, but also has an impact on their returns measured relative to the general market conditions (as captured by the world stock market index). This is an important conclusion from this study. The relationships discussed in this section are shown in Figure 1.
Relevance of the Crude Oil Price for Performance of Oil Related Stocks and Non-oil Related Stocks from the SRI E&RC Portfolio
Finally, in this last section we investigate the performance of stocks divided into two groups, i.e. oil related and non-oil related companies, and the relationship between their returns (referred to henceforth as oil R_pt and non-oil R_pt, respectively) and the crude oil returns as the explanatory variable. Table 9a reports the estimated parameters of the models for the oil related stocks. The whole-period estimate of the crude oil return variable is positive and equal to 0.4318 (significant at the 1% level). Across the 5-year periods the crude oil return estimate increases from 0.4211 to 0.5604 and then drops at the end of the sample to 0.3070. This pattern of estimates is similar in the models for the non-oil related stocks presented in Table 9b, but their values are roughly half as large. The whole-period estimate of the crude oil return variable is positive, however it is equal to only 0.1897 (significant at the 1% level).

Notes: 1) Standard errors are included in brackets. 2) Statistical significance is indicated as: *** significant at the 0.01 level, ** significant at the 0.05 level and * significant at the 0.1 level. 3) Sample size is reported as the number of months in the respective samples. 4) All regressions are based on time series models. Guide to estimation methods: OLS = Ordinary Least Squares, ARCH = AutoRegressive Conditional Heteroscedasticity and GARCH = Generalised AutoRegressive Conditional Heteroscedasticity. 5) The reported diagnostic tests include the value of the F-test statistic, the value of the Ljung-Box Q statistic with 10 lags as the test for autocorrelation and the value of the LM statistic with 10 lags as the test for any remaining ARCH effects (their respective p-values are reported in brackets).
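The diagnostics named in the notes can be reproduced along the following lines (a sketch using statsmodels; the residuals come from whichever fitted model is under scrutiny):

```python
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch

def residual_diagnostics(residuals, lags=10):
    """Ljung-Box Q (autocorrelation) and Engle LM (remaining ARCH) tests."""
    lb = acorr_ljungbox(residuals, lags=[lags], return_df=True)
    lm_stat, lm_pval, _, _ = het_arch(residuals, nlags=lags)
    return {'Q': float(lb['lb_stat'].iloc[0]),
            'Q p-value': float(lb['lb_pvalue'].iloc[0]),
            'LM': lm_stat, 'LM p-value': lm_pval}
```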
In the 5-year periods it displays the same pattern: it increases from 0.2257 to 0.3266 and then drops at the end of the sample period to just 0.1425. These findings show that movements in the crude oil price had more influence on the performance of the oil related stocks than on the non-oil related stocks, which is not surprising. However, the results in Tables 9a and 9b also allow us to measure the magnitude of this difference: it appears that crude oil price returns are related roughly twice as strongly to the performance of the SRI E&RC oil related stocks as to the performance of the SRI E&RC non-oil related stocks. 11
CONCLUSIONS
The main objective of this study was to examine whether the performance of the stocks of the SRI energy and resource firms is superior relative to the major benchmarks and whether portfolios composed of such companies can outperform the market.
We first calculated the raw returns and assessed the performance of the portfolios relative to the broad, energy sector, SRI and alternative energy market indices. We report that over the entire 11-year period (February 2005 - January 2016) the annual average performance of the SRI E&RC stocks portfolio was neither consistently superior nor consistently inferior compared to the corresponding returns of all the benchmark indices. Overall, we found that the market does not penalize or reward the energy and resource companies for adopting SRI practices 12 ; however, their performance relies heavily on changes in the crude oil price.
When the entire sample is divided into oil related stocks and non-oil related stocks, we discovered that an investment in the oil related stocks portfolio would have led to an average annual loss of -4.27%, while the non-oil related stocks portfolio would have delivered an average annual profit of 4.61%. The performance of the oil related stocks portfolio was consistently worse than all the benchmark indices returns, whereas the result of the non-oil related stocks portfolio was consistently better than all the benchmark indices.
Another important finding from our study is that dividends mattered quite a lot for the analysed SRI energy and resource stock portfolios and the individual stocks, because their inclusion in the calculation of the total returns substantially increased their performance.
The analysis of the models of the SRI E&RC stocks portfolio returns with the crude oil return as the explanatory variable shows that in the full sample period the estimate for crude oil returns is positive and significant at the 1% level. However, across the 5-year sub-periods the estimate of the crude oil return variable first increased between the 2005-2010 and 2009-2014 periods and then declined in the last period, 2011-2016. This pattern confirms our other results indicating the importance of the oil price, but also illustrates its declining role towards the end of our data sample.

11. It is noteworthy that the literature has not yet extensively focused on the effects of oil price changes on firm-level stock returns, which would allow for a more in-depth analysis (given that firms within the same sector naturally exhibit heterogeneous responses to oil price changes). The lack of empirical evidence in this area is related to the fact that very few studies rely on firm-level portfolio construction (see a review by Degiannakis et al. (2014)). Only a very limited number of papers report results based on firm-level data. They show that individual firms' stock returns respond to changes in oil prices (see Boyer and Filion (2007), Scholtens and Wang (2008), Narayan and Sharma (2011) or Tsai (2015)), yet the evidence about the nature, magnitude and variation of these reactions is scarce. Hence, our paper also contributes to this particular new line of literature by addressing the gap in research on the effects of oil price changes on company-level stock returns, which was identified in a review paper by Degiannakis et al. (2014).
12. Given that firms pursue multiple objectives and the maximisation of their share price (or of shareholder value) is just one of them, a situation in which the companies analysed in this study are not penalised for adopting the SRI principles may already be sufficiently satisfactory for them if they gain, indeed, social acceptance etc.
In the models with the excess returns, we also found similar effects, which implies that the crude oil price directly affects not only the returns of the SRI energy and resource companies' stocks, but also has an impact on their returns measured relative to the general market conditions (as captured by the international stock market index). This is another important conclusion from this study.
Our findings also provide evidence that movements in the crude oil price had more influence on the performance of the oil related stocks than on the non-oil related stocks. Furthermore, we measured the magnitude of this difference: it appears that crude oil price returns are related roughly twice as strongly to the performance of the SRI E&RC oil related stocks as to the performance of the SRI E&RC non-oil related stocks.
Finally, our analysis shows that the group of SRI energy and resource companies from the Global-100 list over the 11-year period 2005-2016 has been limited to 19 countries of origin, including 16 developed nations. This indicates that in many emerging economies, where the production and consumption of energy and natural resources are substantial and steadily growing, the SRI related criteria are yet to be fulfilled by firms from these countries.
The findings from this study have broad and important policy implications for financial market regulators and environmental protection agencies, in addition to investors who allocate their funds to energy and resource company stocks (including alternative energy firms). They should also raise awareness among stock market investors to mobilise capital in more sustainable ways and, possibly, to channel it towards more sustainable methods of energy production.
APPENDIX
Results in the Appendix depict the returns for the individual sub-sectors within the entire SRI E&RC stocks portfolio as well as the returns for the oil related companies (without dividends) and the non-oil related companies (without dividends). Table A1 presents the returns for the whole 11-year period (February to January) from 2005 to 2016 for the individual sub-sectors within the SRI E&RC stocks portfolio (with dividends).
Tables B1-B4 present the returns for the whole 11-year period (February to January) from 2005 to 2016 for the oil related companies stocks portfolio (without dividends) and the non-oil related companies stocks portfolio (without dividends) and for the price index versions of the benchmark indexes: 1) Global broad market (S&P Global 1200), 2) Global energy market (MSCI World Energy), 3) Global SRI market (FTSE4GOOD) and 4) Global alternative energy market (FTSE ET50).
Table B1: Average annual returns for the 11-year period (February to January) from 2005 to 2016 for oil related companies stocks portfolio (without dividends) and for price index versions of the benchmark indexes: 1) Global broad market (S&P Global 1200), 2) Global energy market (MSCI World Energy), 3) Global SRI market (FTSE4GOOD) and 4) Global alternative energy market (FTSE ET50).
Notes: 1) *** means statistical significance at the 1% level, ** means statistical significance at the 5% level, * means statistical significance at the 10% level. 2) The t-statistic was calculated based on the paired difference test. 3) Bold numbers indicate positive figures. 4) Cells highlighted in grey identify the portfolio or index with the highest average annual return for the analysed period.

Table B2: Average annual returns for the 11-year period (February to January) from 2005 to 2016 for non-oil related companies stocks portfolio (without dividends) and for price index versions of the benchmark indexes: 1) Global broad market (S&P Global 1200), 2) Global energy market (MSCI World Energy), 3) Global SRI market (FTSE4GOOD) and 4) Global alternative energy market (FTSE ET50).
Notes: 1) *** means statistical significance at the 1% level, ** means statistical significance at the 5% level, * means statistical significance at the 10% level.
2) The t-statistic was calculated based on the paired difference test. 3) Bold numbers indicate positive figures. 4) Cells highlighted in grey identify the portfolio or index with the highest average annual return for the analysed period.
Table B3: Modified Sharpe ratios (MSR) and Standard Deviations (SD) from 2005 to 2016 for oil related companies stocks portfolio (without dividends) and for the price index versions of the benchmark indexes.
Notes: 1) The modified Sharpe ratio was calculated based on the formula from Israelsen (2005): MSR = ER/SD^(ER/|ER|), where ER is the excess return defined as the mean monthly difference between the portfolio (or index) return and the risk-free return computed for n equal to 12, 60 or 132 months, respectively, and SD is the sample standard deviation of the monthly differences of returns. 2) Bold numbers indicate positive MSR figures. 3) Cells highlighted in grey identify the portfolio or index with the highest MSR ratio for that period. 4) Single-year period covers 12 months from 1st February to 31st January. 5) Multiple-year period covers five consecutive single-year periods.
Table B4: Modified Sharpe ratios (MSR) and Standard Deviations (SD) from 2005 to 2016 for non-oil related companies stocks portfolio (without dividends) and for the price index versions of the benchmark indexes.
Notes: 1) The modified Sharpe ratio was calculated based on the formula from Israelsen (2005): MSR = ER/SD^(ER/|ER|), where ER is the excess return defined as the mean monthly difference between the portfolio (or index) return and the risk-free return computed for n equal to 12, 60 or 132 months, respectively, and SD is the sample standard deviation of the monthly differences of returns. 2) Bold numbers indicate positive MSR figures. 3) Cells highlighted in grey identify the portfolio or index with the highest MSR ratio for that period. 4) Single-year period covers 12 months from 1st February to 31st January. 5) Multiple-year period covers five consecutive single-year periods.
"year": 2019,
"sha1": "430eb5640199cf3cd16fbe551549123e6b32cfd6",
"oa_license": "CCBY",
"oa_url": "https://rgu-repository.worktribe.com/preview/1937891/BRZESZCZYNSKI%202019%20Socially%20responsible%20(VOR).pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1a7096d551f9641a2b2d2c159479f2e903269ef7",
"s2fieldsofstudy": [
"Environmental Science",
"Business",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
This study evaluates the vapor transportation by transmission pipelines during seawater desalination. This study seeks to reach a high rate of water transportation during desalination. Hence, the results obtained from this research are closer to reality than other analyses. Other benefits of this research include increasing efficiency, studying the element-to-element transmission, and considering flow as a compression case. The water desalination system comprises three parts of evaporation, transportation, and condensation. In the transportation part, equations of continuity, momentum, and energy are implemented, and the temperature of the vapor is calculated at the beginning of the condensation pipe. Other achievements of this study include the division of transportation lines to small elements and the implementation of vapor condensation in transportation lines. This study used pipelines with diameters of 1, 2, and 4 m to transmit vapor to Ramsar city and the heights of Takhte Soleiman, 16 km away from the city with the elevation of 2000 m. The results show that diameter, transportation length, and temperature differences are, respectively, the most influential factors on the efficiency of sub-atmospheric vapor transportation. The outcomes of this study were presented as the outflow of condensed water at the destination. Considering the margin of safety in calculations, it was scientifically proved that the results obtained in this study were approximately 10% more than results derived from other studies in the literature that are based on the incompressibility of fluids.
Introduction
Water is a national capital. Until twenty years ago, the most important national asset of countries was energy, but soon water may be traded in the way oil is. It is enough to note that there are alternatives to oil as an energy source, whereas there is currently no substitute for water, so today the emphasis is on saving it. The evaporation of seawater in warm regions, its transportation by clouds, and finally its condensation as rain at low temperatures is a clear example of the natural desalination of seawater, which annually provides billions of tons of desalinated water for various parts of the world. The basis of such natural desalination is the meaningful temperature difference between the warm source (the sea) and the condensation location (a mountaintop), which has motivated some researchers to investigate desalination through natural potentials [1,2]. Among the requirements for this method is the existence of a cold area that provides a sufficient temperature difference for the transportation and condensation of vapor.
Seawater desalination methods generally include Multi-Stage Flash distillation (MSF), Multi-Effect Distillation (MED), and membrane methods such as Reverse Osmosis (RO). The energy consumption of these methods varies from 1.75 to 40 kWh/m³, depending on the technology utilized and the plant size [3,4,5,6]. Moreover, the energy consumption for transferring the desalinated water should be calculated and considered separately. For the RO method, the efficiency of converting thermal energy into electrical energy should also be considered. The method studied in this article is natural: the seawater is desalinated and transferred without using any external energy.
There is a great deal of research into seawater desalination using seawater evaporation and condensation driven by temperature differences [1,2,4,7,8,9,10,11,12,13,14,15,16,17,19]. This study used a formulation to calculate the transfer of mass and energy in the evaporation and condensation parts and technically studied vapor transportation over long distances. To obtain the circulated mass flow rate of raw water in the evaporation part, as well as the length of the condensation line at the destination, a trial-and-error approach was used to solve the related equations by establishing connections between the three parts of evaporation, transportation, and condensation. The assumptions of the earlier study include the incompressibility of vapor; to deal with this assumption, an average value was assigned to the density of water vapor [1]. In the present study, (1) the use of a larger and more detailed model, together with (2) more thermal and cooling details, (3) the use of transmission pipes with various diameters, (4) the creation of more realistic and compatible climatic conditions for the origin and the destination, (5) calculations that remain accurate for large pipe diameters because there is no limitation on the Reynolds number, and, finally, (6) the assumption of vapor compressibility are the contributions of this study that can advance the knowledge and technology of this method. For the field survey, the coastal city of Ramsar, with the particular characteristics of a warm area, was studied together with the heights of Takhte Soleiman, 16 km away from the city, as the cold area.
Materials and method
Same as the parent article, this research includes three parts: evaporation, transportation, and condensation. The assumptions made throughout this study are as follows:
- Sub-atmospheric pressure exists in all parts of evaporation, transportation, and condensation.
- The latent heat of vaporization of raw water is stored in the evaporation part.
- Throughout all stages, the pipes and their joints are sealed and resistant to water entrance.
- The pipe used in the transportation part is adiabatic and exchanges no heat with the environment.
- The existing and defined parameters are assumed to be fixed through time.
- The transportation and condensation parts are made of round and smooth pipes.
Evaporation part: In this section, raw water is evaporated by the internal energy of seawater or saline water. A pipe with a diameter four times that of the transportation pipes enters the seawater vertically, and the sub-atmospheric pressure in the pipe causes the water to evaporate. It should be noted that it is better to de-aerate the seawater before these operations; the reason is to remove non-condensable gases that reduce the efficiency of the operations. The evaporation part performs naturally, as a result of the temperature decrease around the water. Two factors can increase the efficiency of the evaporation part: the first is the water inflow to the vertical pipe, and the second is an increase in the temperature difference between the seawater and the inside of the pipe, whose temperature is equivalent to the internal pressure of the pipe.
Transportation part: After evaporation, the vapor enters the transportation part. Here, the water vapor tends to move towards higher elevations because the pressure and temperature there are lower. In this stage, the pressure in the pipe decreases as a result of the increase in elevation and the compressibility of the vapor.
Condensation part: After the water vapor has reached the highest point, the condensation part starts. In this stage, a pipe with the same diameter as the transportation pipes is directed towards the consumption sector. Since the pipe is not adiabatic in the condensation part, the vapor heat is transferred from the inside of the pipe to the environment, which causes condensation.
Evaporation
In this part, the temperature of the raw water decreases due to evaporation. After that, the water flows and warm water replaces the cooled raw water. In this process, the heat released by the temperature decrease of the raw water equals the latent heat of vaporization, and thus the following equation is derived:

M·c·(T_S − T_l,e) = m_e·λ (1)

where M is the circulated mass flow rate of raw water (kg/s), m_e is the amount of water evaporated (kg/s), T_S is the temperature of the seawater (K), T_l,e is the temperature of the water vapor inside the pipe (K), c is the specific heat capacity of the water, which is determined according to the temperature and salinity of the seawater (J/(K·kg)), and λ is the latent heat of vaporization of water, which is determined in terms of temperature (J/kg). However, another formulation is needed to calculate the circulated mass flow rate of raw water, which is derived from the energy equations [20]; in that formulation, M, m_e, m_0 and h indicate the circulated mass flow rate of raw water, the extracted water, the water returned to the tank, and the enthalpy, respectively.
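A minimal numerical sketch of Eq. (1), solved for the evaporated mass; the property values are illustrative placeholders, since the paper takes c and λ from temperature- and salinity-dependent tables.

```python
C_WATER = 4000.0      # specific heat of saline water, J/(kg*K) (placeholder)
LAMBDA_VAP = 2.4e6    # latent heat of vaporization, J/kg (placeholder)

def evaporated_mass(M, T_s, T_le, c=C_WATER, lam=LAMBDA_VAP):
    """Eq. (1): M*c*(T_s - T_le) = m_e*lam, solved for m_e in kg/s."""
    return M * c * (T_s - T_le) / lam
```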
Transportation
In short pipes and nozzles, the effects of viscosity can be ignored, but in long pipes the effects of flow friction on the pipe wall must be considered. In order to provide a practical formulation that accounts for friction, a control volume of the pipe is considered, and the equations governing the fluid transfer are derived from the continuity, momentum, and energy equations. Finally, the result of the formulation for the total transfer length is presented. Physical understanding of the problem, including the linear distance and the height difference between the cold and the warm points, and the range of temperature variations between these points, is the most critical primary knowledge of the problem. The pipeline can be composed of a large number of interconnected elements; the thermodynamic conditions at the beginning and the end of each element are different, and each element has its specific fluid velocity. Figure 1 depicts an inclined pipeline (at angle α) with a constant diameter as an example of a transportation pipeline, for which the mass and momentum balances can generally be written as follows.

Continuity equation [21,22,23]:

∂ρ/∂t + ∂(ρu)/∂x = 0 (4)

where ρ is the density of water vapor (kg/m³) and u is the vapor velocity (m/s).

Momentum equation [22,23]:

−∂P/∂x − (4/D)·τ_w − ρg·sin(α) = ∂(ρU)/∂t + ∂(ρU²)/∂x (5)

where ρ, U, P, D, τ_w, g, and α represent vapor density, vapor velocity, vapor pressure, pipe diameter, wall shear stress, gravitational acceleration, and the angle of the pipeline relative to the horizon, respectively. Using the definition of the friction coefficient f in the equation τ_w = (1/2)·f·ρ·U², τ_w can be eliminated from Eq. (5). Therefore, we have

−∂P/∂x − 2fρU²/D − ρg·sin(α) = ∂(ρU)/∂t + ∂(ρU²)/∂x (6)

In this study, in order to solve the problem analytically under conditions (steady-state, compressible flow) that are as close as possible to reality, this equation is taken as the basis of the calculation. Considering the pipe length to be L and the slope of the transportation line to be α, Eq. (7) is obtained, which contains three pressure drops: the compressible pressure drop (ΔP_ρ = Δρ·U²), the hydrostatic pressure drop (ΔP_H = ρgL·sin(α)), and the friction pressure drop (ΔP_f = 2fLρU²/D):

ΔP = Δρ·U² + ρgL·sin(α) + 2fLρU²/D (7)

where P is the vapor pressure (Pa), g is the acceleration due to gravity (m/s²) and Δz = L·sin(α) is the height difference between the two ends of the element (m).

Energy equation, according to the first law of thermodynamics [20]:

m_t·[Δh + Δ(U²/2) + g·Δz] = −Q·Δx (8)

where h is the vapor enthalpy (J/kg), Q is the amount of heat flow extracted per unit length (W/m), and Δx is the length of the element. Simultaneously solving Eqs. (3), (4), (5), (6), (7), and (8), the variations of density, velocity, temperature, and pressure along the transportation pipe can be calculated.
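To make Eq. (7) concrete, the following sketch evaluates the three pressure-drop terms for a single pipe element (all argument values are to be supplied by the element-by-element solution; nothing here is data from the paper):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def element_pressure_drop(rho, u, d_rho, L, alpha, D, f):
    """Sum of the three pressure-drop terms of Eq. (7), in Pa."""
    dp_compressible = d_rho * u ** 2                 # compressible term
    dp_hydrostatic = rho * G * L * math.sin(alpha)   # hydrostatic term
    dp_friction = 2 * f * L * rho * u ** 2 / D       # friction term
    return dp_compressible + dp_hydrostatic + dp_friction
```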
By analysing the transport pipe and obtaining the related parameters and information, the mass flow through the pipe can be calculated:

m_t = ρ·u·(πD²/4) (9)

where m_t is the transportation rate (kg/s) and D is the pipe diameter (m). According to the continuity equation, the rate of steam transport yields a single value for all elements, so Eq. (9) does not need to be written separately for each of them.
Condensation
In the condensation part, to avoid energy use, the pipe insulation is removed so that the steam condenses and converts to distilled water by giving up heat to the environment.
The latent heat of condensation is drawn from the water vapor, and the condensation occurs in this section due to the loss of heat Q (in Eq. (10)). A precise formulation based on the theory and analysis of crossflow condensation and evaporation is presented in the literature [24].
Q = (T_I − T_O) / [R_I/(2πr_1) + ln(r_2/r_1)/(2πλ_k) + R_O/(2πr_2)] (10)

where Q is the amount of heat flow extracted per unit length of this part (W/m), T_I is the temperature of the liquid inside the pipe (the temperature at the end of the transportation pipeline) (K), T_O is the external air temperature (K), R_I is the inner-surface heat transfer resistance between the fluid and the pipe material (m²·K/W), R_O is the outer-surface heat transfer resistance between the fluid and the pipe material (m²·K/W), λ_k is the thermal conductivity of the pipe (W/(m·K)), r_1 is the internal radius of the pipe (m), and r_2 is the external radius of the pipe (m).
As both water and steam exist in the condensation pipe, all the transferred heat is consumed in condensing the water vapor while the water temperature stays constant, which gives the following equation for the equilibrium between the amount of heat flow exhausted to the outside and the latent heat released by the condensation of the steam, where m is the condensation rate (kg/s):

m·λ = Q·L_c (11)
where L_c is the length of the condensation part (m) and λ is the latent heat of condensation of water (J/kg).
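A hedged sketch of Eqs. (10) and (11): the first function evaluates the radial heat flow per metre of un-insulated pipe through the inner film, wall, and outer film resistances, and the second converts it into a condensate flow; the latent heat default is a placeholder value.

```python
import math

def heat_loss_per_metre(T_in, T_out, r1, r2, k_pipe, R_in, R_out):
    """Eq. (10): heat flow extracted per unit pipe length, W/m."""
    resistance = (R_in / (2 * math.pi * r1)                     # inner film
                  + math.log(r2 / r1) / (2 * math.pi * k_pipe)  # pipe wall
                  + R_out / (2 * math.pi * r2))                 # outer film
    return (T_in - T_out) / resistance

def condensate_rate(Q_per_m, L_c, lam=2.4e6):
    """Eq. (11): m = Q*L_c/lam, condensed water flow in kg/s."""
    return Q_per_m * L_c / lam
```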
Calculation method
The computational element algorithm for this study is presented in Figure 2. A very small spatial step slows down the calculations without noticeably changing the final answers; on the other hand, if the steps are large, the computational speed is high but the accuracy of the outputs is significantly reduced. Hence, based on experience with the numerical solution of the transportation equations, a spatial step of 100 m provides both fast numerical calculations and the desired accuracy.
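A schematic sketch of the 100 m element-marching loop is given below; the friction factor and the density update are simplified placeholders for the coupled solution of Eqs. (4)-(8), so this shows only the structure of the algorithm, not the paper's actual solver.

```python
import math

G, DX = 9.81, 100.0  # gravity (m/s^2) and spatial step (m)

def march_pipeline(p0, rho0, u0, length, alpha, D, f=0.005):
    """Steady compressible marching along the transportation pipe."""
    p, rho, u = p0, rho0, u0
    mass_flux = rho * u                       # constant by continuity, Eq. (4)
    for _ in range(int(length / DX)):
        # Hydrostatic and friction drops over one element, from Eq. (7).
        p -= rho * G * DX * math.sin(alpha) + 2 * f * DX * rho * u ** 2 / D
        rho_new = rho0 * p / p0               # placeholder isothermal state law
        p -= (rho - rho_new) * u ** 2         # compressible term of Eq. (7)
        rho = rho_new
        u = mass_flux / rho                   # continuity closes the element
    return p, rho, u
```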
Validation
If the fluid is considered incompressible, the term Δρ·U² in Eq. (7) is omitted. Under steady-state conditions and incompressibility, only two pressure-drop terms remain, ρgL·sin(α) and 2fLρU²/D, which include the unknown velocity U. The approach of this study is to determine the velocity under the compressibility assumption, estimating the parameters iteratively until a proper answer is achieved (with three terms in the pressure drop). The earlier paper, with an important limitation (3×10³ < Re < 10⁵), attempts to determine the velocity with an empirical equation obtained by setting f̄ = fD/4, substituting S_A = πD in Eq. (7), and simplifying [1]. Now that we are assured of the accuracy and precision of the equations and formulations suggested in this study, we can proceed to validate the program written from these equations. For this purpose, the current assumptions and results were compared with those of previous studies [1]. The parent article assessed the two cities of Sana and Al-Quran as its case studies. Assuming the fluid to be incompressible and the number of transportation elements to be equal to one in both studies, a comparison of the results obtained in this study with those of the parent article is shown in Table 1. The reason for the discrepancy for the pipe with 4 m of diameter is that the Reynolds number exceeded its allowed threshold in the Blasius formula, which introduced errors in the parent article. Nevertheless, the program of this study does not use the Blasius formula, and thus any value of the Reynolds number can be used.
Case study
A large temperature difference between the cold and the warm sources, together with a short path, makes the application convenient and feasible. If these two conditions are met, the only technical and operational concern is the inclination of the path. In any case, the mentioned items can yield efficient results. A case study with satisfying results was observed for desalinated-water transfer by the cold vapor method from the coastal city of Ramsar in the north of Iran to the cold 2000-m-high summit (mount 2000) in the Takhte Soleiman region, 29 km away. Meteorological data on the sea-level temperature in Ramsar (the warm source) and the Takhte Soleiman heights (the cold source) are presented in Table 2; each reported temperature is the average of the temperatures recorded over one month. The average daily (maximum) and nightly (minimum) temperatures were reported for each month, whereas for the Caspian Sea, owing to the negligible difference between day and night temperatures at sea level, only one monthly average was reported [25,26]. Table 2 shows the values of the seawater temperature on the coast of Ramsar as well as the maximum and minimum cold temperatures in the heights of mount 2000. The results for the pipeline between Ramsar and mount 2000 in steady state, for pipes with diameters of 1, 2, and 4 m, are reported in Table 3. As expected, at times when the average temperature difference between the warm and cold sources is higher, the condensation of cold water increases, which results in higher production of freshwater. For all pipe diameters, the highest condensation rates occurred in the summertime, including May, June, and July. These data sets are related to the climate conditions at mount 2000, and several differences might emerge in the most productive months of other case studies. As shown in Table 3, increasing the pipe diameter increases the amount of purified water; moreover, the rate of change in water production exceeds the rate of change in diameter, such that the approximate rate of water produced through pipes with four meters of diameter is 13 kg per second. It can also be seen that increasing the diameter of the pipes, which produces more freshwater, also increases the length of the condensation pipe, which may in some cases reach up to 100 km. Hence, one should consider this considerable length of the condensation pipe when assembling such systems.
Result
At this stage, it is possible to apply the calculations and formulations of this study to the cities of Sanaa and Al-Quran (the case studies of the parent article). In the columns with the header [1], the water production figures are based on the calculations done in the parent article. These comparisons lead to Tables 4 and 5, the content of which is similar to Table 3. In Tables 4 and 5, the mass values calculated using this research methodology are also higher than those calculated in the parent article.
A material comparison makes it clear that the mass of materials used in the warm zone and at the cold destination is minimal compared to the mass required for transportation. From an engineering point of view, this highlights the importance of capital investment in the pipelines, and in terms of project management, significant attention is likely to be devoted to this phase. Despite the length of the vapor transportation line, the industry currently produces high-quality, light pipes in various diameters with acceptable strength, mass, heat insulation, ease of installation, and operability, and given the investment return in due time, there seem to be no limitations on piping. Therefore, transportation methods can be made applicable by performing general, environmental, and meteorological studies, with proper attention to the temperature conditions of the origin and destination and the selection of the correct points for transportation. The geometry of the transportation pipeline can reduce the system efficiency, but what is essential in the transportation part is the length of the pipe.
Another point to consider is that if the temperature falls below zero in the mountains, the system will clog due to freezing and be disabled. This limitation has to be considered in selecting the piping location based on the climatological and weather history of the region. If water provision is necessary for places where the temperature may drop below zero at the destination, L_c can be determined through calculations, and fluid discharge and collection equipment, such as a pressure- or vacuum-maintaining valve and a reservoir, can be provided.
Conclusion
Physically, and following the accepted principles of transport phenomena, and considering current technology and the human ability to construct long pipelines, the vapor pipeline is both practical and logical. There are many benefits to the production of desalinated water through this transfer, which has attracted the attention of researchers to the development of this method. The cross-section of the transportation line and the temperature difference are the active factors in transport. Increasing the diameter as far as possible, choosing climatic regions that provide the maximum natural temperature difference over a minimum distance, and raw-water options with salinity values lower than seawater (such as effluents and brackish water) can improve the system efficiency. Calculating the vapor transport without simplifying assumptions about the condensation part, considering the maximum salinity, and treating the fluid as compressible provide more reliable figures for vapor transfer than the modelling of the previous researcher.
Declarations
Author contribution statement Koosha Aghazadeh: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data.
Reza Attarnejad: Conceived and designed the experiments; performed the experiments; wrote the paper.
"year": 2020,
"sha1": "88d9befe6104a60b06cd4be6ad0318dd80cc497c",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844020304187/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6a21dab5174bc9925e6bab2a5627dfe73eae5ccf",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Environmental Science"
]
} |
Low crested coastal defence structures on the Catalan coast of the Mediterranean Sea: how they compare with natural rocky shores
Mediterranean coastal areas are threatened by coastal development that modifies the coastline through the construction of buildings and infrastructures such as roads, railways, promenades and ports. The main consequences of this development are changes in the deposition processes and wave regime that lead to increasing erosion. In addition, sea-level rise and increasing storminess due to climate change may also contribute to beach recession and flooding in coastal areas. Along the Catalan coast, beach tourism is one of the main resources of the local economy, and there has been an increasing demand to obtain land and stability on the coastline. The construction of coastal defence structures to protect beaches from erosion and even to help to maintain artificially nourished beaches has significantly increased during the last few decades. The most common local forms of these constructions are jetties and, most recently, offshore breakwaters (i.e. low-crested structures [LCS]) with or without groins.

RESUMEN. – Erosion problems on the Mediterranean coast have increased significantly in recent times. Tourist areas such as the Catalan coast have experienced a growing demand for the construction of low-crested structures (LCS) for coastal defence in order to stabilise the sand on recreational beaches. We studied the composition of the epibionts on three LCS systems, comparing it with that of natural rocky communities close to the LCS. The aim is to analyse this composition at a regional scale, and to determine the possible existence of patterns in relation to the type of coast surrounding the structures (sand or rock), the distance to the natural rocky shore, and the orientation of the blocks of each structure. The communities growing on the LCS resemble those of the natural rock, although they always tend to show lower diversity and species richness. The species present on the LCS represent between 60% and 95% of those on the natural rock, while the differences in the number of taxa between the two substrates increase with the distance separating them. On the Catalan coast, the LCS function as a species-poor rocky substrate on which the composition of the climax communities characteristic of the area is never attained.
LCS are structures parallel to the shore that can be submerged or regularly overtopped by waves (Lamberti et al., 2005). On the Catalan coast, LCS are formed by single units protecting small beaches or systems formed by several units plus, in some cases, groins. Their presence has direct consequences on the hydrodynamics of the coastal cell, and on the sediment transport and composition (i.e. granulometry and organic content; Martin et al., 2005). Since the nature and structure of soft-bottom assemblages is directly related to sediment descriptors (Gray, 1974), significant changes in the infaunal communities are expected after the construction of sea defences. Indeed, a broad scale comparison of soft-bottom assemblages associated with LCS structures along the Atlantic, Adriatic and western Mediterranean European coasts, under very different hydrodynamic regimes (i.e. macro-, micro- and a-tidal shores) and coast types, indicated shifts in the structure and function of the infaunal communities after the construction of LCS. Compared with reference sites, deposit feeders tended to increase (in parallel with an organic enrichment of sediments) on the landward side of the structures, while suspension feeders increased on the exposed side following an increase in the coarse fraction of sediments (Martin et al., 2005).
As for biota, LCS represent a new rocky habitat for colonisation that modifies the natural limits of rocky shores (Moschella et al., 2005). The communities growing on seawalls (i.e. sea-defences, harbours) are similar to those from natural rocky shores but in essence a poor imitation (Moschella et al., 2005). The common structure of sea defences based on blocks creates strong gradients of exposure resulting in a patchy community composition of the biota (e.g. Glasby and Connell, 2001). Moreover, the structural design of the LCS can be modified to influence cover and species composition (Moschella et al., 2005) according to the particular ecology of the area. LCS may also facilitate the spread of invasive species that encounter a new colonisation substrate (Airoldi et al., 2005) and thus potentially compete with the native biota.
In this study we explore patterns of community composition on LCS compared with natural rocky shores, representing different structures along the Catalan coast. Differences from the European regional study (see Moschella et al., 2005) are expected when small local detail is considered, taking into account the particularities of this shoreline without tides that alternates between rocky and sandy beaches. Indeed, LCS impact processes do not scale up or down (Airoldi et al., 2005) and the environmental consequences of a construction very much depend on the ecological context, local characteristics and background knowledge (Airoldi et al., 2005).
We here analyse what grows on several LCS structures compared with the nearest rocky communities. We stress the differentiation of habitat (seaward vs landward) and the effect of single structures (i.e. formed by a single barrier) vs more complex structures (i.e. formed by several barriers) resulting from the LCS construction (see Fig. 1). We also consider patterns related to the distance between natural and artificial substrates, and the importance of the habitat surrounding the LCS (i.e. sand or rocks) for community composition. The final goal is to provide informative data to managers on how we expect the biota to evolve following the apparently inevitable construction of LCS on Mediterranean coasts.
Study sites and sampling design
The study was conducted within the framework of the project 'Environmental Design of Low Crested Coastal Defence Structures' (DELOS; EVK3 2000-22038). Three sites with a different number of LCS units were studied (see Fig. 1 and details in Table 1).
In all cases the structures were parallel to the shoreline, emerged 1 to 2 m above sea level and were situated 50 to 200 m from the coastline. On this coast, tides are not significant and the LCS remain emerged except during occasional storm events, when all three structures can be overtopped by the waves. The structures are located on the Catalan coast: Altafulla (Costa Daurada) with a single structure, Cubelles (Costa Daurada) with three LCS closed by two lateral jetties and Sant Antoni de Calonge (Costa Brava) with three LCS with groins, also closed by jetties (Fig. 1). The three LCS structures were constructed (or reworked) more than ten years ago to prevent beach erosion. The beaches are regularly nourished to supplement the action of the LCS. For each structure, we selected the nearest natural rocky shore as a reference area to compare the composition of the biota with that of the LCS.
The cover and diversity of epibiont species was studied in the upper infralittoral zone (i.e. the zone characterised by alternation of emersion and submersion; see Guidetti et al., 2004). The biota was studied in late spring and summer from 2001 to 2003. Visual inventories were conducted from outside the water when the water receded with the small wave movement. Two scientists independently estimated the cover (as a percentage of area occupied) of all macroscopic species in a 25 x 25 cm quadrat subdivided into 25 subquadrats. The percent cover was quantified by giving each individual taxon a score ranking from presence (less than 5%) to 25%, 50%, 75% and 100% cover in each small quadrat, and then adding up the scores of all the smaller quadrats. This method has proved to give consistent estimates when compared with other, more robust techniques, particularly in areas with high hydrodynamics and low disturbances such as the ones studied here (e.g. Sala and Ballesteros, 1997; Cebrián et al., 2000). The species were either identified in situ or taken to the laboratory for further classification. A single data matrix was kept from the field data.
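A small sketch of how the subquadrat scores can be turned into a percent cover per quadrat (the 2.5% value assigned to bare 'presence' records is an assumption for the example, since the method only bounds it below 5%):

```python
import numpy as np

SCORE_TO_COVER = {'absent': 0.0, 'presence': 2.5, '25': 25.0,
                  '50': 50.0, '75': 75.0, '100': 100.0}

def percent_cover(subquadrat_scores):
    """Mean cover (%) of one taxon over the 25 subquadrat scores."""
    return float(np.mean([SCORE_TO_COVER[s] for s in subquadrat_scores]))
```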
Four randomly distributed points with three replicated quadrats per point were sampled in three different situations for each LCS: 1) exposed side (seaward), 2) sheltered side (landward), and 3) within blocks (only in Altafulla).In the reference areas, we used the same sampling design but only the exposed side of the natural rocks was available for study.
Statistical analysis
The total number of species (S), the percentage area covered (N) and the Shannon-Wiener diversity index (H') were obtained for each sample and analysed by means of a between-groups (sites) and exposure (orientation) ANOVA design, and a between-groups (sites) and nature of the coast (LCS or natural rock) design (Statistica software). We had three sites (Altafulla, Cubelles and Calonge) with three levels each: reference site (R), seaward side (S) and landward side (L). Each level had twelve replicates. Post hoc multiple comparisons were done by the Tukey HSD test (Sokal and Rohlf, 1995). All data were tested for normality and homoscedasticity prior to statistical analysis. The patterns of distribution of the communities were explored and graphically represented using multi-dimensional scaling (MDS) and the Bray-Curtis similarity index (Clarke and Warwick, 1994). Significance tests for differences among locations and sites were performed using the analysis of similarities (ANOSIM), whereas the similarity percentage procedure (SIMPER) was employed to identify the contribution of each species to dissimilarities (Clarke, 1993; Clarke and Warwick, 1994). Data were square root transformed and standardised prior to multivariate statistical analysis.
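For illustration, minimal versions of two of these quantities: the Shannon-Wiener index computed on relative covers (natural logarithms are assumed here; some authors use log2) and the Bray-Curtis dissimilarity on square-root transformed, standardised samples.

```python
import numpy as np
from scipy.spatial.distance import braycurtis

def shannon_wiener(covers):
    """H' = -sum(p_i * ln(p_i)) over taxa with non-zero cover."""
    p = np.asarray(covers, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

def bray_curtis(sample_a, sample_b):
    """Bray-Curtis dissimilarity after sqrt transform and standardisation."""
    a, b = np.sqrt(sample_a), np.sqrt(sample_b)
    return float(braycurtis(a / a.sum(), b / b.sum()))
```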
RESULTS
The composition of the natural communities differed significantly among the three sites (Fig. 2), thus indicating regional differences, particularly between the northern (Sant Antoni) and the two southern stations (Altafulla and Cubelles) (Table 2a). The main differences were the following: the cover of Corallina elongata Ellis and Solander was very high in the south and low in Sant Antoni, whereas Cystoseira mediterranea Sauvageau showed the opposite pattern (Figs. 3a, b). There was a high cover of Mytilus galloprovincialis Lam. in Cubelles (Fig. 3c). In Altafulla, there were also two samples with C. mediterranea, as seen by the two outliers closer to the Sant Antoni stations (Fig. 3). Sant Antoni showed the highest number of species and diversity (Figs. 4a, b), while Cubelles and Altafulla had similar numbers of species, with the diversity being particularly low in Altafulla (Fig. 4b). The total cover of the biota was similar in the three reference sites (Fig. 4c).
The composition of the biota was significantly different between natural and LCS substrates, including the different orientations of the structures (landward and seaward; Fig. 5; Table 2b). Corallina elongata was the most abundant species at all sites and in all orientations (Fig. 6a), contributing 80% of the similarity in the LCS assemblages. The highest dissimilarity was found between the reference points and the landward side of the structure (64%), followed by the reference points and the seaward side (57%) and finally the dissimilarity within the structure (53%).
The different cover of Corallina elongata and Mytilus galloprovincialis explained the differences between the assemblages on the natural rock and those on the exposed side of the structure. Within the LCS, the high cover of Lithophyllum incrustans Philippi at the exposed site (Fig. 6b) and the low cover or absence of M. galloprovincialis (Fig. 6c) on the landward side of the LCS were mainly responsible for the differences observed.
The number of taxa was always significantly higher on the natural substrate than on the LCS (Fig. 4a; ANOVA p<0.001), the latter including between 60 and 95% of the taxa present at the reference sites. The diversity was also significantly higher on the natural shores than on the LCS, except on the landward side of the structure in Altafulla, where it was similar to that of the reference site (Fig. 4b). The cover, however, did not show any significant pattern according to the orientation (Fig. 4c). The number of structures and the presence of groins did not seem to have any effect on the LCS's community parameters, since Altafulla (1 structure) showed an intermediate number of species, diversity and cover compared with the other localities (3 structures; Figs. 4a, b, c).
Differences in the number of species between the natural and the artificial substrates increased proportionally to the distance between the two systems (Fig. 7).
DISCUSSION
Regional differences in the hard-bottom assemblages were more important than the nature of the substrate (i.e. natural vs coastal defences) in the composition of the epibiota of the three systems studied. The main regional differences encountered were in agreement with the pattern of community composition in relation to water quality all along the Catalan coast described by Ballesteros et al. (1984) and reassessed in Pinedo et al. (2006). At the northern station, the reference sites are characterised by the dominance of communities typical of high water quality (i.e. dominance of Cystoseira mediterranea), while in the south, the reference communities indicate a stronger human influence (i.e. dominance of the Corallina elongata turf). This shift in community composition indicates some degree of eutrophication (i.e. presence of rivers and ports). Within the two southern stations, the high cover of Mytilus galloprovincialis in Cubelles compared with Altafulla also reflects an even lower water quality at the former station.
The different mineral composition of the reference sites does not seem to have an effect on the composition of the epibiota. Studies showing the importance of biomineralogy in the structuring of sessile benthic communities report that quartz-rich substrates hold less diverse and mature communities than limestone coasts (e.g. Bavestrello et al., 2000), contrary to the patterns observed in this study. Thus, water quality seems to be the major factor driving the structure and composition of the epibiota on the Catalan coast on both natural and artificial substrates.
The results from this study also confirm previous reports of intrinsic differences between natural and artificial rocky shores (e.g. Bulleri, 2005; Ballesteros et al., 2006), even for these three LCS structures that had been colonised for over 10 years, which is close to the time needed for the development of a 'normal' hard-substrate community (Hawkins et al., 1983). Indeed, the species present on the studied LCS are later colonisers (e.g. Moschella et al., 2005), indicating maturity in the artificial assemblages. The main differences between the natural and the artificial assemblages are a lower number of species and lower diversity on the artificial substrates and a significantly different community composition. These patterns may be due to a less diverse substrate surface on the low-crested defence compared with the natural rocks (see Fletcher and Callow, 1992 for a review). The presence of crevices and pools at different scales creates more diverse microhabitats that can potentially be colonised by a number of species with different autecologies (Metaxas and Scheibling, 1993; Russell, 1999). Indeed, experimental manipulation of artificial substrates towards increasing micro-crevices and fractures, and also increasing the presence of rock pools, resulted in an increase in the number of species and diversity on the artificial substrate (Moschella et al., 2005).
The main species responsible for the differences between the natural and the artificial substrates were Corallina elongata, with its high cover on the artificial substrate, and Mytilus galloprovincialis, with the opposite pattern, being particularly rare on the landward side of the LCS (Figs. 3a, c). C. elongata is the most common species on the Catalan coast when water quality is good to acceptable (i.e. Pinedo et al., 2006), and it extends all over the Cystoseira mediterranea zone when the latter disappears (Ballesteros et al., 1984; Thibaut et al., 2005). On the artificial substrates studied here, the presence of C. elongata indicates some stabilisation of the substrate, since the turf was well developed and several centimetres wide, and this species needs several years to expand (growth rates for Corallina sp. of 1.4 mm month⁻¹; Andrake and Johansen, 1980).
Moreover, frequent rebuilding and perturbation of boulders in LCS generate more immature communities dominated by fast-growing ephemeral algae together with Mytilus sp. (Bacchiocchi and Airoldi, 2003). The high cover of Corallina sp. on the LCS is probably just a consequence of the low diversity of the community on the artificial substrate.
The proliferation of Mytilus galloprovincialis on artificial substrates is a common phenomenon that has already been described in different seas [e.g. Ballesteros et al. (2006) in the Mediterranean, Bacchiocchi and Airoldi (2003) in the Adriatic, Glasby and Connell (2001) in Australia]. M. galloprovincialis is particularly abundant on the exposed sides of the structures, while it may become rare on the sheltered sides either because of confinement (Burcharth, 1993) or because people collect the mussels for recreational purposes (Bacchiocchi and Airoldi, 2003). In Cubelles, we also observed extremely high sediment deposition on the sheltered side of the structure (Gacia, unpublished data), which left a fine film of silt covering the assemblages on the landward side. We thus hypothesise that the lack of M. galloprovincialis on the landward side in Cubelles may be due to the high level of confinement of the shallow landward blocks suffocating the filtration system of the bivalve (Dare, 1976). Indeed, M. galloprovincialis is very abundant on the Catalan coast in polluted and perturbed areas (such as ports, the Ebro Delta and coastal constructions in general), but it also indicates high hydrodynamics (Pinedo et al., 2006), which explains why it is so abundant on the exposed side of the structures.
A recent extensive investigation of intertidal hard-bottom assemblages from the Catalan coast shows that the quality descriptors used for natural assemblages cannot be applied to artificial blocks (Ballesteros et al., 2006). Indeed, Cystoseira mediterranea, the alga characteristic of pristine intertidal communities, never occurs on artificial blocks, even when they are several decades old. Thus, taking our results into consideration, we can conclude that the community encountered on the exposed side of the different structures can be considered the 'climax' of the hard-bottom assemblages growing on artificial substrate. It is important to recall that the management of the structure will largely determine the succession of communities on artificial blocks (e.g. Burcharth and Lamberti, 2005), and that periodic rebuilding should set back succession processes in the biota, causing a substitution of this 'mature' community by communities dominated by fast-growing ephemeral algae (Moschella et al., 2005; Bulleri, 2005), the latter potentially causing an accumulation of algal debris on the sheltered shores and thus decreasing the quality of the coastal waters.
In this study, the number of structures in the LCS system does not appear to have a significant influence on the composition of the associated hard-bottom communities. This is probably due to the proximity of natural rocks to the structures, which ensures the arrival of propagules at the different LCS elements. However, there was an inverse relationship between the number of species in the LCS communities and the distance between the LCS and natural rocky shores (see Fig. 7). Although this pattern should be explored further, it indicates the importance of natural rocky shore assemblages in supplying the artificial substrates with propagules and in allowing their communities to develop similarly to the natural ones.
We show here how the presence of natural rocks near artificial sea defences minimises the differences between the communities growing on the natural substrates and those growing on the artificial ones. This contrasts sharply with what occurs on other coasts such as the Adriatic, where the presence of large extensions (hundreds of kilometres) of artificial structures in areas without natural rock may transform the hard-bottom assemblages into corridors for invasive species and/or decrease the genetic separation between native populations (Airoldi et al., 2005).
In summary, the impact of a coastal construction should be regarded from a coastal-cell perspective, since major changes in sediment transport and soft-bottom assemblages have been shown, and the composition of the new assemblages growing on hard substrates will depend on the nature and characteristics of the nearby coast. Moreover, in planning coastal constructions, coastal managers should complement their focus on hydrodynamic and sediment morphodynamic issues with careful analysis of biotic patterns and dynamics at a regional scale, always avoiding overconstruction and major transformations that may cause significant perturbations in the functioning of our coasts.
FIG. 2. - MDS plot for the biota from natural rocky shores at the three sites. Triangles are for Sant Antoni de Calonge, squares for Cubelles and circles for Altafulla. Differences between groups were tested using ANOSIM (global R=0.579; p<0.001).
FIG. 4. -Differences in number of species (a), diversity (b) and cover (c) in the composition of the biota between reference sites and orientations of LCS blocks.White bars: Sant Antoni de Calonge; grey bars: Cubelles; dotted bars: Altafulla; * indicates significant differences (p<0.01) between sites within treatments.
FIG. 7. - Differences in number of species (N) and Shannon diversity index (H') between the exposed side of the LCS (lcs) and the respective reference sites (r) at each sampling station, in relation to their distance in metres.
TABLE 1. - Descriptors of the three LCS structures studied.
TABLE 2. - Results of the two-way crossed ANOSIM with significant differences for the factor 'Region' (global R = 0.579; p<0.001), with levels S = Sant Antoni, C = Cubelles and A = Altafulla, and 'Nature of the substrate' (global R = 0.468; p<0.001), with levels R = reference, L = landward, S = seaward. Data were square root transformed and standardised. | 2018-12-05T08:05:40.798Z | 2007-06-30T00:00:00.000 | {
"year": 2007,
"sha1": "bab1d685e11d5e87c0d35c08d40c14c6480f3cb1",
"oa_license": "CCBY",
"oa_url": "https://scientiamarina.revistas.csic.es/index.php/scientiamarina/article/download/5/5/5",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bab1d685e11d5e87c0d35c08d40c14c6480f3cb1",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
18996001 | pes2o/s2orc | v3-fos-license | A Rapid, Simple, Liquid Chromatographic-Electrospray Ionization, Ion Trap Mass Spectrometry Method for the Determination of Finasteride in Human Plasma and its Application to Pharmacokinetic Study
A fast, accurate, sensitive, selective and reliable method using reversed-phase high performance liquid chromatography coupled to electrospray ionization ion trap mass spectrometry was developed and validated for the determination of finasteride in human plasma. After protein precipitation with perchloric acid, satisfactory separation was achieved on a Zorbax Eclipse® C8 analytical column using a mobile phase consisting of acetonitrile and 2 mM ammonium formate buffer (58:42, pH adjusted to 2.5 using formic acid); the flow rate was 0.25 mL min−1 and the column oven was set to 50°C. Tamoxifen citrate was used as internal standard. The method monitored the [M+H]+ ions of finasteride and the IS at m/z 373 and 372, respectively, in selected ion monitoring (SIM) mode. The calibration curve was linear over the range of 0.1–60 ng mL−1. The limit of quantification for finasteride in plasma was 0.1 ng mL−1. The intra-day and inter-day precision (repeatability) were 2.68-13.87% and 2.14-14.69%, respectively. Intra-day and inter-day accuracy were 98-101.57% and 99.7-110%. The assay has been successfully used to estimate the pharmacokinetics of finasteride after oral administration of a 5 mg dose of finasteride in 12 healthy volunteers.
Introduction
Finasteride is a synthetic antiandrogen which acts by inhibiting type II 5-alpha reductase, the enzyme that converts testosterone to dihydrotestosterone (DHT). It is used as a treatment for benign prostatic hyperplasia (BPH) at low doses, and for prostate cancer at higher doses.
It is also used for the treatment of male-pattern baldness (androgenetic alopecia) in men at a dose of 1 mg daily. The study by Thompson et al. (1) indicates that finasteride reduces the rate of prostate cancer by 30%. It is also indicated for use in combination with doxazosin therapy to reduce the risk of symptomatic progression of BPH. Several methods for the determination of finasteride in biological samples have been developed. These methods include high-performance liquid chromatography (HPLC) (2-5), polarography (6), liquid chromatography-tandem mass spectrometry (7-9) and an isotope dilution mass-spectrometric method (10). A review article by Macek discusses these methods in detail (11). HPLC is often the method of choice (13). However, HPLC methods for finasteride measurement in plasma suffer from limitations such as low sensitivity, poor selectivity and being time-consuming due to complex sample preparation procedures. Among these methods, the LC-MS/MS method developed by Guo et al. (12) may have the highest sensitivity, but the determination process is complex, and a two-stage cleaning process and the use of different cartridges seem to be necessary to obtain good selectivity and accuracy. From the reported procedures, the complexity and length of the sample pretreatment can be reflected by the different limits of quantification (LOQ), which vary from 0.2 to 10 ng mL−1.

LC-MS has become a widely used analytical tool for pharmacokinetic studies and the quantification of drugs and metabolites in biological samples due to its high sensitivity and selectivity. In the present study, a fast, sensitive, selective and accurate liquid chromatographic-electrospray ionization ion trap mass spectrometry (LC/ESI-MS) method for the determination of finasteride in human plasma is described. The method was validated with good selectivity, linearity range, precision, accuracy, and limit of quantification (LOQ). After addition of tamoxifen citrate (Figure 1) as internal standard (IS) and protein precipitation with perchloric acid, an LC-ion trap mass spectrometer with an electrospray ionization (ESI) source was used for the quantification of finasteride. The applicability of the developed method was demonstrated by analyzing plasma samples collected from 12 volunteers participating in a pharmacokinetic study.

Reagents and chemicals
Reference standards of finasteride and tamoxifen citrate were obtained from Sigma (St. Louis, MO, USA). Oral dosage forms (1 mg tablets) of finasteride were manufactured by Soha Pharmaceutical Co. (Tehran, Iran) and Merck Pharmaceutical Co. HPLC-grade acetonitrile, methanol, ammonium formate, formic acid and perchloric acid were purchased from Merck (Germany). All solvents were filtered through a 0.45 µm membrane and degassed prior to their use in the analyses. Milli-Q grade (Millipore, Bedford, MA, USA) water was used in all cases. Stock solutions of finasteride and internal standard (IS) tamoxifen citrate at a concentration of 0.1 mg mL−1 were prepared in methanol and stored at −20 ºC.

Instrument
An Agilent LC-MS-1100 ion-trap mass spectrometer interfaced with an electrospray ionization (ESI) ion source was used. Drying gas temperature was set at 350 ºC. Nebulizing gas flow was 10 L min−1. Skimmer 1 and skimmer 2 were at 32.1 V and 6.0 V, respectively. Ion charge control (ICC) was on, with the target adjusted at 100,000 and the maximum accumulation time at 200 ms. Positive selected ion monitoring (SIM) mode and the [M+H]+ ions of both finasteride (m/z 373) and tamoxifen (m/z 372) were chosen for the determination of finasteride (Figure 2). The data were collected and processed using ChemStation software.
Chromatographic conditions
Separations were performed on a 150 mm × 4.6 mm ID, 5 µm particle, Agilent Zorbax Eclipse® C8 analytical column. The mobile phase was a mixture of acetonitrile and 2 mM ammonium formate buffer (58:42), with the pH adjusted to 2.5 with formic acid; the flow rate was 0.25 mL min−1, the column oven was set to 50 ºC, and the total run time was 13 min. The mobile phase was prepared daily.
Plasma sample preparation
To a 500 µL aliquot of plasma, 50 µL of tamoxifen solution (1 µg mL−1 in methanol) was added as internal standard and mixed. Then 100 µL of methanol was added and mixed, followed by the addition of 100 µL of perchloric acid (70%), mixing for 30 s, and centrifugation at 1048.32 g for 10 min. A 50 µL aliquot of the supernatant was injected directly onto the analytical column.
Assay specificity and matrix effect
Specificity was assessed by extracting samples of six batches of blank plasma and then comparing the results with those for plasma samples spiked with tamoxifen (IS) and finasteride. The chromatograms were also inspected visually for interfering chromatographic peaks from endogenous substances.
In order to investigate the effect of ion suppression on the mass signals, the following procedure was performed. The infusion pump was connected to the HPLC system by a "zero volume tee" before the ion source, with the HPLC system pumping the same mobile phase used in the routine analysis of finasteride, i.e. acetonitrile: 2 mM ammonium formate buffer (58:42) at 0.25 mL min−1. The infusion pump was set to transfer 30 µL min−1 of a mixture of analyte and internal standard in mobile phase (at concentration levels of 50 ng mL−1 and 10 ng mL−1 for finasteride and tamoxifen, respectively). A sample of human pooled blank plasma was subjected to the sample preparation procedure described in section 2.4. The supernatant was injected into the HPLC system while the standard mixture was being infused. Any ion suppression would be observed as a depression of the MS signal in this system.
Stability
The short-term room temperature, long-term storage, stock solution, post-preparative and freeze/thaw stabilities were tested. To test the stability of finasteride in plasma, QC samples were stored under different conditions. The freeze/thaw stability test was performed over three freeze-thaw cycles; specifically, samples were frozen at -20 ºC for 24 h and thawed at room temperature. Short-term stability testing was performed at room temperature over 8 h, and long-term stability was examined at -20 ºC over 2 months. Post-preparative stability testing was performed by comparing next-day analysis with the first intra-day analysis.
Validation
The method was validated for selectivity, linearity, precision, accuracy and recovery. The selectivity test was performed by analyzing blank plasma samples to test for interference at the retention times of finasteride and the IS. Linearity was tested over the concentration range of 0.1-60.0 ng mL−1. In order to construct the calibration curve, a set of eleven non-zero finasteride calibration standards with concentrations of 0.1, 0.25, 0.5, 1.0, 5.0, 10.0, 20.0, 30.0, 40.0, 50.0 and 60.0 ng mL−1 was prepared by spiking proper amounts of standard solution into blank plasma samples and following the procedure described in section 2.4. The retention times of the IS and finasteride were 10.7 and 12.6 min, respectively. The concentrations of unknown samples were calculated using the regression equation of the standard curve.
Three quality control samples at concentrations of 0.1, 30.0 and 60.0 ng mL−1 (LLOQ, MQC and HQC) were prepared by spiking blank plasma samples (using the same stock solutions as used for the calibration standards), and the added amount of internal standard in each quality control sample was also 100 ng mL−1. Sensitivity was determined by analyzing control human plasma samples in replicate (n = 6) spiked with the analyte at the lowest level of the calibration curve, i.e. 0.1 ng mL−1. Intra-day precision and accuracy were evaluated by analyzing each QC sample six times on the same day, while inter-day precision and accuracy were evaluated by analyzing each QC sample on 6 consecutive days.
The recovery of finasteride and the IS from plasma samples after addition of perchloric acid was calculated by comparing the peak area responses of the extracted analytes with those of unextracted standards at equal amounts. Finasteride standard solutions at concentrations of 0.1, 30.0 and 60 ng mL−1 were prepared using mobile phase and standard working solution, and the added amount of internal standard was 100 ng mL−1.
Pharmacokinetic study
The developed method was used to investigate the plasma concentration-time profile of finasteride after administration of five 1-mg tablets (total 5 mg). The study was approved by the ethics committee of the Shahid Beheshti School of Pharmacy. Twelve healthy male volunteers participated in the investigation. The age of the volunteers was between 24 and 46 years (average 32.4 years), their body weight was between 55 and 83 kg (mean 71.0 kg), and their body height was 155-178 cm (mean 169.9 cm). All volunteers underwent a health examination to confirm normal liver function, heart rate, blood parameters and electrocardiogram. Volunteers took no other medication before or during the 2-week study period. Following written informed consent, volunteers took five 1-mg tablets of finasteride with 240 mL of tap water. Drinking and smoking were not allowed; a light breakfast was given 3 h after drug administration and a low-fat lunch at 5.5 h. Blood samples were collected in heparinized tubes pre-dose (0 h) and at 0.3, 0.6, 1, 1.3, 1.6, 2, 2.3, 2.6, 3, 3.5, 4, 5, 6, 8, 10, and 24 h post-dose. Plasma was immediately separated by centrifugation at 143.4 g and stored at −20 ºC until analysis.
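For orientation, the standard non-compartmental parameters (Cmax, Tmax, AUC) can be extracted from such a sampling schedule with the linear trapezoidal rule. The minimal Python sketch below uses the protocol's time points but entirely invented concentrations, so the output numbers are illustrative only:

```python
import numpy as np

# Sampling times (h) from the protocol above; the concentrations (ng/mL) are
# invented for illustration and are NOT the study data
t = np.array([0, 0.3, 0.6, 1, 1.3, 1.6, 2, 2.3, 2.6, 3, 3.5, 4, 5, 6, 8, 10, 24])
c = np.array([0, 8, 20, 35, 42, 45, 44, 41, 38, 35, 31, 27, 20, 15, 8, 5, 0.7])

cmax = c.max()                 # maximum observed concentration
tmax = t[c.argmax()]           # time of maximum concentration
auc_0_24 = np.trapz(c, t)      # AUC(0-24 h) by the linear trapezoidal rule

print(f"Cmax = {cmax:.1f} ng/mL at Tmax = {tmax} h")
print(f"AUC(0-24 h) = {auc_0_24:.1f} ng*h/mL")
```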
Optimization of chromatographic and mass spectrometric conditions
The developed method was used for a pharmacokinetic study in 12 healthy volunteers. After oral administration of 5 mg finasteride, the concentration versus time profile of finasteride was constructed. The results showed that the method was reliable and adequate to provide a pharmacokinetic concentration-time profile for a finasteride dose as low as 5 mg.
The purpose of the present study was to develop a sensitive, simple, fast and reliable LC/ESI-MS method for the determination of finasteride in human plasma. In previously published HPLC methods, an acetonitrile-potassium dihydrogen phosphate solution or acetonitrile-water (2-5) has usually been used as the mobile phase. In some other published methods (11, 12) a drying step after liquid-liquid extraction has been necessary. In the present study, separation of finasteride and tamoxifen was achieved on an Agilent Zorbax Eclipse® C8 analytical column using a mobile phase consisting of acetonitrile-water. Finasteride and tamoxifen were protonated in the acidic mobile phase before entering the ionization chamber.
In order to select an appropriate ionization mode for the LC/MS analysis, mass spectra were acquired in ESI mode by scanning from 50 to 850 amu. The base peak intensities obtained in positive mode were higher than those obtained in negative mode. The positive ion mass spectra of finasteride and the internal standard in scan mode were characterized by protonated molecular ions [M+H]+ at m/z 373 and 372, respectively, as base peaks. Therefore, the SIM mode involved selective monitoring of m/z 373 and 372 in the vicinity of the retention times of finasteride and tamoxifen, respectively. The optimized ESI-MS conditions are described in Section 2.2.
A few representative chromatograms of finasteride and tamoxifen (IS) are shown in Figure 3. The retention times of IS and finasteride in total ion chromatogram were 10.7 and 12.6 min, respectively.
Selectivity and matrix effect
The LC/ESI-MS method shows high selectivity because only selected ions from the analytes of interest are monitored. Chromatograms of the blank and the spiked human plasma samples (see Figure 3) indicated no significant interference at the retention times of the analyte and the IS. The ESI positive MS spectra of finasteride and tamoxifen were dominated by the [M+H]+ ions, i.e. m/z 373 for finasteride and 372 for tamoxifen.
It was very important to investigate the matrix effects to develop a reliable and reproducible LC/ESI-MS method. Here, the matrix effect was evaluated by the following experiments: finasteride and tamoxifen were spiked separately into human blank plasma as well as into the mobile phase as solvent.
After being treated according to the procedure described in Section 2.4, these samples were injected into LC/ESI-MS. No significant difference was observed between the peak area in chromatogram of spiked plasma samples and the peak area in the chromatogram obtained by injection of the solution of finasteride and IS in mobile phase. It was also shown that no endogenous compounds significantly influenced the ionization of finasteride and IS. Furthermore, in the set up described in section 2.5 no significant ion suppression was observed for the MS signals of finasteride and tamoxifen.
Linearity
The linear regression equation was y = 0.102x + 0.022, with a correlation coefficient of 0.999, where y is the peak-area ratio of finasteride to the IS and x is the finasteride concentration. Other linear regression data are SD = 0.0448, slope = 0.102 and intercept = 0.022. The method reported here is very sensitive owing to the optimized ESI-MS conditions and the advantages of LC/ESI-MS in the selected ion monitoring (SIM) mode. The lowest standard concentration in the calibration curve was considered the lower limit of quantification (LLOQ), which was 0.1 ng mL−1. At the LLOQ, the mean percentage deviation from the nominal concentration was 8.5% and the precision was 13.9%. A good signal-to-noise ratio (10:1) was observed at the LLOQ, indicating that this concentration could be reliably quantified. The sensitivity of the developed method was higher than that of previously published HPLC methods (2-5) and polarography (6) for the determination of finasteride in plasma.
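As an illustration of the calibration workflow (the peak-area ratios below are simulated around the reported fit, not measured data), a minimal Python sketch of the unweighted least-squares fit and back-calculation is:

```python
import numpy as np

# Nominal calibration concentrations (ng/mL) from the validation section
conc = np.array([0.1, 0.25, 0.5, 1, 5, 10, 20, 30, 40, 50, 60])
# Peak-area ratios simulated around the reported fit y = 0.102x + 0.022
rng = np.random.default_rng(1)
ratio = 0.102 * conc + 0.022 + rng.normal(0.0, 0.01, conc.size)

slope, intercept = np.polyfit(conc, ratio, 1)   # least-squares line
r = np.corrcoef(conc, ratio)[0, 1]              # correlation coefficient
print(f"y = {slope:.3f}x + {intercept:.3f}, r = {r:.4f}")

# Back-calculate an unknown sample from its measured peak-area ratio
unknown_ratio = 0.53
print(f"estimated concentration: {(unknown_ratio - intercept) / slope:.2f} ng/mL")
```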
The sensitivity of the developed method was also comparable with that of previously reported LC/MS-MS methods (7,9,11,12). The lowest LLOQ reported for LC/MS-MS is 0.2 ng mL−1, which is comparable with the LLOQ of our method (0.1 ng mL−1). However, the reported dynamic range for LC/MS-MS is 0.2-120.0 ng mL−1, which is wider than the dynamic range of our method (0.1-60.0 ng mL−1). This is due to the nature of the ion trap mass analyzer, which has a limited space for ion accommodation. In this analyzer, accumulation of ions beyond a certain number in the trap leads to cross-talk among them, which adversely affects the linearity of the responses produced by the system. Furthermore, the sample preparation procedure, which consists of one-step protein precipitation and direct injection, is much simpler than those of other LC-MS methods (12) comprising the use of SPE and liquid-liquid extraction.
Precision, accuracy and recovery
Both the intra- and inter-day accuracy and precision of the developed method were determined by six replicate analyses of quality control samples containing known concentrations of finasteride ranging from 0.1 to 60.0 ng mL−1. The precision of the method is expressed as the relative standard deviation (R.S.D.), and the accuracy as the percentage of measured versus nominal concentrations of finasteride in the QC samples. The results of the intra- and inter-day accuracy and precision are listed in Tables 1 and 2.
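For reference, R.S.D. and accuracy statistics of the kind reported in Tables 1 and 2 are computed as in the following minimal Python sketch; the six replicate values are hypothetical:

```python
import numpy as np

def precision_accuracy(measured, nominal):
    """Return (RSD %, accuracy %) for replicate QC measurements."""
    m = np.asarray(measured, dtype=float)
    rsd = 100.0 * m.std(ddof=1) / m.mean()     # relative standard deviation
    accuracy = 100.0 * m.mean() / nominal      # measured vs nominal concentration
    return rsd, accuracy

# Six hypothetical intra-day replicates of the 30 ng/mL (MQC) sample
qc_mid = [29.1, 30.4, 31.0, 29.8, 30.6, 28.9]
rsd, acc = precision_accuracy(qc_mid, nominal=30.0)
print(f"RSD = {rsd:.2f}%, accuracy = {acc:.1f}%")
```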
The composition of the mobile phase was found to be the critical factor for achieving good chromatographic peak shape and resolution. In the present study, a mixture of 2.5 mmol/L ammonium acetate solution and acetonitrile (42:58 v/v, pH adjusted to 2.6) was used as the mobile phase. The selection of tamoxifen as the internal standard was based on our previous experience with the ion-trap LC-MS system, which gives a robust and reproducible signal for this compound.
The matrix effect was minimal and no co-eluting endogenous compound interfered with the ionization of the analyte and internal standard. This was mostly due to the low pH of the mobile phase (2.6) and the use of perchloric acid for protein precipitation, which minimizes ion suppression.
The lower limit of quantification (LLOQ) is twofold better than the lowest quantification limit reported by Guo et al. (12) on a quadrupole LC-MS system. This could be due to the cumulative nature of the ion trap mass analyzer in comparison with the quadrupole analyzer, in which ions pass quickly through the mass analyzer and have no chance to accumulate. Furthermore, the method reported by Guo et al. (12) involves sample extraction procedures and tandem mass spectrometry, two requirements which are not implicated in the present method.
Stability
Stability was evaluated as a part of method validation. Finasteride standards at concentrations of 0.1, 30.0 and 60.0 ng mL−1 (LLOQ, MQC and HQC) were used for the stability experiments. Ten millilitres each of the 0.1, 30.0 and 60.0 ng mL−1 finasteride standard solutions were prepared by diluting standard solution with blank plasma, and the added amount of internal standard in each quality control sample was 100 ng mL−1. The results indicated that the difference in measured concentration from time 0 to 8 h was less than 4.1% when these samples were kept at room temperature, which allowed us to conclude that processed samples were stable for at least 8 h. When these standard solutions were stored at −20 ºC, four freeze-thaw cycles were performed before processing as described in section 2.4; during each cycle, 1 mL of standard was processed. The difference between the measured and nominal concentrations was less than 5.0% at 0.1, 30.0 and 60.0 ng mL−1, and the results indicated that the stability of finasteride was not affected by freezing and thawing. In the long-term stability experiments, after storage for 1 month at −20 °C, more than 96.1% of finasteride remained according to the peak areas at each concentration.
Application
The developed method was successfully used for the determination of plasma concentrations of finasteride after oral administration of five 1-mg tablets (total 5 mg) to 12 healthy volunteers in a bioequivalence study of a generic formulation of finasteride (trademark: Finasteride Soha) and the reference formulation (trademark: Propecia). Figure 4 shows the mean finasteride plasma concentration vs. time profiles obtained after single oral administration of 5 mg of Finasteride Soha (test) and Propecia (reference) in the 12 volunteers.
The pharmacokinetic parameters are shown in Table 3. The obtained values are consistent with previously published reports (12, 14), which indicates the suitability of the developed analytical method for pharmacokinetic studies.
Conclusions
A fast, sensitive and specific LC/ESI-MS method for the determination of finasteride in human plasma was developed and validated. Compared with previously published methods, a significantly lower limit of quantification (0.1 ng mL−1) was obtained. The use of only a 0.5 mL aliquot of plasma and simple protein precipitation with perchloric acid are further advantages of this method. The method has been successfully applied to pharmacokinetic studies with satisfactory results, demonstrating that it is reproducible, sensitive and reliable. | 2017-03-31T01:34:21.259Z | 2012-03-10T00:00:00.000 | {
"year": 2012,
"sha1": "8ebe7d26db5d69b0dea526b2e7605984f5cbd046",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "8ebe7d26db5d69b0dea526b2e7605984f5cbd046",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
243940161 | pes2o/s2orc | v3-fos-license | Unified Formulation of Phase Space Mapping Approaches for Nonadiabatic Quantum Dynamics
Nonadiabatic dynamical processes are one of the most important quantum mechanical phenomena in chemical, materials, biological, and environmental molecular systems, where the coupling between different electronic states is either inherent in the molecular structure or induced by the (intense) external field. The curse of dimensionality indicates the intractable exponential scaling of calculation effort with system size and restricts the implementation of numerically exact approaches for realistic large systems. The phase space formulation of quantum mechanics offers an important theoretical framework for constructing practical approximate trajectory-based methods for quantum dynamics. This Account reviews our recent progress in phase space mapping theory: a unified framework for constructing the mapping Hamiltonian on phase space for coupled F-state systems where the renowned Meyer-Miller Hamiltonian model is a special case, a general phase space formulation of quantum mechanics for nonadiabatic systems where the electronic degrees of freedom are mapped onto constraint space and the nuclear degrees of freedom are mapped onto infinite space, and an isomorphism between the mapping phase space approach for nonadiabatic systems and that for nonequilibrium electron transport processes.
The paper presents a new general formulation for constructing exact mapping approaches on constraint phase space for a finite number of discrete electronic states of nonadiabatic systems.
INTRODUCTION
Because the difference between the mass of an electron and that of a nucleus is at least 3 orders of magnitude, the celebrated Born-Oppenheimer (BO) approximation makes the assumption that the electronic and nuclear motions are separated 5 . In the BO scheme, the adiabatic electronic states are obtained when the coordinates of nuclei are fixed. The potential energy surface (PES) for the relevant adiabatic electronic state is then produced either in advance or on-the-fly as nuclear dynamics is considered.
The BO approximation is, however, not valid in nonadiabatic dynamics that occurs in many important quantum mechanical phenomena, such as photochemistry, electron transport/transfer, and cavity-modified transition processes in chemical, materials, biological, and environmental molecular systems [6][7][8][9] . Nonadiabatic dynamics includes quantum mechanical behavior of both electrons and nuclei. In such processes, nuclear dynamics involves two or more coupled electronic states, where the state-state coupling is either inherent in the molecular structure or induced by the (intense) external field. As it is often intractable to use "numerically exact" methods for realistic multi-dimensional anharmonic systems, considerable effort has been focused on developing practical (trajectory-based) methodologies [10][11][12][13][14] to address the fundamental nature of nonadiabatic dynamic processes in complex (large) molecular systems.
The phase space formulation of quantum mechanics 15-17, which employs both coordinate and momentum variables, offers a widely useful tool for bridging quantum concepts and their classical counterparts. Since the renowned work of Meyer and Miller 13 and its successful applications to describing electronic-to-rotational or electronic-to-vibrational resonance energy transfer in nonadiabatic collision reactions 18,19, the Meyer-Miller mapping model has offered an important theoretical framework for developing practical trajectory-based nonadiabatic dynamics methods 14. A recent review 28 has briefly summarized the important developments and applications of the Meyer-Miller mapping model.
In this Account, we report our more recent progress in phase space mapping theory since 2016, which is focused on a unified framework for mapping Hamiltonian models 1 that offers a new way to derive the Meyer-Miller model, a comprehensive formulation of the one-to-one correspondence mapping onto phase space 1,3,4,42, and an isomorphism between the mapping phase space approach for the coupled multi-state system and that for the second-quantized many-electron Hamiltonian system 2. In the applications to nonadiabatic transition processes, we discuss the comparison of the phase space mapping approaches with two prevailing trajectory-based methods, Ehrenfest dynamics 10 and Tully's fewest switches surface hopping (FSSH) 11.
MEYER−MILLER MAPPING HAMILTONIAN MODEL
In 1927 Dirac demonstrated that the time-dependent Schrödinger equation for an F-state quantum system is identical to Hamilton's equations of motion (EOMs) for the action-angle variables 43 . In 1979 Meyer and Miller suggested a heuristic mapping Hamiltonian model with the "Langer correction" for a finite set of electronic states of a molecular system such that both nuclear and electronic degrees of freedom (DOFs) are treated on the same footing for dynamics 13 .
They use the diabatic representation for simplicity. The Meyer-Miller Hamiltonian is
$$H_{\mathrm{MM}}(\mathbf{R},\mathbf{P},\mathbf{q},\mathbf{p}) = \frac{\mathbf{P}^2}{2M} + \sum_{n,m=1}^{F}\left[\frac{1}{2}\left(q_n q_m + p_n p_m\right) - \gamma\,\delta_{nm}\right]V_{nm}(\mathbf{R}) \qquad (1)$$
Here, R, P are the nuclear coordinate and momentum variables, (q, p) are the mapping coordinate and momentum variables of the F electronic DOFs, and γ is the zero-point-energy (ZPE) parameter. The corresponding quantum Hamiltonian operator for the coupled F-electronic-state system is
$$\hat{H} = \frac{\hat{\mathbf{P}}^2}{2M} + \sum_{n,m=1}^{F} |n\rangle V_{nm}(\hat{\mathbf{R}}) \langle m| \qquad (2)$$
where the F electronic states form an orthogonal complete basis set, that is, $\sum_{n=1}^{F} |n\rangle\langle n| = \hat{I}_{\mathrm{ele}}$. Here $\hat{I}_{\mathrm{ele}}$ is the identity operator of the electronic state space, and $V_{nm}(\mathbf{R})$ are the elements of the real symmetric matrix for the potential energy operator.
In Ref. 20, Stock and Thoss used the oscillator model of angular momentum proposed by Schwinger and suggested the mapping relations $|n\rangle\langle m| \mapsto \hat{a}_n^{\dagger}\hat{a}_m$ (eq 8) with the bosonic commutation relation $[\hat{a}_n, \hat{a}_m^{\dagger}] = \delta_{nm}$ (eq 9), where $\delta_{nm}$ is the Kronecker delta. Substitution of eqs 8 and 9 into eq 7 leads to eq 1, which demonstrates that the parameter γ = 1/2 comes from the commutation relation eq 9 and is the ZPE of the harmonic oscillator for each underlying electronic DOF 20. It is then evident that eq 1 is an exact mapping Hamiltonian model for eq 2, the coupled F-electronic-state Hamiltonian in quantum mechanics. In practice, the ZPE parameter γ has also been treated as an adjustable parameter with values other than 1/2. In the mapping onto the space with a single excitation, an excitation stands for the occupation of the corresponding state, and the vacuum state, |0⟩, is orthogonal to any occupied state |n⟩.
2) Because only a single excitation is invoked, we can define the creation and annihilation operators as $\hat{a}_n^{\dagger} = |n\rangle\langle 0|$ and $\hat{a}_n = |0\rangle\langle n|$. Substituting these into eq 13 yields the corresponding mapping relations, where Γ_nm is the element in the nth row and mth column of the commutator matrix Γ. Equation 16 then implies the commutation relation of eq 18, which is the commutation relation between the position and momentum operators of the quasi-particles of the electronic mapping DOFs. It is evident that the conventional canonical commutation relation eq 9 is only a special case of eq 18.
When the equality of eq 19 holds for the mapping phase variables of the electronic DOFs, the mapping Hamiltonian takes the form of eq 20. The conventional Meyer-Miller mapping Hamiltonian eq 1 is intrinsically a special case with Γ = γI. Models III, IV, V and VI for the coupled multi-state Hamiltonian operator (proposed in Ref. 1) lead to eqs 21-25. Similar to eq 19, when the equality holds for the mapping phase variables of the electronic DOFs for eq 21 or eq 25, one obtains eq 26 or eq 27. Here we demonstrate two more mapping Hamiltonian models. Consider an equivalent representation of the Hamiltonian operator of eq 7. Employing the mapping strategy by analogy with the classical vector as described in Section III, one obtains eq 30; switching between q and r and between their conjugate momenta makes no difference, and it is then trivial to obtain eq 33 from eq 32. It is evident that the two mapping Hamiltonian models proposed in Ref. 47 are simply special cases of eq 30 and of eq 33, which are yielded in the unified framework for mapping Hamiltonian models for coupled multi-state systems 1. The Clifford algebra can be used for the mapping Hamiltonian models (eqs 21-25 and eqs 32-33) that involve 4F mapping phase variables for the electronic DOFs. When different mapping Hamiltonian models in the unified framework are treated in the same fashion, they in principle produce the same results, although their convergence performance may differ.
General Formulation of the One-to-One Correspondence Mapping on Phase Space for Systems Involving Both Continuous and Discrete DOFs
In addition to the mapping Hamiltonian, the evaluation of physical properties lies at the center of the phase space formulation of quantum mechanics. As first pointed out for general coupled multi-state systems in Ref. 3, a one-to-one correspondence mapping (eq 34) can be established for developing the formulation for the evaluation of physical observables. In eq 34, the possible value of the parameter γ lies in (−1/F, ∞), dμ represents the invariant measure on the mapping phase space for the nuclear and electronic DOFs, Tr_n stands for the trace over the nuclear DOFs, and Tr_e is the trace over the F electronic states. The inverse one-to-one correspondence mapping from phase space is given by eq 35. Because the nuclear DOFs involve infinite energy levels, their integrals are over the mapping nuclear phase space with infinite boundaries. The classification scheme 16,17 can be recast into the definition of the mapping kernel for the nuclear DOFs (in eq 36) and that of the inverse kernel (in eq 37). In eq 40 and eq 41, f(ζ, η) is a scalar function that defines the mapping nuclear phase space of choice; for example, the Wigner function 15 corresponds to a specific choice of f(ζ, η).
As long as we exactly solve the EOMs (for nuclear and electronic DOFs) in eq 47, the formulation of the correlation function eq 46 is exact for describing nonadiabatic systems 3,4 .
It is, however, often challenging if not at all impossible to exactly solve the EOMs when both nuclear and electronic DOFs are coupled. When we make the trajectory-based dynamics approximation, eq 46 is then recast into When only the electronic state variables evolve with time, that is, in the frozen nuclei limit, trajectory-based dynamics governed by Hamilton's EOMs of the mapping Hamiltonian (eq 1, eq 15, and other exact mapping Hamiltonian models) produce exact results, that is, eq 48 is equivalent to eq 46. When both nuclear and electronic DOFs are involved, the independent trajectory generated by the mapping Hamiltonian of Section 3.1 is an approximation to the quantum Liouville equation of the corresponding phase space, that is, eq 48 is an approximation to eq 46. The formulation of the correlation function eq 48 is often expressed on constraint space Here occ j denotes the index of the initially occupied state. Both the frozen-nuclei limit and Born-Oppenheimer limit are satisfied in the eCMMcv approach. Our general phase space formulation is not limited to the mapping of a finite set of states onto constraint phase space eq 34 or eq 49. Other options that satisfy eq 19, eq 26, or eq 27 are possible for the constraint space, upon which the one-to-one correspondence mapping can be established. More discussion on this will be available in a forthcoming paper.
Finally, we note that it is easy to extend the phase space mapping approach to the adiabatic representation or other representations. As shown in the Supporting Information of Ref. 42, we can directly apply the strategy of Ref. 29 to the comprehensive mapping Hamiltonian model.
APPLICATIONS TO NONADIABATIC SYSTEMS
The preceding discussion has reviewed the unified framework for phase space mapping approaches for nonadiabatic quantum dynamics. Below we highlight a few illustrative applications of the eCMM and eCMMcv approaches. Ehrenfest dynamics 10 or FSSH 11 is also implemented for comparison. The FSSH results are obtained first in the adiabatic representation, then projected to the diabatic representation.
The Spin-Boson Model
The spin-boson model is a prototype model for such as electron transfer/transport processes. It depicts a two-electronic-state system coupled with a harmonic vibrational bath environment.
Such a type of model involves key features of nonadiabatic quantum systems in condensed phase 49 . As "numerically exact" results are often available [50][51][52] Hamiltonian eq 20 when nm nm . Parameter of eq 1 in fact represents a parameter for the diagonal element of a diagonal commutator matrix. That is, commutator matrix Γ is equal to the product of a constant ( ) and an identity matrix. It hints that parameter of eq 1 can be negative. Figure 3 demonstrates that the negative value for parameter is indeed possible as well as useful for the spin-boson model. The most important feature of Figure 3 is that parameter of eq 1 in principle should not to be interpreted as a conventional ZPE parameter. This is confirmed by more results for the spin-boson model at even zero temperature in Ref. 3 . (Adapted with permission from Ref. 4 . Copyright 2021 American Chemical Society.)
Three-Electronic-State Photodissociation Models
The second set of benchmark models are the coupled three-electronic-state models whose PESs are Morse oscillators, as proposed in Ref. 53. The PESs and coupling terms are depicted in Refs. 42,53. This set of models mimics ultrafast photodissociation processes in molecular systems.
Since they involve relatively localised coupling terms, the state-state coupling is nearly zero at short times, and the Born-Oppenheimer limit is expected to hold in the short-time dynamics of these models. Figure 4 demonstrates that the eCMMcv results are in good agreement with the exact data. The performance of eCMMcv is superior to that of eCMM in all three models.
While eCMMcv yields more accurate results than eCMM, it is also less sensitive to the value of parameter γ. This is mainly because the Born-Oppenheimer limit is satisfied in eCMMcv. Figure 4 indicates that eCMMcv performs better than Ehrenfest dynamics and FSSH for the three ultrafast photodissociation models.
Seven-Site Fenna-Matthews-Olson (FMO) Monomer
The third application is the site-exciton model of the Fenna-Matthews-Olson (FMO) monomer of green sulfur bacteria 54. The FMO monomer involves seven photosynthetic pigments (bacteriochlorophylls). When each pigment is denoted by a site or state, a seven-site model can be established. The parameters of such a typical model are described in Ref. 55. Fifty effective modes (nuclear DOFs) per site are enough for converging the simulation results. We study both the diagonal and the off-diagonal elements of the reduced density matrix for the sites (i.e., electronic DOFs) of the photosynthetic system. While the diagonal elements represent the populations of the sites (shown in Figure 5), the off-diagonal elements are the electronic coherence terms (demonstrated in Figure 6). Panels 5(e) and 6(e) imply that Ehrenfest dynamics performs poorly for the FMO monomer. In contrast, either eCMM or eCMMcv is capable of producing much more reasonable results when parameter γ is chosen appropriately. Figures 5-6 show that the overall performance of eCMMcv is slightly better than that of eCMM for the FMO monomer model. Figure 7 depicts the population dynamics of the FMO monomer at three different temperatures. It is shown that the relaxation time scale increases as the temperature decreases. It is often demanding for "numerically exact" methods to study the zero-temperature behavior of models such as the FMO monomer. The eCMM/eCMMcv approaches predict that the time scale of the oscillation of the population (of site 1) at 0 K lasts significantly longer than that at 77 K.
Atom-in-Cavity Models
Recent progress in laser techniques and optical microcavities 56 has stimulated interest in cavity-modified chemical dynamics. The atom-in-cavity benchmark models studied here are described in Ref. 42.
As demonstrated in Figure 8 and Figure 9, the results predicted by Ehrenfest dynamics or FSSH deviate significantly from exact cavity-modified chemical dynamics even at very short times. In contrast, both eCMM and eCMMcv achieve considerably better performance. Either eCMM or eCMMcv is capable of semi-quantitatively capturing the negative (positive) spike in the population of the ground (excited) electronic state around t = 1800 a.u., which corresponds to the reabsorption and re-emission by the atom of the earlier emitted photon. Figure 10(b) then shows that the steady current of the Landauer model has an "S"-shaped voltage dependence that is more distinct as the temperature decreases (or increases), and a crossover behavior exists in the relation between the steady current and the temperature for the Landauer model 2. The isomorphism indicates that the phase space approach with the comprehensive mapping Hamiltonian (such as eq 20 or eq 28) will in principle also be useful for studying the second-quantized many-electron Hamiltonian system, where both electronic and nuclear motions are involved in experimentally relevant electron transport processes.
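For orientation only (this generic single-resonant-level sketch is not the model of Ref. 2, and all parameters are invented), the qualitative features named above can be reproduced by numerically evaluating the Landauer expression I = (1/2π) ∫ T(E)[f_L(E) − f_R(E)] dE in reduced units (e = ℏ = 1): the current rises steeply once the bias window crosses the level, and the "S" shape sharpens as kT decreases.

```python
import numpy as np

def fermi(E, mu, kT):
    x = np.clip((E - mu) / kT, -60.0, 60.0)   # clip to avoid overflow warnings
    return 1.0 / (1.0 + np.exp(x))

def landauer_current(bias, kT, eps0=0.0, gL=0.05, gR=0.05):
    """I = (1/2pi) * integral of T(E) [f_L(E) - f_R(E)] dE, reduced units (e = hbar = 1)."""
    E = np.linspace(-10.0, 10.0, 20001)
    gamma = gL + gR
    T = gL * gR / ((E - eps0) ** 2 + (gamma / 2.0) ** 2)   # Lorentzian transmission
    fL = fermi(E, +bias / 2.0, kT)
    fR = fermi(E, -bias / 2.0, kT)
    return np.trapz(T * (fL - fR), E) / (2.0 * np.pi)

for kT in (0.02, 0.1, 0.5):
    I = [landauer_current(v, kT) for v in (0.1, 0.5, 1.0, 4.0)]
    print(f"kT = {kT}: I = {np.round(I, 4)}")
```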
The mapping Hamiltonian with the commutator matrix will also be useful for on-the-fly nonadiabatic dynamics 39,40. The expression of the phase space mapping approach in the adiabatic representation is available in the Supporting Information of Ref. 42. Provided that the initial condition is fixed, results of quantum nonadiabatic dynamics should be independent of the representation of the electronic basis; this important criterion is satisfied in the phase space mapping approach of this Account.
Regardless of the successful applications of the trajectory-based approximate nonadiabatic quantum dynamics approaches in the phase space formulation, a major drawback is that detailed balance 64,65 for both the electronic state and nuclear DOFs is not rigorously satisfied when the entire system reaches thermal equilibrium. To the best of our knowledge, no approximate practical nonadiabatic dynamics methods have been developed that fundamentally address this challenge for realistic systems where nuclear quantum effects should be considered. It is shown in our recent work that the sign problem is inevitable in path-based or trajectory-based approaches for obtaining the thermal equilibrium distribution of general nonadiabatic systems 66.
Although improvement strategies have been proposed (e.g., Ref. 67), it remains a challenge to employ them practically for systematically improving on mapping Hamiltonian dynamics for complex (large) nonadiabatic systems.
Since the celebrated Meyer-Miller mapping Hamiltonian model 13 was proposed, phase space mapping theory has continued to grow to meet the needs of nonadiabatic quantum dynamics of complex (large) systems. This Account highlights our recent progress: a unified framework for constructing the phase space mapping Hamiltonian 1, a general phase space formulation of quantum mechanics for nonadiabatic systems where the electronic DOFs are mapped onto constraint space and the nuclear DOFs are mapped onto infinite space 3,4,42, and an isomorphism between the mapping phase space approach for nonadiabatic systems and that for nonequilibrium electron transport processes 2. Given the considerable interest in photoexcited or external-field-modified dynamic processes, it is expected that many developments and extensions will be underway. ACKNOWLEDGMENT | 2021-11-11T06:23:49.666Z | 2021-11-10T00:00:00.000 | {
"year": 2022,
"sha1": "76a261f51370b60d16745abb033d70b328167804",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a91e07dbbadda8d18051d3b9b5bff0de5e3e55bb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3557198 | pes2o/s2orc | v3-fos-license | Intimate partner violence in urban Pakistan: prevalence, frequency, and risk factors
Background: Intimate partner violence (IPV) is an important public health issue with severe adverse consequences. Population-based data on IPV from Muslim societies are scarce, and Pakistan is no exception. This study was conducted among women residing in urban Karachi to estimate the prevalence and frequency of different forms of IPV and their associations with sociodemographic factors. Methods: This cross-sectional community-based study was conducted using a structured questionnaire developed by the World Health Organisation for research on violence. Community midwives conducted face-to-face interviews with 759 married women aged 25–60 years. Results: Self-reported past-year and lifetime prevalence of physical violence were 56.3% and 57.6%, respectively; the corresponding figures for sexual violence were 53.4% and 54.5%, and for psychological abuse 81.8% and 83.6%. Violent incidents were mostly reported to have occurred on more than three occasions during the lifetime. Risk factors for physical violence related mainly to the husband: his low educational attainment, unskilled worker status, and five or more family members living in one household. For sexual violence, the risk factors were the respondent's low educational attainment, low socioeconomic status of the family, and five or more family members in one household. For psychological violence, the risk factors were the husband being an unskilled worker and low socioeconomic status of the family. Conclusion: Repeated violence perpetrated by a husband towards his wife is an extremely common phenomenon in Karachi, Pakistan. Indifference to this type of violence against women stems from the attitude that IPV is a private matter, usually considered a justifiable response to misbehavior on the part of the wife. These findings point to serious violations of women's rights and require the immediate attention of health professionals and policymakers.
Introduction
Intimate partner violence (IPV) is the most common form of violence faced by women in both high- and low-income countries and, due to its magnitude, is recognized as a substantial public health problem. 1 One in three women worldwide is reported to experience IPV at some point in her life. 2 This violence inflicts tremendous suffering on the women affected, as well as on their children. 3,4 According to the World Health Organisation's multi-country study on violence against women in intimate relationships, the lifetime prevalence of physical or sexual violence ranges between 15% and 71%, and past-year prevalence also shows a wide variation (4%-54%), with the lowest rates found for Japan and the highest for Ethiopia, Peru, and Bangladesh. 4 There are different theoretical models that can be used to understand why violence occurs within intimate relationships. These include psychopathological, sociological, gender, and family systems theories. Sociological theories indicate that low education, economic vulnerability, stress, lack of support from authorities (healthcare services, social welfare), and a closed social network increase the risk of IPV. 5 Gender theories describe the cultural and social constructions of gender, where masculinity is associated with aggression and power, and femininity with subordination. 5,6 This, in combination with a material gender-power dimension, where men are assigned more economic and political power and where women are more dependent, increases the risk of violence. Psychopathological theories bring in individual men's interpersonal problems and functional deficits, including certain psychiatric diseases, explaining variations between individuals. Family systems theories focus on the communication, relationship, and problem-solving skills of couples in whom violence occurs. 5
For psychological violence, the prevalence was found to be at least 23% and reaching extremely high levels (.60%), 7,13,14 with a rising trend noted during the past 30 years for all three forms of violence. 10 Studies in other Asian countries have also reported high prevalence figures. In rural Vietnam, the lifetime and past-year experience of physical IPV amounted to 31% and 8%, respectively. 15 The Indian National Family Health Survey, conducted across all Indian states in 2005-2006, found that 35% of 28,139 married women reported experiencing life-time physical IPV, with or without sexual violence from their husbands, 7.9% reported both physical and sexual IPV, and 28% reported experiencing physical IPV only. 16 From eastern India, a study of 1718 married women found that 16% were exposed to physical violence and 25% to sexual violence, while 52% suffered psychological abuse in their lifetime. 17 Another study from India comprising 9938 women aged 15-49 years reported a high prevalence of physical violence (40%). 18 A study from Iran of 2400 married women found that 15% had suffered physical abuse from their husbands in the previous year, 42% sexual abuse, and 82% various degrees of psychological abuse. 19 Cultural norms in Pakistan stipulate that violence against women is not to be discussed openly. 7 To perform a large-scale community-based study on this topic demands collaboration with local health organizations, because government-run health facilities are often poorly staffed and without resources for research and surveillance studies.
The aim of this community-based study, conducted among married women living in low-and middle-income areas in urban Karachi, was to investigate the prevalence and frequency of physical and sexual violence and psychological abuse perpetrated by husbands against their wives, and any associated sociodemographic risk factors.
Study design and population
This cross-sectional study was performed in Karachi, Pakistan. Karachi has about 16 million inhabitants and forms a district within the Sindh province. 12 Karachi is further divided into 18 towns. In this study, 759 married women aged 25-60 years, living in two of the towns with approximately 720,000 inhabitants, were included. The response rate was 93.7%.
Due to the restrictive attitudes concerning women's movements and decision-making in Pakistani society, 14,20 it was necessary to link up with a health organization that maintained a surveillance system for data collection and had health workers who were known in the community. Government health facilities were initially contacted, but because they lacked resources, we were advised to contact the Health and Nutrition Development Society (HANDS). 21 HANDS is a nongovernmental organization working closely with the government health services, and provides basic health facilities, primary education, and income-generating opportunities, as well as institutions to empower communities in the low- and middle-income areas of Karachi. 21 HANDS' facilities are equipped with trained people who shoulder full responsibility for local healthcare services at the primary care level (maternal and child health, immunization, oral rehydration therapy, control of diarrheal diseases, nutrition counseling, growth monitoring, treatment of minor illnesses), and field sites have been established to follow up on these activities. Community midwives with 18 months of training are available at these facilities to provide general antenatal and postnatal care, to assist during deliveries, and to provide family planning services. 21 These midwives carried out the data collection for this study.
HANDS manages the health facilities in two major towns (Gadap and Bin Qasim), and has established 10 health field sites in these towns. For this study, six of these health field sites were randomly chosen for data collection. Many different ethnic populations reside in these towns. Socioeconomically, the population belongs mainly to the lower and middle socioeconomic strata. 22 Therefore, the data gathered from these two towns can only be generalized to the lower and middle socioeconomic groups of Karachi. 22,23
Data collection
The data collection instrument used was the Multi-country Study on Women's Health and Life Experiences questionnaire developed by the World Health Organisation for public health research, with a focus on interpersonal violence. 24 The questionnaire was developed for use in different cultures and is considered to be cross-culturally appropriate. To date, it has been used in more than 15 countries. The abuse questions were developed on the basis of a variety of other abuse assessment scales (the Index of Spouse Abuse and the Conflict Tactics Scales) with established reliability and construct validity. 25,26 This instrument was translated into Urdu, the national language generally spoken in Pakistan. A few items regarded as unacceptable in this context were excluded, such as women's alcohol consumption patterns, whether women acted as heads of the households, and whether the husband had multiple sex partners. The questionnaire went through face and content validity assessment by experts, including a psychologist, an epidemiologist, a sociologist, a community-based medical doctor, the field supervisor, a public health specialist, and the data collectors. The final questionnaire contained items addressing sociodemographic and psychosocial factors, general and reproductive health, different forms of violence, its frequency, and any health effects of the violence inflicted.
The data were collected by community midwives employed by HANDS in March-August 2008, using a multistage random sampling technique in the selected area (Figure 1). In each field site, and via the surveillance system set up by the community midwives, the required number of households was randomly selected (using computer-generated numbers from Epi Info™) from a list of all households in which women of the required age resided. Ten women refused to participate in the initial stage of the interview and were replaced by a neighboring woman of the same age. A further 41 women decided to discontinue the interview half-way through and were not replaced, which gave a dropout rate of 6.3%. In a household with more than one eligible woman, only one woman was selected, by asking the youngest and the oldest alternately. Information related to the husbands was obtained from the women and relates only to the current husband.
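The household-selection logic lends itself to a short illustration. Below is a minimal Python sketch of the multistage draw described above; the study itself used computer-generated numbers from Epi Info, so the function names, household IDs, and per-site quota here are purely hypothetical.

```python
# Illustrative sketch of the multistage household sampling described above.
# Household IDs, site names, and the per-site quota are hypothetical.
import random

def select_households(household_ids, n_required, seed=1):
    """Randomly draw the required number of households from the site list."""
    rng = random.Random(seed)
    return rng.sample(household_ids, n_required)

def select_woman(eligible_women, pick_youngest):
    """In a household with more than one eligible woman, alternate between
    the youngest and the oldest, as in the study protocol."""
    ordered = sorted(eligible_women, key=lambda w: w["age"])
    return ordered[0] if pick_youngest else ordered[-1]

# Example: 6 field sites, each contributing an equal share of ~810 approached women
site_lists = {f"site_{i}": list(range(i * 1000, i * 1000 + 500)) for i in range(6)}
sampled = {site: select_households(ids, 135) for site, ids in site_lists.items()}
print(sum(len(v) for v in sampled.values()))  # 810 households approached
```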
Sample size calculation
In order to detect a 1.6-fold increase in risk of physical, sexual, and/or psychological violence and abuse with 80% probability and an estimated 20%-30% prevalence rate in the study sample, we calculated that we needed a sample size of about 664 individuals. It was decided to aim for 800 respondents, and 810 were approached. In total, 759 women were included in the study.
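For readers who want to reproduce this kind of calculation, the following is a minimal Python sketch using a two-proportion (normal-approximation) power analysis. The paper does not state its exact assumptions (exposure ratio, continuity or design-effect corrections), so the outputs are illustrative and will not necessarily reproduce the figure of 664.

```python
# A minimal sample-size sketch: detect a 1.6-fold increase in risk with
# 80% power at a 20%-30% baseline prevalence, via a two-proportion test.
# The paper's exact assumptions are not stated; numbers are illustrative.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def n_per_group(p_unexposed, odds_ratio, alpha=0.05, power=0.80):
    odds = p_unexposed / (1 - p_unexposed) * odds_ratio
    p_exposed = odds / (1 + odds)                      # convert OR back to a proportion
    h = proportion_effectsize(p_exposed, p_unexposed)  # Cohen's h effect size
    return NormalIndPower().solve_power(effect_size=h, alpha=alpha, power=power)

for p0 in (0.20, 0.25, 0.30):
    print(p0, round(n_per_group(p0, 1.6)))
```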
Training of data collectors
Six community midwives received training for one week, conducted by the main author of this study and a psychologist, in collaboration with members of the Women Lawyers' Association (a nongovernmental organization that supports women's legal rights) and HANDS. The training included the rationale behind the study, the known prevalence and causes of IPV, women's vulnerability, ethical considerations, and communication and interview skills. Two of the interviewers dropped out during the training period, and four data collectors continued.
Each interview was conducted in the local language, Urdu. The study was presented as a women's health study to the household members, and not until the conversation was safe from being overheard were any sensitive questions asked. The interviews were conducted in the respondent's home, where privacy could be ensured, otherwise at a nearby school or HANDS facility. To ensure quality of the data, about 5% of the participants were reinterviewed at random, and only minor differences were detected in the responses given.
Variables

Dependent variables
IPV is defined as any act of physical, sexual, or psychological abuse by a current or former partner, whether cohabiting or not. 4 Physical violence was measured as moderate (slapping, throwing things, pushing, shoving) or severe (hitting, kicking, dragging, beating, choking, burning). Sexual violence was defined as being coerced to perform sexual acts against the woman's will and being physically forced into sexual intercourse by the husband. Psychological abuse was measured as insulting the woman or making her feel bad about herself, belittlement or humiliation in front of others, doing things to scare or intimidate her on purpose, and threats to hurt her or someone she cared about. Lifetime exposure to violence after marriage was assessed through individual items covering acts of violence, which formed composite measures for physical, sexual, and psychological violence, respectively, along with their frequency (how often each had occurred). Past-year exposure was obtained as a summary measure only of the different forms of violence and not by individual items. For bivariate and multivariate analyses, the dependent variables were dichotomized into experience of violence as opposed to no experience of physical or sexual violence or psychological abuse, respectively.
Independent variables
Sociodemographic variables were analyzed as independent risk factors. Age was divided into three groups and later dichotomized into younger and older age groups (25-35 years and 36-60 years). Educational attainment was grouped into no education, primary (up to eight years), secondary schooling (9-10 years), intermediate (11-12 years), and higher education (at least 13 years); for multivariate purposes, education was dichotomized into no formal education as opposed to any length of schooling. The employment status of the husbands and wives was dichotomized into being employed or not. Those who were in paid employment were further categorized as unskilled workers (eg, construction, messenger, landlord, farmer, watchman, servant, shopkeeper), skilled workers (eg, fisherman, gardener, carpenter, trader, driver, tailor), and low- and medium-level professionals (eg, soldier, police officer, teacher, health professionals, receptionist, secretary, lady health visitor, school teacher). This variable was further dichotomized into skilled workers (including the professionals) and unskilled workers.
The socioeconomic status variable was constructed from a list of household assets. Each respondent marked the assets available in the household, and these assets were assigned different weights according to how common they were in households and their market price, eg, electricity, radio, and/or television (rated as 1), telephone and/or computer (2), and refrigerator and/or air conditioner (3). The weightings were determined by a team of researchers from the Aga Khan University with experience of conducting community-based studies. The weights were summed and divided into quartiles. Families up to the 25th centile were rated as being of low socioeconomic status, and each successive quartile was rated as lower-middle, upper-middle, and high socioeconomic status, respectively. Socioeconomic status was further dichotomized into low socioeconomic status as the exposure category versus middle and upper socioeconomic status. This way of grouping households into different socioeconomic status groups has also been used by other studies in this area. 27,28
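As a concrete illustration of this scoring scheme, here is a minimal Python sketch: weights of 1, 2, or 3 per asset are summed per household, and the totals are cut into quartiles. The asset names follow the examples above, but the data frame and column names are hypothetical.

```python
# Sketch of the household asset score described above: per-asset weights are
# summed per household and the sums are cut into quartiles. Data are made up.
import pandas as pd

ASSET_WEIGHTS = {
    "electricity": 1, "radio": 1, "television": 1,
    "telephone": 2, "computer": 2,
    "refrigerator": 3, "air_conditioner": 3,
}

def asset_score(owned_assets):
    return sum(ASSET_WEIGHTS[a] for a in owned_assets)

households = pd.DataFrame({
    "assets": [["electricity"], ["electricity", "radio", "television"],
               ["electricity", "telephone", "refrigerator"],
               ["electricity", "computer", "refrigerator", "air_conditioner"]],
})
households["score"] = households["assets"].map(asset_score)
# Quartile labels: the lowest quartile is rated "low" SES, and so on upward.
households["ses"] = pd.qcut(households["score"], 4,
                            labels=["low", "lower-middle", "upper-middle", "high"])
# Dichotomized exposure: low SES versus middle/upper SES
households["low_ses"] = households["ses"] == "low"
print(households[["score", "ses", "low_ses"]])
```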
The number of children was grouped into five categories, ie, 0, 1-2, 3-4, 5-6, and ≥7. This variable was thereafter dichotomized into 0-4 children as opposed to ≥5. The number of family members was measured as those living together and sharing one kitchen in a household. This variable was dichotomized by family size: one to four members was considered the reference and ≥5 the exposure category.

Statistical analysis

SPSS (v 10.0; SPSS Inc, Chicago, IL) was used for all statistical calculations. 29 Odds ratios (OR) with 95% confidence intervals (CI) were used in the bivariate and multivariate analyses to estimate associations between sociodemographic variables and lifetime exposure to all three forms of violence. Statistically significant variables in the bivariate analyses were entered into the multivariate model, one at a time. Final models are displayed.
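A minimal sketch of this analysis pattern, odds ratios with 95% CIs from logistic regression, is shown below. The study itself used SPSS; the Python/statsmodels version, the variable names, and the synthetic data are illustrative only.

```python
# Sketch of the bivariate/multivariate analysis described above: ORs with
# 95% CIs from logistic regression. Variable names and data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 759
df = pd.DataFrame({
    "physical_violence": rng.integers(0, 2, n),     # dichotomized outcome
    "husband_no_education": rng.integers(0, 2, n),  # dichotomized exposures
    "low_ses": rng.integers(0, 2, n),
    "older_age": rng.integers(0, 2, n),
})

model = smf.logit(
    "physical_violence ~ husband_no_education + low_ses + older_age", data=df
).fit(disp=False)

# Exponentiate coefficients and CI bounds to get adjusted ORs with 95% CIs
ors = pd.concat([model.params, model.conf_int()], axis=1)
ors.columns = ["coef", "ci_low", "ci_high"]
print(np.exp(ors))
```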
Ethical considerations
The ethical principles of violence research defined by the World Health Organization were strictly followed. 30 All respondents were informed about their free choice to participate and to withdraw whenever they wished during the research phase. Data collectors secured written consent from all respondents before the interview. Those women who disclosed experiences of violence and expressed a need for support were referred to the Pakistan Women Lawyers Association and the Women's Social Security Department, Government of Pakistan, a social welfare department for women located in the Sindh secretariat, where counseling is given by female lawyers and social workers, who further offer support in divorce cases and provide income-generation schemes to victims of violence. The study was approved by the Institutional Ethical Review Committee of Aga Khan University in Karachi, Pakistan. Linking up with the HANDS organization secured the data collection process, because unfamiliar women introducing themselves as data collectors would hardly have been accepted by the families. Furthermore, data collectors unfamiliar to the households might have been put at personal risk. The women who participated in the study were provided with referrals to mental health professionals and lawyers for free-of-cost consultations. Moreover, the lawyers also gave women in the community awareness sessions on women's rights.
Sociodemographic pattern
Of the participating women, about half had no formal education (47.6%) and the majority of them were housewives (Table 1). Of the male spouses, 36.2% had no formal education.
Forms of violence
Of the 759 women, 57.6% reported a lifetime experience of physical violence; of these, 54.2% reported severe incidents of physical violence (Table 2), and 56.3% reported past-year exposure to physical violence. For sexual violence, the corresponding figures for lifetime and past-year prevalence were 54.5% and 53.4%. For psychological violence, the corresponding figures were 83.6% and 81.8%, respectively. In the majority of cases, violence was experienced as repeated acts, ie, more than three times per year (see Table 2 for detailed prevalence figures). The different forms of violence and their overlapping nature are shown in detail as a Venn diagram of lifetime exposure in Figure 2. The most commonly occurring single form was psychological violence (19.1%). An overwhelmingly large group (43.9%, n = 333) reported all three forms of violence in their lifetime, and 87.1% (n = 661) reported exposure to any kind of violence.
Associations with sociodemographic and psychosocial factors
Poor socioeconomic life circumstances constituted the main risk factor for all forms of lifetime violence (Table 3). Older women were more at risk of physical and sexual violence than their younger counterparts, with an OR of 1.65 (95% CI 1.23-2.23). Physical and sexual violence were associated with almost identical risk factors, ie, no formal education for either the woman or the husband, older age of the husband, more than five children in the family, and living in an extended family setup, as compared with having fewer children and living in a smaller family, respectively (Table 3). Statistically significant risk factors for psychological abuse were the husband having no formal education (OR 2.21, 95% CI 1.41-3.47), the husband being an unskilled worker or unemployed (OR 3.18, 95% CI 2.15-4.71) and, linked to this, low socioeconomic status of the family (OR 2.21, 95% CI 1.37-3.54). The educational level of the husband had a statistically significant association with all three forms of violence over the lifetime. Analyses of risk factors for past-year experience of any form of violence were carried out, but are not shown in the tables because the results were almost the same as for lifetime exposure.
Discussion
The results of this study revealed extremely high lifetime and past-year prevalence rates, and also a high frequency of all forms of IPV against women belonging to the lower and middle income strata in Karachi. The picture that evolves is that psychological abuse seems to be present in more than 80% of the families. Furthermore, the prevalence figures for physical and sexual violence are of similar size; more than 50% of the population in this study reported such experiences, and 44% reported exposure to all three forms of violence. Our findings point to poor life circumstances contributing to IPV in this setting, including low occupational status of the husband, low family socioeconomic status, too many children, and living with extended family.
The major strength of our study was its community-based nature, and the respondents having been selected by random sampling. Furthermore, it comprised a comparatively large sample from a country where violence in the family is not discussed or questioned openly. In addition, a well-known instrument was used for data collection, and the response rate was extremely high (93.7%). It was possible to reach out to individual women because data collection was done by community midwives who were well trusted in the community. This trust was essential because IPV is an extremely sensitive topic in Pakistan, where it is generally considered an inappropriate subject for a woman to discuss with a stranger.
One of the weaknesses in our study is that the two towns selected for this study comprised people only from the lower and middle socioeconomic strata, and failed to reach the upper socioeconomic strata. However, we do consider the data to be valid and representative of similar socioeconomic areas in Karachi, because the population was carefully selected at random in a multistage procedure. There is reason to believe that violence against women is even more common in rural areas, squatter settlements, and the suburbs, due to extremely low educational attainment levels and poverty amongst both men and women.
A further weakness is that we were not able to acquire specific data on past-year violence exposure. The data collectors asked for detailed information on acts of violence and their frequency only for lifetime experience. Past-year prevalence was inquired about as a summary ("has any of this happened in the past year?") for physical, sexual, and psychological violence. Past-year prevalence data are often thought to be a more reliable assessment of IPV than events occurring over the lifetime, because of less recall bias. 12,15,31 However, past-year prevalence figures were close in magnitude to lifetime figures in our study, which we interpret as violence faced by women in Pakistani families being ongoing year by year, with few women being able to obtain a divorce as a way to end the violence. Support for this assumption also comes from recent focus group discussions with women living in the same area (unpublished data). It is also a fact that the women, due to continuous exposure to different forms of violence and abuse, may have difficulty in precisely differentiating recent events from more distant violence experiences. The fact that community midwives performed the data collection does, however, increase the likelihood of accurate estimates, because trust and confidence were established. Another limitation of a cross-sectional study is that it is not possible to establish causal relationships.
The high prevalence figures found for past-year and lifetime exposure of all three forms of violence can be understood in the light of the fact that women's opportunities to end the violence are few. This is due to perpetration of violence being considered as normal male behavior. The subordinate role of women in the society and family allows the violence to continue and keeps divorce rates low, especially among the low-and middle-income groups. 9 The prevalence of violence in our study was higher than that found in studies conducted in Vietnam, India, and Bangladesh, 15,16,32 but similar to findings from Iran, specifically for sexual and psychological violence. 19 This might be due to the higher level of gender inequality among low-and middle-income women in Pakistan, who generally accept violence within marriage and poor life circumstances, but also due to a high level of trust in community midwives that made disclosures possible.
The multivariate analyses confirmed that low education and low occupational status of the husband were important risk factors for physical violence and perpetration of psychological abuse, but lack of formal education in women was only an important risk factor for sexual violence. In one of the earlier studies from Vietnam, we also noted that male factors (low educational attainment, poverty) were risk factors for partner violence against women. 15 This is in line with what has also been found in other sociological and public health studies. 4,16,33 Striving for job security can create conflict and stress among men of low educational achievement. Rather than using any other
coping strategy, violence towards the wife may be used as a stress reliever. 34 Low level of education in women as a risk factor for IPV exposure has been explained as being linked to a higher degree of acceptance of traditional gender roles than would be the case with better educated women, and thereby less ability to withstand such violence. 35 The Iranian study similarly identified that illiterate and unemployed women were at a higher risk of violence. 19 These findings emphasize the importance of education for both men and women. However, some studies from other countries 32,34,36,37 have shown that better educated women sometimes face an increased risk of experiencing IPV, but this may be of a temporary nature.
Large family size was also identified as a risk factor for IPV. This can be explained by the fact that when the number of people in a household increases, financial stresses and miscommunication also increase, and this may result in violence towards the wife. 32,38 Another study from Karachi also supports this finding, in that the presence of in-laws was found to be a risk factor for violent perpetration, and not only by the husband. 13 The woman's age was not identified as a statistically significant risk factor for any of the forms of violence when controlled for in the multivariate analyses. However, there were indications in the bivariate analysis that older age could be a risk factor for physical and sexual violence. This can be interpreted as being due to the fact that violence against women in Pakistan is ongoing year-by-year, and older women will be more exposed over their lifetime.
Socioeconomic status was, in this study, a statistically significant factor for sexual violence and psychological abuse, which is in line with findings from other studies. 15,39 This finding illustrates that within those families that are most vulnerable in terms of low education and low socioeconomic status, violence occurs more commonly. As has already been explained, this may be due to high stress levels, mirroring difficulties in managing everyday life, particularly in men, who are viewed as the main breadwinners. 40
Conclusion
The prevalence of all forms of IPV being perpetrated in the lifetime was extremely high in the low- and middle-income strata in Karachi. Married women face this violence repeatedly. Sociodemographic factors were identified as contributing to the occurrence of this type of violence, with those having the least resources being most affected. The institutionalized and serious gender inequality accepted as a normal part of daily life by both women and men has contributed to the present situation. Few women are able to act on this by getting a divorce, because a single woman's chances of living a decent life and taking care of her children alone are extremely limited.

Table 4 Associations between sociodemographic and psychosocial variables with lifetime physical, sexual, and psychological violence, final models, presented as adjusted odds ratios with 95% confidence intervals (n = 759 married women)

This situation requires serious and urgent attention at all levels of societal organization, by policymakers, political stakeholders, and professionals. Policy initiatives are needed, as are legal actions, to criminalize men's violence against women. Basic education needs to be made available for both girls and boys, with special attention placed on female education. Gender equality teaching and training should be included at different levels in the school curriculum. Healthcare staff and social authorities need training on the identification, counseling, management, and prevention of violence against women. Training of nurses and medical doctors in counseling of young couples for the prevention and management of IPV should be part of their basic education. Mass media involvement is necessary to create a debate on such gender discrimination practices and to encourage women's empowerment in society and in the family.
"year": 2011,
"sha1": "832112fb61cc7c868388bc6ab177b68f871f1bf9",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=9287",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc273c9fb11bf64fffdaf43ee80f3b186706656d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Glycolysis Inhibitor 2-Deoxy-d-Glucose Exerts Different Neuronal Effects at Circuit and Cellular Levels, Partially Reverses Behavioral Alterations and does not Prevent NADPH Diaphorase Activity Reduction in the Intrahippocampal Kainic Acid Model of Temporal Lobe Epilepsy
Temporal lobe epilepsy is the most drug-resistant form of epilepsy, with the highest incidence among the focal epilepsies. Metabolic manipulations are of great interest, glycolysis inhibitors such as 2-deoxy-d-glucose (2-DG) being among the most promising interventions. Here, we sought to investigate the effects of 2-DG treatment on cellular- and circuit-level electrophysiological properties using patch-clamp and local field potential recordings, on behavioral alterations such as depression and anxiety behaviors, and on changes in nitric oxide signaling in the intrahippocampal kainic acid model. We found that epileptic animals were less anxious and more depressed, with more locomotor activity. Interestingly, when the effect of increased locomotor activity on the parameters of the zero-maze test was masked, no altered anxiety behavior was noted in epileptic animals. However, 2-DG could partially reverse the behavioral changes induced by kainic acid. The findings also showed that 2-DG treatment partially suppresses cellular-level alterations while failing to reverse circuit-level changes resulting from kainic acid injection. Analysis of NADPH-diaphorase-positive neurons in the CA1 area of the hippocampus revealed that the number of positive neurons was significantly reduced in dorsal CA1 of the epileptic animals, and 2-DG treatment did not affect the diminishing effect of kainic acid on NADPH-d+ neurons in the CA1 area. In the control group receiving 2-DG, however, an augmented NADPH-d+ cell number was noted. These data suggest that 2-DG cannot suppress epileptiform activity at the circuit level in this model of epilepsy and, therefore, may fail to control the seizures in temporal lobe epilepsy cases.
Introduction
Epilepsy is one of the most complicated neurological diseases that is characterized by neuronal hyperexcitability and sudden, simultaneous discharges which appear as seizures.
Almost 1% of the general population is diagnosed with epilepsy, and about 40% of cases are pharmacoresistant. Temporal lobe epilepsy has the highest incidence among the other types of epilepsy and exhibits considerable resistance to antiseizure agents [1]. Epilepsy is associated with many behavioral disorders, including anxiety and depression, as reported in epileptic patients and animal models; therefore, introducing novel and potent approaches and compounds to control seizures and behavioral comorbidities is of paramount importance.
Among many novel strategies, metabolic manipulations have widely attracted attention, as the ketogenic diet (KD), a low-carbohydrate, high-fat diet, has been shown to be effective in many drug-resistant cases. KD acts through a great many mechanisms, the most important of which is bypassing glycolysis, with subsequent attenuation of the lactate shuttle between neurons and astrocytes [1] and, crucially, a decreased cytosolic ATP concentration. Lactate dehydrogenase inhibition, which results in direct inhibition of glycolysis, has been demonstrated to suppress interictal discharges in the intrahippocampal kainic acid model of temporal lobe epilepsy, which is one of the most drug-resistant models [2].
2-Deoxy-d-glucose (2-DG) is another glycolysis inhibitor that has been studied over the last few years for its possible therapeutic actions [3-7]. Interestingly, however, both proconvulsant and anticonvulsant effects have been observed, depending on the method of epilepsy induction in animals. For instance, in i.v PTZ, i.v kainic acid, and electroshock-induced seizures, 2-DG decreased the seizure threshold, while in the 6-Hz seizure test it led to a seizure threshold increment [5]. Furthermore, in pilocarpine-induced epilepsy, while elevating seizure latency, 2-DG diminished seizure duration and severity [8]. Likewise, in in-vitro models like high [K]o, 2-DG reduced interictal epileptiform activity [9]. 2-DG is of interest not only for its possible anticonvulsive implication but also for its potency in the inhibition of cancerous cell growth. Indeed, it is already in clinical use for treating SARS-CoV-2 [10] and has had promising results in the suppression of cancerous cell growth in clinical trials [11]. Therefore, owing to the major inconsistency between the effects of 2-DG in different epilepsy models, and, more importantly, given that models like i.v kainic acid cannot be considered focal temporal lobe epilepsy [12], more preclinical studies are required to establish the extent of 2-DG's potency in suppressing the seizures and behavioral comorbidities in a model that may potentially resemble human temporal lobe epilepsy.
The intrahippocampal kainic acid model of temporal lobe epilepsy, which is deemed an appropriate simulator of human temporal lobe epilepsy due to the hippocampal sclerosis seen in this model [13], has long been used to assess the therapeutic effects of nominated compounds to control the seizures and/or treat epilepsy. Although a previous study reported that comorbid psychiatric symptoms, including anxiety and depression behaviors, are not significantly different between control and epileptic animals in this model [14], the severe cell loss in the dorsal hippocampus and the general ipsilateral hippocampal deformation raise the question of whether follow-up tests would yield different results. Previous lesion studies have posited a link between dorsal and ventral hippocampus lesions, which are seen in this model, and diminished anxiety levels [15]. Additionally, a shrunken hippocampus is associated with depression behavior [16].
Considering previous observations, the present study aimed to further explore the potency of 2-DG in reversing the electrophysiological (at the circuit and cellular levels), behavioral, and histochemical consequences of intrahippocampal kainic acid injection in mice, a model that bears a striking resemblance to human temporal lobe epilepsy. We hypothesized that 2-DG can suppress the behavioral (if present), electrophysiological, and histochemical alterations that follow temporal lobe epilepsy induction.
To test this hypothesis, local field potential (LFP) and patch-clamp recordings were used to assess the circuit- and cellular-level effects of 2-DG, respectively. Interictal epileptiform activity is a known characteristic of the intrahippocampal kainic acid model of temporal lobe epilepsy, and suppression of such activity has been widely used to assess the efficiency of different antiseizure candidates [17]. Burst activity, as well as hyperexcitability of surviving neurons, is noted in the intrahippocampal kainic acid model of temporal lobe epilepsy [18,19]; we evaluated the passive membrane properties and spontaneous activity of the surviving dorsal CA1 pyramidal neurons and how 2-DG affected the alterations induced in this model. Moreover, we addressed how behavioral comorbidities like anxiety and depression, if present, were affected by 2-DG injection. The zero-maze test was used to evaluate the anxiety status of the animals, and an open field test was performed to assess locomotion status as well as fear/anxiety behavior. The sucrose preference test, an indicator of anhedonia (the core symptom of depression) [20,21], was performed to evaluate depression in the animals. Furthermore, since the nitrergic system is involved in the regulation of excitatory and inhibitory neurotransmission and becomes imbalanced during epileptogenesis [22,23], we also evaluated alterations in NOergic neurotransmission by using NADPH-diaphorase staining.
As Duveau and colleagues reported in 2016 [17], most of the neuropathological and electrophysiological features that are seen in human MTLE can be reproduced in the kainic acid mouse model of TLE. Therefore, this animal model of TLE is deemed a potent simulator of human temporal lobe epilepsy owing to the severe cell loss and sclerosis noted in the hippocampus and, more importantly, the chronic spontaneous seizures following kainic acid injection. Here, unilateral injection of kainate into the mouse hippocampus was used to evaluate the beneficial effects of 2-DG against TLE; 2-DG is already in clinical use for the suppression of cancerous cell growth and covid-19 treatment, and it has also been in various stages of preclinical and clinical development for epilepsy for several years. The present work tried to extend the preclinical profile of 2-DG to better understand how this agent can exert beneficial effects. If potent, inhibition of glycolysis by 2-DG could hence be considered as a therapeutic intervention for the treatment of epileptic patients with drug resistance.
Animals
This study was carried out on 64 adult male NMRI mice (weighing 30-35 g; Pasteur Institute, Tehran). The animals were housed with free access to a standard pellet diet and tap drinking water ad libitum. They were kept in a temperature-controlled (23 ± 2 °C) animal house, free from any source of chemical or noise pollution, under a 12:12 h light:dark cycle. All animals received humane care and gentle handling throughout the study, as proper handling techniques and frequency have been shown to reduce stress and anxiety [24,25]. Mice were single-housed after the surgery; although social housing is deemed the optimal way of housing, previous studies showed that single housing does not significantly affect behavioral tests in mice [26]. Hence, we single-housed the mice in order not to arouse aggression, especially in epileptic animals. All experimental procedures and animal care were conducted in accordance with the National Institutes of Health guidelines for the care and use of laboratory animals.
Study Design
The present study intended to investigate the behavioral, electrophysiological and histochemical consequences of glycolysis inhibition on the intrahippocampal kainic acid model of temporal lobe epilepsy. Three separate groups of experiments were conducted to assess the effects of glycolysis inhibition by 2-DG on: (1) Kainic acid-induced hyperexcitability in CA1 pyramidal neurons using patchclamp recording, (2) Local field potential (LFP) recordings to measure epileptiform activity, and (3) Behavioral tests to assess the locomotor activity, anxiety and depression behaviors. To assess histological alterations, however, the animals were randomly chosen from the animals which had undergone behavioral tests.
Kainic acid was stereotaxically microinjected into the dorsal hippocampus of the left hemisphere (day 0, for more details, see the epilepsy induction section) and 2-DG (300 mg/kg, i.p) was injected 3 weeks after IHKA injection for 7 days (Fig. 1) and the last injection was given 90 min before starting the experiments. This time was chosen based on the study done by Koenig et al., 2019, who reported that 90 min after 2-DG injection, induction of ketosis is observed [27].
The dose of 2-DG was chosen based on previous studies that report the anticonvulsant and antiepileptic action of 2-DG [5,9].
The animals were randomly divided into the following five groups: Control group, in which the mice underwent stereotaxic surgery and received an intrahippocampal saline (40 nL) injection (as the solvent for kainic acid) on day 0; Control + 2-DG group, in which the mice received intrahippocampal saline (40 nL) on day 0 and i.p injections of 300 mg/kg 2-DG from day 20 to day 28, once a day; Epileptic group, in which the mice received an intrahippocampal injection of kainic acid on day 0; Epileptic + 2-DG group, in which epileptic mice were treated i.p with 2-DG at a dose of 300 mg/kg from day 20 to day 28, once a day; and Epileptic + saline group, in which epileptic mice received 300 mg/kg i.p saline from day 21 to day 27, once a day.
The order of the behavioral tests was as follows: zero maze, open field and sucrose preference test on the 20th, 21st, and 22nd days respectively in control and epileptic groups; in epileptic + 2-DG and control + 2-DG groups, however, the tests were performed on the 26th, 27th, and 28th days respectively (Fig. 1A). Since in the behavioral study, we did not find a significant difference between control groups and control + 2-DG groups (for more details see results), and according to the animal ethics guidelines to minimize the number of animals used, this group was omitted in LFP and patch-clamp recordings. Due to the fact that saline and handling did not lead to a significant alteration in LFP groups, we omitted epileptic + saline groups from patch-clamp recordings (Fig. 1C). It should be noted that according to previous studies in our laboratory, saline and surgery do not lead to considerable variation between the groups; hence, the study does not contain intact and sham groups [28].
Epilepsy Induction
Temporal lobe epilepsy was induced as previously described by Sada et al. [2]. Briefly, mice were anesthetized by intraperitoneal injection of ketamine (100 mg/kg) and xylazine (10 mg/kg) and fixed in the stereotaxic frame. Then, 0.8 nmol kainic acid dissolved in 40 nL normal saline was directly injected into the left dorsal hippocampus (−1.6 mm relative to Bregma, 1.6 mm from the midline, and 1.2 mm deep from the dura mater) according to the atlas of Paxinos and Franklin (2001). Because the status epilepticus is non-convulsive in this model, model induction was confirmed by frequent interictal epileptiform activity (sharp-wave complexes) (see below for details) as well as severe cell loss in the dorsal CA1 pyramidal cell layer (see Nissl Staining) (Fig. 2C, D). After the experiments, the anesthetized animals were decapitated and the brains were dissected out for injection-site verification (Fig. 2A).
Open Field Test
To measure locomotor activity, animals were placed in the center of an open field (35 × 35 × 35 cm) after half an hour of habituation to the experimental room. During the following ten minutes, distance moved and velocity were analyzed by EthoVision XT 11 software. To assess thigmotaxis, the tendency of an animal to stay near the walls of the open field, a center zone (15 × 15 cm) was defined and the time spent in the center was analyzed. The arena was cleaned with 85% ethanol between trials.
Zero-Maze Test
Anxiety behavior was evaluated using the zero-maze apparatus [29]. The apparatus (60 cm in diameter, 5 cm wide) is an elevated circular runway divided into two open and two enclosed quadrants; the time spent in the open arms, the number of open-arm entries, the latency of the first open-arm entry, and the frequencies of head dipping and body stretching were recorded.
Sucrose Preference Test
Rodents are known to prefer sweet water to tap water. When suffering from depression-like states, however, they tend to consume less sweet water relative to tap water than under normal conditions [21]. To perform the test, the animals were given access to two tap water bottles for 24 h as habituation in their home cage. During the next 24 h, both bottles were replaced with two new bottles, one containing 3% sucrose solution and the other filled with tap water. As the diameter of the drinking hole had been noted to influence the amount of water consumed by the animal [31], the holes were all equalized in size using a 2 mm drill. The proportion of sweet to tap water consumption was calculated afterwards.
Local Field Potentials (LFP) Recordings
Mice underwent stereotaxic surgery to record local field potentials. They were anesthetized with an intraperitoneal injection of ketamine (100 mg/kg) and xylazine (10 mg/kg). The ear bars were placed delicately prior to muzzle fixation. Lidocaine 2% was injected under the scalp skin 5 min before making an approximately 2 cm incision in the skin. Following Bregma-Lambda adjustment to a plane level, three holes were made with a fine drill. To prepare electrodes, two stainless steel wires (127 μm in diameter, A.M. Systems Inc., USA) were intertwined to give the electrode suitable strength and flexibility. The electrode was then soldered to a connector and placed in the dorsal hippocampus (−2.1 mm AP, 1.5 mm ML, 1.2 mm DV). Six screws (one serving as the reference electrode above the cerebellum) were fixed to the skull. Lastly, dental cement was used to fix the electrodes. The LFP signals were continuously recorded for 13 h at a 1 kHz sample rate and low-pass filtered at 250 Hz while the animals were freely moving. Interictal epileptiform discharges were defined as sharp waves having more than twofold amplitude compared with baseline, as well as a frequency between 1 and 20 Hz. The discharges were detected and analyzed with MATLAB 2016 software. At the end of the experiments, brains were removed to verify the proper placement of the electrode (Fig. 2B).
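A simple way to picture the stated detection criteria (1-20 Hz band, amplitude more than twice baseline) is the following Python/scipy sketch. The original analysis was done in MATLAB 2016, so details such as the baseline estimator and filter order here are assumptions, not the authors' implementation.

```python
# Sketch of the interictal discharge detection described above: LFP sampled at
# 1 kHz, band-limited to 1-20 Hz, events taken as deflections above twice the
# baseline amplitude. Baseline estimator and filter order are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000  # Hz, LFP sampling rate

def detect_sharp_waves(lfp, fs=FS, low=1.0, high=20.0, k=2.0):
    sos = butter(4, [low, high], btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, lfp)
    baseline = np.median(np.abs(filtered))      # robust baseline amplitude
    above = np.abs(filtered) > k * baseline     # > twofold baseline
    # Return the times (s) at which threshold crossings begin
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)
    return onsets / fs

# Example on synthetic data: noise plus one injected sharp deflection
t = np.arange(0, 10, 1 / FS)
lfp = np.random.default_rng(1).normal(0, 1, t.size)
lfp[2000:2050] += 8.0
print(detect_sharp_waves(lfp)[:5])
```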
Patch-Clamp Recording
To investigate the possible cellular-level effects of epilepsy induction by kainic acid on the electrophysiological properties of hippocampal CA1 pyramidal neurons, and whether 2-DG can reverse these possible alterations, whole-cell patch-clamp recording was performed as follows. Briefly, the animals were deeply anaesthetized with ether and then decapitated. The brains were removed immediately and placed in ice-cold artificial CSF (ACSF) containing (in mM): 206 sucrose, 2.8 KCl, 1 CaCl2, 1 MgCl2, 2 MgSO4, 1.25 NaH2PO4, 26 NaHCO3, and 10 d-glucose, saturated with 95% O2 and 5% CO2 (pH 7.3-7.4; 300 mOsm). Transverse slices (300 μm) were cut using a vibroslicer (7000 smz-2, Campden Instruments Ltd, UK). Slices were placed in a holding chamber containing ACSF composed of (in mM): 125 NaCl, 2.5 KCl, 1.5 CaCl2, 1.25 NaH2PO4, 25 NaHCO3, and 10 d-glucose (pH 7.4, 300 mOsm) for at least 60 min at 32-35 °C. The slices were kept at room temperature (23-25 °C) before transfer to the recording chamber. After incubation for at least 1 h, each slice was individually transferred to a submerged recording chamber on the stage of an upright microscope (BX51WI, Olympus) and continuously superfused with oxygenated ACSF at a rate of 2-3 ml/min at 23-25 °C. Patch pipettes (borosilicate glass capillary, 1.5 mm O.D., 0.86 mm I.D.) were pulled with a PC10 two-stage vertical puller (Narishige, Japan). The pipettes' resistance was 3-6 MΩ when filled with an internal solution containing (in mM): 135 potassium gluconate, 10 KCl, 10 HEPES, 1 MgCl2, 2 Na2ATP, and 0.4 Na2GTP. The pH of the internal solution was set to 7.3 with KOH, and the osmolarity was adjusted to 290 mOsm. Whole-cell patch-clamp recordings were performed using a Multiclamp 700B amplifier equipped with a Digidata 1322A data acquisition board and pClamp 9 software (Axon, Molecular Devices, CA, USA). All recordings were made from CA1 pyramidal neurons in current-clamp mode. The recordings were filtered at 5 kHz, sampled at 10 kHz, and stored on a personal computer for offline analysis. The data were analyzed offline using Clampfit version 11.2 (Molecular Devices) and MATLAB 2016 software.

Fig. 2 A The injection site of kainic acid or saline. B Electrode location in LFP group animals. C Normal dorsal hippocampus (control). D Severe cell loss in CA1 and, to a lesser extent, in CA3, as well as a swollen dentate gyrus in the kainic acid-treated hippocampus compared to the control hippocampus. Scale bar: 300 µm
The passive electrical properties of the CA1 pyramidal neurons were measured by applying hyperpolarizing current pulses (− 50 to − 400 pA, 800 ms). The resting membrane potential was recorded after the initial break-in of the cell membrane. To obtain input resistance, the current-voltage curve was drawn and its slope was measured as the resistance using the first four sweeps (0 to − 150 pA). The membrane time constant (tau) was evaluated by the exponential fitting of capacitive voltage relaxation. Further, membrane capacitance was obtained by dividing the time constant by the input resistance.
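These three calculations chain together directly, as the following minimal Python sketch shows: input resistance from the slope of the I-V curve, tau from an exponential fit of the voltage relaxation, and capacitance as tau divided by resistance. The traces and numbers are synthetic; the study used Clampfit/MATLAB for the actual analysis.

```python
# Sketch of the passive-property calculations described above, on synthetic
# data: Rin from the I-V slope, tau from an exponential fit, Cm = tau / Rin.
import numpy as np
from scipy.optimize import curve_fit

# I-V points from the first four hyperpolarizing sweeps (0 to -150 pA)
i_pA = np.array([0.0, -50.0, -100.0, -150.0])
v_mV = np.array([0.0, -7.6, -15.1, -22.4])       # steady-state deflections

r_in_MOhm = np.polyfit(i_pA, v_mV, 1)[0] * 1e3   # slope in mV/pA (= GOhm) -> MOhm

# Exponential fit of the capacitive voltage relaxation for one sweep
t_ms = np.linspace(0, 100, 500)
v_trace = -15.0 * (1 - np.exp(-t_ms / 18.0)) + np.random.normal(0, 0.1, t_ms.size)
popt, _ = curve_fit(lambda t, v0, tau: v0 * (1 - np.exp(-t / tau)),
                    t_ms, v_trace, p0=(-10.0, 10.0))
tau_ms = popt[1]

c_pF = tau_ms / r_in_MOhm * 1e3                  # ms / MOhm = nF; *1e3 -> pF
print(f"Rin = {r_in_MOhm:.0f} MOhm, tau = {tau_ms:.1f} ms, Cm = {c_pF:.0f} pF")
```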
Spontaneous activity was recorded and analyzed in a 60 s epoch. The firing regularity was quantified by the coefficient of variation (CV) of the ISI (inter-spike interval) which was calculated as the ratio of the standard deviation to the mean of ISI. The amplitude of AHP was measured from the threshold to the peak of the hyperpolarization following the action potential. To investigate the impact of kainic acid and 2-DG injection on rebound APs, a hyperpolarizing ramp current (1000 ms, − 300 pA with a slope of 0.345 pA/ms) followed by a depolarizing current pulse (100 pA for 300 ms) was applied. Burst activity was assessed in 125 s epochs.
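The regularity measure is simple enough to show in a few lines. Below is a small Python sketch of the CV of the ISI (the SD of the intervals divided by their mean); the spike trains are synthetic and meant only to show that rhythmic firing yields a low CV and Poisson-like firing a CV near 1.

```python
# Sketch of the firing-regularity measure described above: CV of the ISI
# over an epoch of spike times. Spike trains here are synthetic.
import numpy as np

def isi_cv(spike_times_s):
    isi = np.diff(np.sort(spike_times_s))
    return isi.std(ddof=1) / isi.mean()

rng = np.random.default_rng(2)
regular = np.cumsum(rng.normal(0.1, 0.005, 600))   # ~10 Hz, nearly rhythmic
irregular = np.cumsum(rng.exponential(0.1, 600))   # Poisson-like firing
print(f"regular CV = {isi_cv(regular):.2f}, irregular CV = {isi_cv(irregular):.2f}")
```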
Nissl Staining
To show the brain injury induced by intrahippocampal injection of kainic acid, Nissl staining was performed. Following anesthesia (100 mg/kg ketamine and 10 mg/kg xylazine), transcardial perfusion was performed with saline and 4% paraformaldehyde, 1.33% picric acid in 0.1 M phosphate buffer (pH 7.4). Then, the mice were decapitated, and the brains were removed and post-fixed in the same fixative. To verify the injection and electrode sites, the brains were cryoprotected in 20% sucrose buffer at 4 °C overnight, and coronal sections (20 μm) containing the hippocampus were serially cut using a cryostat (Leica CM1850, Germany). To evaluate cell loss, however, the brain blocks were processed and embedded in paraffin, and 8 µm sections were obtained using a rotary microtome (Cut5062, Germany) and mounted on gelatin-coated slides. Nissl staining (0.1% Cresyl violet) was performed afterwards. To assess the morphological properties of the CA1 pyramidal neurons (diameter of the soma), the long-axis length of the soma was measured in neurons containing a visible nucleus, nucleolus, and primary dendritic cone (from the neck of the dendritic cone to the opposite pole of the soma) using a computer-based image analysis system (Olympus BX60, DP12, Olysia Soft Imaging System, Japan).
NADPH Diaphorase Staining
The mice were anesthetized (100 mg/kg ketamine and 10 mg/kg xylazine, i.p) on day 21 (epileptic and control groups) or day 29 (epileptic + 2-DG group) (Fig. 1A) and were perfused transcardially with a cold fixative containing 4% paraformaldehyde and 1.33% picric acid in 0.1 M phosphate buffer (PB, pH 7.4), following 0.9% saline perfusion. The brains were then dissected out from the skull, post-fixed overnight in the same fixative at 4 °C, and cryoprotected by immersion in 20% sucrose until they sank. The brains were freeze-sectioned coronally at 50 µm thickness, between AP 1.2 and 2.4 mm posterior to the Bregma (Paxinos and Franklin, 2001), using a cryostat (Leica CM1850, Germany). NADPH-d staining was performed by incubating free-floating sections in a light-protected 0.1 M PB (pH 7.4) solution containing 1 mg/ml nicotinamide adenine dinucleotide phosphate diaphorase (β-NADPH-d), 0.1 mg/ml nitroblue tetrazolium (NBT), and 0.3% Triton X-100 (all reagents obtained from Sigma, St. Louis, MO, USA) at 37 °C for 1-2 h. The sections were then mounted on gelatin-coated slides and cover-slipped with Entellan. Seven sections from the anterior-posterior axis of the hippocampal CA1 area per animal were examined under light microscopy to localize NADPH-d+ neurons. The NADPH-reactive cells were photomicrographed with the same Olympus microscope as mentioned above and counted manually.
Statistical Analysis
SPSS 26 (IBM SPSS Statistics, Armonk, NY: IBM Corp) and GraphPad Prism 8 (GraphPad, La Jolla, CA, USA) were employed to compare the data between groups and determine significance levels. One-way ANOVA and Student's t-test were used for comparisons between independent variables, while the ANCOVA test was utilized to mask the effect of locomotion on anxiety behavior (see Results and Discussion). Pearson's test (or Spearman's test when a non-parametric test was needed) was employed to assess the correlation between variables. Numerical data are expressed as mean ± standard error of the mean (SEM), and a value of P < 0.05 was considered statistically significant.
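To make the ANCOVA logic concrete, here is a minimal Python/statsmodels sketch: each zero-maze parameter is modeled against group with open-field distance as the covariate, after first checking the group-by-covariate interaction that the test requires. The study used SPSS 26; the column names and synthetic data here are hypothetical.

```python
# Sketch of the ANCOVA used to mask the effect of locomotion: zero-maze
# parameter ~ group + distance, after checking the group x covariate
# interaction. Column names and data are synthetic, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "group": np.repeat(["control", "epileptic", "epileptic_2DG"], 8),
    "distance": rng.normal(20, 5, 24),              # open-field distance moved
})
df["open_arm_time"] = 10 + 1.5 * df["distance"] + rng.normal(0, 3, 24)

# Prerequisite: no significant group x covariate interaction
interaction = smf.ols("open_arm_time ~ C(group) * distance", df).fit()
print(anova_lm(interaction, typ=2))

# ANCOVA proper: group effect adjusted for locomotion
ancova = smf.ols("open_arm_time ~ C(group) + distance", df).fit()
print(anova_lm(ancova, typ=2))
```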
Epileptic Animals Showed Increased Locomotor Activity, but 2-DG Treatment was not able to Reverse Kainic Acid-Induced Enhancement in Locomotor Activity in the Epileptic Group
The epileptic and epileptic + 2-DG animals travelled a significantly longer distance (N = 8 in each group; P < 0.05 for both) than the control group (N = 8; Fig. 3A). Note that there was no significant difference between the control and control + 2-DG groups. Moreover, thigmotaxis was not significantly different between the four groups, indicating no substantial change in fear-motivated and anxiety behavior of the epileptic animals (Fig. 3A, C). Furthermore, the average velocity of movement during exploration in the open field was significantly higher in both epileptic and epileptic + 2-DG groups than in controls (P < 0.05 for both; Fig. 3B), indicating more locomotor activity in epileptic and epileptic + 2-DG animals; 2-DG treatment could not reverse these changes.
Epileptic Animals Expressed Reduced Anxiety Behavior, 2-DG Only Slightly Reversing the Changes
Less anxiety-like behavior in the epileptic animals compared to the control group was inferred from more time spent in the open arms (P < 0.01; Fig. 3C), more open-arm entries (P < 0.01; Fig. 3D), a smaller latency of the first entry to the open arm (P < 0.01; Fig. 3E), and a higher head dipping frequency (P < 0.05; Fig. 3F). Nevertheless, body stretching frequency, which is deemed to reflect anxiety level the most [30], was not significantly different between the groups (Fig. 3G). Even though 2-DG seems to have slightly reversed the alterations (number of open-arm entries and latency of the first open-arm entry), the drug group did not resemble the control group. 2-DG had a similar effect on control + 2-DG animals as it had on the epileptic animals (Fig. 3).
Increased Locomotor Activity Seems to be the Main Player in the Zero-Maze Test
ANCOVA (analysis of covariance) revealed that the locomotion state of the animals affects the zero-maze parameters as a covariate. Pearson's test (or Spearman's test) demonstrated a significant correlation between distance moved in the open field and all the zero-maze parameters except for body stretching frequency; there was no interaction between the group and locomotor variables, hence a necessary prerequisite of the ANCOVA test was met. After masking locomotion's effect with the ANCOVA test, animals in the control, epileptic, and epileptic + 2-DG groups showed no significant difference in the parameters of anxiety behavior except head dipping frequency (Table 1).
Epileptic Animals Appeared to Present with Depression-like Behavior, 2-DG Partially Leading to more Sweet Water Consumption
The ratio of sweet water to tap water consumption was lower in the epileptic animals, and this was partially reversed in the epileptic group receiving 2-DG (P < 0.05; Fig. 3H). It is worth noting that the depression-like behavior deduced from the results of this test suggests an all-or-none pattern. That is to say, each epileptic animal was either depressed (tap water consumption was much higher than sweet water consumption) or not depressed (sweet water consumption was much higher than tap water consumption). 2-DG had no considerable effect in the control + 2-DG group (Fig. 3H).
2-DG Failed to Suppress Interictal Sharp-Wave Complexes
A preferential enhancement of glycolysis in the activated brain during diseases, like epilepsy, has been reported [4] and based on this, the antiseizure effect of glycolysis inhibition has been proposed [5,7,9,23]. Therefore, we next examined the electrophysiological consequences of 2-DG treatment, which inhibits glycolysis competitively and prevents ATP production, in a kainic acid model of temporal lobe epilepsy.
Thirteen hours of continuous LFP recording were conducted from the dorsal hippocampus of the epileptic animals. Sharp waves were frequently seen in the epileptic group and were never observed in control animals (Fig. 4A, B). Analysis of these sharp-wave complexes revealed no significant difference in the ratio of total sharp-wave complex time (Fig. 4C), the ratio of mean sharp-wave duration (Fig. 4D), or sharp-wave frequency (Fig. 4E) between the epileptic + 2-DG (N = 5) and epileptic + saline (N = 5) groups. The ratios were obtained by dividing the quantities taken on day 28 by those taken on day 21 in each group (28th/21st). Although the parameters are reported for the whole 13 h, the sharp-wave complexes were also scrutinized in one-hour epochs during the whole 13 h, and no notable suppression was noted (data not shown). The LFP results suggest that 2-DG is not able to suppress epileptiform activity at the circuit level.

Fig. 3 (caption, partial): ... The epileptic + 2-DG group showed lower latency as well but did not reach the significance level. H Head dipping frequency was higher in epileptic and epileptic + 2-DG animals in comparison with control groups. I Body stretching frequency was not significantly different between the groups. J Sucrose preference is shown to be diminished in epileptic animals, 2-DG partially reversing it. Notably, 2-DG led to a slight alteration in the test results in the control + 2-DG (N = 6) group compared to the control animals. The bars represent the mean ± SEM. *P < 0.05, **P < 0.01

Table 1 Correlation between locomotion status and zero-maze parameters, and the effect of masking locomotion's interference in the zero-maze results. There was a significant correlation between locomotion status and the zero-maze parameters except for body stretching frequency (first row). ANCOVA results revealed no significant difference in the zero-maze parameters between control, epileptic, and epileptic + 2-DG animals after masking locomotion's interference (second row), whereas ANOVA results had shown a significant decrease in the anxiety status of the animals (last row; and Fig. 2). Head dipping frequency, however, was significantly different between the groups even after masking locomotion's effect. *P < 0.05, **P < 0.01
Fig. 4 Interictal epileptiform activity and 2-DG's circuit-level effect.
A and B 10-min epochs from control and epileptic animals respectively, indicating interictal epileptiform discharges (sharp-wave complexes) (five complexes are seen in this epoch). These complexes were never seen in control animals. The ratios of total sharp-wave complex time C, sharp wave frequency (frequency of the discharges within a complex) D, and sharp-wave duration (mean duration of each complex) E were not significantly different between epileptic + 2-DG and epileptic + saline groups during 13 h of recording.
Epileptic animals (N = 5) were recorded on day 21 (3 weeks after epilepsy induction), then received 2-DG for a week and were recorded again on day 28 (epileptic + 2-DG group). The epileptic + saline group (N = 5) was added to determine whether the increment in epileptiform activity following 2-DG injection was a result of 2-DG or of progression of the pathology. The ratios were obtained by dividing the data acquired on day 28 by the data on day 21. Data are shown as mean ± SEM.

By applying hyperpolarizing currents (Fig. 5A), patch-clamp recordings showed that the resting membrane potential remained unchanged following either induction of epilepsy or inhibition of glycolysis, in the epileptic group (N = 6) and the epileptic + 2-DG group (N = 7), when compared to the control group (N = 8; Fig. 5B). The membrane resistance was also not affected by intrahippocampal kainic acid injection and 2-DG treatment (Fig. 5C). However, the time constant was shorter in both epileptic and epileptic + 2-DG groups (P < 0.01 and P < 0.001, respectively; Fig. 5D) compared to the control group. Furthermore, the membrane capacitance was significantly decreased in these groups (P < 0.01 and P < 0.05, respectively; Fig. 5E) when compared to the control neurons. Changes in membrane capacitance were associated with a significantly smaller cell size of surviving pyramidal neurons (Figs. 5F-H) both in the epileptic (N = 4, P < 0.05) and in the epileptic + 2-DG group (N = 4, P < 0.05; Fig. 5I). As can be seen in Nissl-stained sections, induction of epilepsy also led to remarkable cell loss in the epileptic group, and inhibition of glycolysis did not stop the loss of neurons in the epileptic group receiving 2-DG (Fig. 2D). Whole-cell current-clamp recordings revealed that neurons from epileptic mice (N = 3) showed enhanced neuronal excitability when compared to control cells (Fig. 6A) and exhibited a significantly higher firing frequency (P < 0.05; Fig. 6C), but treatment with 2-DG (N = 5) following induction of epilepsy reduced the neuronal excitability and firing frequency (Fig. 6A, C).

Fig. 5 (caption, partial): ... groups. Resting membrane potential B as well as membrane resistance C were similar between the three groups. Membrane tau D and capacitance E, however, were significantly lower in epileptic and epileptic + 2-DG groups compared to control animals. The pyramidal layers of CA1 are shown in control F, epileptic G, and epileptic + 2-DG H groups. Note that only a small proportion of CA1 pyramidal neurons have survived following kainic acid injection (also see Fig. 1). I Demonstrates the diminished size of the surviving cells in epileptic and epileptic + 2-DG groups. Data are expressed as mean ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001. Scale bar: 50 µm
Furthermore, although the amplitude of the action potential was not affected by epilepsy induction when compared with the control group, inhibition of glycolysis in the epileptic mice that received 2-DG resulted in a significant increase in the AP amplitude (N = 4, P = 0.033; Fig. 6B, D). Induction of epilepsy following intrahippocampal injection of kainic acid led to a slightly slower depolarization phase of the action potential, as evidenced by a non-significant increase in the rising tau compared to control cells, but 2-DG treatment resulted in a significantly faster time constant of the rising phase of the action potential compared to both the control and epileptic groups (P < 0.05; Fig. 6E). We also found that neither induction of epilepsy nor 2-DG treatment had a significant effect on the AP half-width (Fig. 6F). Nevertheless, due to the afterdepolarization (ADP), which was only noted in the epileptic + 2-DG group (Fig. 6b), the duration of the AP (measured at threshold voltage) was significantly increased in both the epileptic and epileptic + 2-DG groups compared to the control group (P < 0.05; Fig. 6G). The amplitude of the AHP did not differ significantly between the groups (Fig. 6H). Inhibition of glycolysis, but not induction of epilepsy alone, was accompanied by a significant increase in the coefficient of variation of the interspike interval (ISI) when compared to either the control (P < 0.05) or epileptic (P < 0.05; Fig. 6I) groups. This may indicate irregularity of the firing pattern of the CA1 pyramidal neurons in epileptic mice that received 2-DG.
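For readers unfamiliar with the metric, the coefficient of variation of the ISI is simply the standard deviation of the intervals between successive spikes divided by their mean: values near 0 indicate clock-like firing, and larger values indicate irregular firing. Below is a minimal sketch of the computation; the spike-time array is hypothetical and not taken from the recordings reported here.

```python
import numpy as np

# Hypothetical spike times (in seconds) from a current-clamp trace
spike_times = np.array([0.10, 0.15, 0.22, 0.45, 0.48, 0.90, 0.93])

# Interspike intervals: differences between successive spike times
isi = np.diff(spike_times)

# Coefficient of variation of the ISI: std / mean
cv_isi = np.std(isi) / np.mean(isi)
print(f"CV of ISI = {cv_isi:.2f}")  # ~0 for regular firing, larger when irregular
```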
The latency of onset of the first post-inhibitory rebound spike, following a hyperpolarizing ramp current, was significantly shorter in the epileptic neurons than in the control ones (P < 0.01); likewise, it was significantly lower in the epileptic + 2-DG group compared to the control group (P < 0.05, Fig. 6J, K).
To further assess the impact of 2-DG treatment on the firing pattern, we analyzed burst activity quantitatively. Burst activity in hippocampal pyramidal neurons has already been demonstrated in epileptic cells [32]. Here, neurons obtained from all the epileptic animals showed severe burst activity (Fig. 7A); in the epileptic + 2-DG group, however, neurons from 2 of the 5 animals showed no burst activity, and the other three animals showed attenuated burst activity compared with the epileptic animals (Fig. 7B). Even though the differences in the parameters did not reach the significance level, the number of bursts decreased by 61% (Fig. 7C), the mean duration of each burst diminished by 207% (Fig. 7D), the mean AP number in each burst decreased by 71% (Fig. 7E), and the mean pause between the bursts increased by 41% (Fig. 7F). The patch-clamp results, hence, suggest that 2-DG could notably reverse the alterations that ensued from epilepsy induction at the cellular level. Moreover, it led to alterations in AP properties that did not resemble those of the control animals.
Kainic Acid Led to Severe Pyramidal and NADPH-d Positive Cell Loss in the Dorsal CA1 Pyramidal Layer
Nissl staining showed that the integrity of the dorsal CA1 pyramidal cell layer was disrupted and that remarkable cell loss, especially in CA1, was evident following intrahippocampal kainic acid administration (Fig. 2C, D). In parallel, NADPH histochemical staining for nitrergic neurons revealed that the number of NADPH-d+ cells in the pyramidal layer of dorsal CA1 was significantly decreased (P < 0.05) in the epileptic animals (N = 3 mice, 21 sections) when compared with the control group (N = 4 mice, 28 sections) (Fig. 8A, a, B, b, C, c, D, d, E).
2-DG Treatment Increased NADPH-d + Neurons in the Dorsal Hippocampus of Control Animals While Failing to Alter their Number in the Kainic Acid-Treated Animals
In the control + 2-DG group, an elevated number of NADPH-d+ cells was observed compared with the control group (20.71 ± 1.01 in the control + 2-DG group (N = 4 mice, 28 sections); p-value < 0.001; Fig. 8E). In the epileptic group receiving 2-DG, however, no significant alteration in the number of NADPH-d-stained neurons (N = 4 mice, 28 sections; p-value = 0.73; Fig. 8E) was noted compared with the epileptic group. In contrast, in the contralateral dorsal hippocampus of the epileptic and epileptic + 2-DG groups, the NADPH-d+ cell number was approximately threefold higher than on the ipsilateral side (P < 0.05; Fig. 8F, G). Note that the number of NADPH-d+ neurons in the contralateral hippocampus of these groups was even slightly higher than in the control group. There was no notable difference between the ipsilateral and contralateral hippocampus in the control group (data not shown).
Discussion
In the present study, we aimed to ascertain the extent to which glycolysis inhibition by 2-DG affects cellular function, electrophysiological measures, and epilepsy-related behavioral deficits using the mouse intrahippocampal kainic acid model. The unilateral intrahippocampal kainic acid model is a relevant animal model of medial temporal lobe epilepsy and provides a useful platform to investigate the mechanisms of epilepsy and the effectiveness of therapies for temporal lobe epilepsy, including inhibition of glycolysis by 2-DG. The anticonvulsant efficacy of 2-DG has been demonstrated in several rodent models of epilepsy, such as chemoconvulsant [33,34] and electroconvulsive [35] seizure models.
In this study, we used behavioral, electrophysiological, and histochemical approaches to elucidate the impact of 2-DG treatment on alterations induced by intrahippocampal microinjection of kainic acid. All experiments were conducted exclusively on male mice for the following reasons. First, there is agreement among studies that males have a higher incidence of epilepsy and are more vulnerable to epilepsy acquisition than females in human populations [36,37]; second, fluctuation of hormone levels across the ovarian cycle in females might differentially introduce variability into the data [38,39], thus making interpretation of comparisons more difficult.
Our behavioral results showed a decrease in anxiety behavior measured by the zero-maze test, but not the open field test, as well as an increase in locomotor activity and depression-like behavior in the epileptic animals. It has been elucidated that there is a link between hippocampal sclerosis and depression-like behavior in epileptic models [15,16]. In our model of temporal lobe epilepsy, hippocampal sclerosis is a well-known hallmark, as evidenced by significant cell loss in the CA1 area of the hippocampus. This is consistent with the findings reported in [40,41]. Although a previous study showed no depression behavior in the intrahippocampal kainic acid mouse model using tail suspension and forced swimming tests [14], here we report significant depression-like behavior in the NMRI mouse model of epilepsy induced by intrahippocampal injection of kainic acid, as evidenced by a significant reduction in the preference for the 3% sucrose solution compared to control mice. This finding is consistent with previous studies in epileptic mice [31,42]. The sucrose preference test was used as a measure of anhedonia, which is the core symptom of depression and a behavior commonly observed in patients with major depressive disorder [43].
Moreover, hippocampal lesions reduce anxiety behavior in rodents. Even so, there is considerable controversy when it comes to attributing behavioral disorders, including anxiety, to lesions of the dorsal and/or ventral hippocampus [44,45]. Locomotor activity, however, is affected by such lesions, especially dorsal hippocampus lesions [46]. Increased locomotor activity has already been reported in the intrahippocampal kainic acid model of temporal lobe epilepsy [14]; the same study also reported no altered anxiety behavior of kainic acid-treated mice in the elevated plus-maze test.
Nonetheless, here we explicitly show that both locomotor activity and anxiety behavior are altered in the epileptic group, as evidenced by increased locomotor activity and decreased anxiety behavior. Although cell loss is noted in the ventral hippocampus besides the dorsal part (the kainic acid injection site), sclerosis is mostly seen in the dorsal hippocampus [47]. This fact raises the hypothesis that increased locomotion due to dorsal hippocampal sclerosis may have affected the zero-maze parameters. To test this hypothesis, we first examined whether there was a correlation between locomotion state and each parameter of the zero-maze test. Interestingly, all the parameters except for body stretching frequency were significantly correlated with the locomotion state of the animals. Hence, using the ANCOVA test, we masked locomotion's effect on the parameters, and the results were striking; except for head dipping frequency, the other three parameters (open-arm time, number of open-arm entries, and latency of first open-arm entry) did not reach the significance level between the three groups, while the ANOVA test results indicated quite the opposite. All in all, at least to some extent, the significantly reduced anxiety-like behavior in the kainic acid-treated animals compared with the control animals was a result of augmented locomotor activity. Consistently, body stretching is deemed the most emotionally driven posture of the animal in the zero-maze apparatus [30]; here we showed that body stretching frequency is not affected by the locomotion state and is not significantly different between the three groups. After masking the interfering effect of increased locomotion on the zero-maze results, unchanged anxiety behavior in epileptic animals was noted; this was consistent with the unchanged anxiety extrapolated from the thigmotaxis assessment.

Fig. 6 (caption fragment, beginning truncated): ... (N = 3), and epileptic + 2-DG (N = 5) groups. Note the high frequency of action potential firing in the epileptic group, being largely reversed by 2-DG. B superimposed APs from the three groups clearly indicating the alterations (see below). Note the afterdepolarization (ADP) only in the epileptic + 2-DG group (b). Dramatically increased frequency of AP firing C in a 60 s epoch in epileptic animals compared to the control group; 2-DG, however, reversed the changes substantially. D AP amplitude augmented significantly compared to both epileptic and control groups. AP rising tau was significantly lower in the epileptic + 2-DG group compared to control animals; although not reaching the significance level, it was lower in the epileptic group compared to the epileptic + 2-DG group as well E. Even though AP half-width was much the same in the three groups F, AP duration was significantly higher in 2-DG-treated epileptic animals G owing to the ADP noted only in the epileptic + 2-DG group. H Afterhyperpolarization, although lower in the epileptic and epileptic + 2-DG groups, did not reach the significance level between the three groups. I 2-DG led to increased firing irregularity compared to the epileptic and control groups. J Latency of rebound APs is significantly smaller in epileptic and epileptic + 2-DG groups compared to control animals. K Superimposed rebound APs following a hyperpolarizing ramp current. APs occurred after injection of +100 pA current following the ramp current (dashed line); the APs are shown with a different time scale to show the latency of APs following the injections. The bars represent the mean ± SEM. *P < 0.05, **P < 0.01
Maia et al. have shown that rats treated with kainic acid were hyperactive in the open-field test and exhibited fewer anxiety-like behaviors [48]. They also reported that kainic acid treatment was associated with severe cell loss in the hippocampus. These findings are in line with our results.
In the present study, we showed that, at the cellular level, 2-DG largely suppressed the electrophysiological alterations induced by intrahippocampal injection of kainic acid, which produces one of the most drug-resistant epilepsy models, with a striking resemblance to human epilepsy because of the sclerosis seen in the hippocampus of the treated animals. However, at the neuronal circuit level, 2-DG seems to fail to suppress the sharp-wave complexes. This raises the question of whether the neuronal electrophysiological alterations induced by kainic acid injection are responsible for this incompetency of 2-DG. In this regard, Forte et al. [49] showed that 2-DG exerts its anticonvulsant effects through different mechanisms at the cellular and circuit levels; at the cellular level, K-ATP channels seem to play the major role, while at the circuit level, suppression of epileptiform activity depends on GABA-A receptor activation. This activation is mediated by increased pentose phosphate pathway (PPP) flux, as the glycolytic enzyme phosphofructokinase is inhibited by 2-DG and upstream substrates are consequently shifted to the PPP.

Fig. 7 Burst activity suppression by 2-DG. A and B 15 s epochs from epileptic and epileptic + 2-DG animals. In 2 out of 5 epileptic + 2-DG animals, burst activity was never seen. Mean burst number, mean AP number in each burst, mean burst time, and mean pause between two bursts did not reach the significance level between the two groups in 125-second epochs (C-F). Nevertheless, the percentages of alteration were substantial (-61%, -207%, -71%, +41%, respectively), indicating remarkable burst activity suppression by 2-DG (N = 3 in epileptic and epileptic + 2-DG groups).

Fig. 8 (caption fragment): ... however, NADPH-d+ cells were significantly higher compared to the ipsilateral dorsal hippocampus both in epileptic F and epileptic + 2-DG G groups. Note that the number of NADPH-d+ neurons is slightly higher in the contralateral CA1 of epileptic and epileptic + 2-DG animals compared to the control animals. The NADPH-d+ cells are indicated by arrows. Data are shown as mean ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001. Scale bars: A 300 µm, B 100 µm
Even though inhibited IPSPs are postulated to be the primary mechanism of hyperexcitability in surviving CA1 pyramidal neurons [17,50], our analysis of the passive properties of CA1 neurons showed decreased membrane capacitance, which leads to a decrease in the membrane time constant and to hyperexcitability. This capacitance decrease is likely due to the diminished cell size seen in the intrahippocampal kainic acid-treated animals, as we demonstrated here. However, consistent with previous reports, 2-DG did not alter the passive membrane properties of CA1 pyramidal neurons [7]. Furthermore, alterations in the active properties of CA1 pyramidal neurons in the epileptic mice were associated with a decrease in the rebound AP latency following a hyperpolarizing current, which is evidence of neuronal hyperexcitability [51]. However, here we revealed that 2-DG can substantially reduce the firing rate in epileptic animals. Since inhibition of glycolysis is associated with ATP deprivation, there is posited to be a link between the firing rate alteration following 2-DG injection and subsequent K-ATP channel activation [52]. One probable explanation for the reduction in neuronal excitability induced by the 2-DG treatment could be the prolonged AP duration and the consequent increase in the relative refractory period, which is likely to be a key player in burst activity suppression and decreased AP frequency throughout the recording.
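The passive-property argument above follows directly from the standard membrane model; a brief worked relation (standard textbook electrophysiology, not taken from this paper's data) makes the link explicit:

```latex
% Passive membrane relation (standard RC/cable model):
\tau_m = R_m C_m
% With membrane resistance unchanged (Fig. 5C) and capacitance reduced
% (Fig. 5E), the time constant must fall:
C_m \downarrow,\; R_m \approx \text{const} \;\Rightarrow\; \tau_m \downarrow
% and since C_m scales with membrane surface area, a smaller soma
% (Fig. 5I) yields a smaller C_m.
```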
Moreover, our findings showed a decrease in the AP rising tau as well as an increase in the amplitude and duration of APs in the 2-DG-treated epileptic mice. Since inhibition of glycolysis by 2-DG could be associated with attenuation of the ATP level in neurons, this, in turn, may affect the function of ion channels and/or the Na+-K+ ATPase that are involved in AP generation and thereby change the AP waveform. Since there is tight coupling between the Na+-K+ pump and glycolysis [53], its functional disruption causes severe alteration in neuronal activity [54-56]. Furthermore, inhibition of glycolysis by 2-DG has also been reported to suppress synaptic transmission in the CA1 region of the hippocampus [57].
Next, we assessed whether induction of epileptic activity and glycolysis inhibition modify NADPH-diaphorase activity, a histochemical marker of nitric oxide synthase (NOS), since several reports confirm the role of the nitric oxide system in the pathophysiology of mood disorders, including depression [58,59], and of epilepsy [60-62]. A reduced number of NADPH-d+ cells in the dorsal hippocampus of i.p. and i.c.v. kainic acid-treated animals has already been reported [22,63]. Here we show that NADPH-d+ cells are fewer in number in the ipsilateral dorsal hippocampal CA1 in the intrahippocampal kainic acid model of epilepsy. Consistent with previous reports [64], however, in the contralateral dorsal CA1 of epileptic animals (and also the epileptic + 2-DG group), NADPH-d+ cells were slightly more frequent than in control animals. This could be a compensatory mechanism attempting to augment NO signaling in the contralateral hippocampus after the decrease in NADPH-d+ cell number on the kainic acid-treated side. Although 2-DG was unable to increase the NADPH-d+ cell number in the epileptic animals receiving the drug, it led to an increase in the number of NADPH-d+ cells in the 2-DG-treated control animals. It could be speculated that an increased NADPH concentration in the interneurons, owing to pentose phosphate pathway (PPP) potentiation following glycolysis inhibition by 2-DG, leads to such augmentation. To elaborate, increased NADPH will bring about increased reduction of nitro blue tetrazolium to diformazan (the visible dye) inside neurons containing NADPH diaphorase that were not NADPH-d+ when PPP flux, and subsequently the NADPH concentration, was low. Interestingly, NADPH-d+ neurons in the hippocampus have been demonstrated to release GABA too, indicating that NO acts as a paracrine/retrograde co-transmitter [65]. If so, loss of these GABAergic neurons could be a cause of the disappeared IPSPs following the kainic acid injection, mentioned above. Additionally, it has recently been argued that NADPH diaphorase activity in aldehyde-fixed tissue is not enzymatic; rather, it is mediated by NO-containing factors that promote the reduction of nitro blue tetrazolium to diformazan [66].
In conclusion, while at the cellular level 2-DG treatment significantly reverses the electrophysiological alterations following epilepsy induction by intrahippocampal kainic acid injection, it appears incapable of suppressing circuit-level changes (as shown by the interictal epileptiform activity). In the behavioral part of our study, on the other hand, only partial improvement was noted, which could be a direct consequence of the hypometabolism induced by 2-DG. Moreover, while glycolysis inhibition by 2-DG was associated with an increase in the number of NADPH-d+ cells in the control group, its application was unable to alter NADPH-diaphorase activity in the epileptic animals, as it did not change the number of NADPH-d+ neurons, which may imply a severe NADPH-d+ cell loss.
Limitation of the Present Study
It should be noted that, due to the severe cell loss in CA1, finding healthy neurons was a daunting task, as the remaining neurons were rather fragile. Consequently, the sample size is rather small. Even so, for the first time, as far as we know, in the intrahippocampal kainic acid model, we report the passive membrane properties and action potential (AP) properties of dorsal CA1 pyramidal neurons using whole-cell patch-clamp recording. Furthermore, since in the present study we did not perform voltage-clamp recordings of ion channels, the discussion on the cellular basis of the altered neuronal excitability remains speculative. | 2022-09-07T06:17:57.680Z | 2022-09-05T00:00:00.000 | {
"year": 2022,
"sha1": "f892a07bd938a1e0d3a4575ca639d25a421f70b1",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11064-022-03740-8.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "1b56d5c721f0bf9584b3147fd55757a9be33a96c",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254127903 | pes2o/s2orc | v3-fos-license | Validated HPLC–PDA methodology utilized for simultaneous determination of Etoricoxib and Paracetamol in the presence of Paracetamol toxic impurities
Etoricoxib (ETO), Paracetamol (PCM), and two toxic impurities of Paracetamol, impurity K (4-aminophenol (PAP)) and impurity E (para-hydroxy acetophenone (PHA)), were separated using a simple and selective HPLC method that was tested for the first time. PCM is a commonly used analgesic and antipyretic medication that has recently been incorporated into COVID-19 supportive treatment. Pharmaceuticals combining PCM with other analgesic-antipyretic drugs like ETO help to improve patient compliance. The studied drugs and impurities were separated on a GL Sciences Inertsil ODS-3 (250 × 4.6) mm, 5.0 µm column, and linear gradient elution was performed using 50 mM potassium dihydrogen phosphate adjusted to pH 4.0 with ortho-phosphoric acid and acetonitrile as the mobile phase at a 2.0 mL/min flow rate at 25 °C, with UV detection at 220 nm. The linearity range was 1.5-30.0 µg/mL for ETO and PCM and 0.5-10.0 µg/mL for PAP and PHA, with correlation coefficients (r) for ETO, PCM, PAP, and PHA of 0.9999, 0.9993, 0.9996, and 0.9998, respectively. The proposed method is well suited for routine analysis in quality control laboratories.
Introduction
The global spread of COVID-19 has resulted in an unprecedented disaster. In the early stages, the most common symptom is fever due to the onset of a COVID-19-mediated cytokine storm. NSAIDs are among the most extensively used medicines owing to their efficacy in reducing pain and inflammation and their inclusion in the WHO's Model List of Essential Medicines [1].
The presence of impurity in a drug product is mostly a quality issue, as it may compromise the drug's efficacy and may cause potential adverse effects on patients [12].
Each impurity found in a pharmaceutical product must be extensively inspected, both qualitatively and quantitatively, and undergo toxicological testing if necessary [13]. Accordingly, it is necessary to develop a technique for determining pharmaceuticals as well as their impurities at minute levels that may be harmful and have toxic effects.
Both PCM and ETO have been determined by several reported methods, such as HPLC [49-53], HPTLC [54], and spectrophotometric methods [55]. The published methods disregarded the possible impact of PCM impurities such as PHA (hepatotoxic) and PAP (nephrotoxic and teratogenic), despite the toxicity of these impurities. Also, according to the literature, there is no chromatographic method for the determination of PCM and ETO in the presence of PCM impurities.
This work aims to develop, optimize, and validate a simple, sensitive, and selective RP-HPLC technique to be the first method for the simultaneous determination of PCM, ETO, and PCM potential impurities in bulk material and their pharmaceutical formulation.
Pure standard
Pure standard ETO and PCM were donated by SIGMA Pharmaceutical Industries, Cairo, Egypt, and according to the company's analytical certificate, the purity of ETO and PCM was 99.5% and 99.94%, respectively. PAP and PHA were purchased from Sigma-Aldrich, and the purity of PAP and PHA was certified to be 99.73% and 99.61%, respectively.
Reagents
All chemicals and solvents employed in this experiment were of analytical grade. HPLC-grade acetonitrile and methanol were purchased from Sigma-Aldrich, Belgium. Potassium dihydrogen phosphate was supplied by El-NASR Pharmaceutical Chemical Co. (Abu-Zabaal, Cairo, Egypt). Double-distilled water (Otsuka Pharmaceutical Co., Cairo, Egypt) was used. Phosphate buffer (pH = 4) was prepared by dissolving about 6.8 g of potassium dihydrogen phosphate in 1000 mL of double-distilled water; the pH was adjusted using orthophosphoric acid (Sigma-Aldrich, Switzerland).
Stock and working standard solutions
In four separate 25 mL volumetric flasks, 25 mg of ETO, PCM, PAP, and PHA were accurately weighed before being dissolved in 15 mL of methanol and sonicated for 10 min. The volume was raised to the mark with methanol, yielding a final concentration of 1.0 mg/mL. Then, from their standard stock solutions, 5, 5, 1.3, and 1.3 mL of ETO, PCM, PAP, and PHA, respectively, were transferred into four separate 50 mL volumetric flasks and the volume was raised to the mark with methanol to obtain standard working solutions of 100, 100, 26, and 26 µg/mL for ETO, PCM, PAP, and PHA, respectively.
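As a quick sanity check of the dilutions described above, the working-standard concentrations follow from the usual C₁V₁ = C₂V₂ relation; the short sketch below simply reproduces that arithmetic (the values are the ones stated in this section, not new data).

```python
def diluted_conc(c1_ug_ml: float, v1_ml: float, v2_ml: float) -> float:
    """C1*V1 = C2*V2 -> C2 = C1*V1/V2 (simple volumetric dilution)."""
    return c1_ug_ml * v1_ml / v2_ml

stock = 1000.0  # 1.0 mg/mL = 1000 µg/mL stock for each compound

# 5 mL of ETO/PCM stock diluted to 50 mL; 1.3 mL of PAP/PHA stock to 50 mL
print(diluted_conc(stock, 5.0, 50.0))   # 100.0 µg/mL for ETO and PCM
print(diluted_conc(stock, 1.3, 50.0))   # 26.0 µg/mL for PAP and PHA
```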
Chromatographic conditions
The components were separated using a GL Sciences Inertsil ODS-3 column (250 × 4.6) mm, 5.0 µm with a linear gradient elution of 50 mM potassium dihydrogen phosphate adjusted to pH 4.0 with ortho-phosphoric acid (solvent A) and acetonitrile (solvent B). The applied gradient program is shown in Table 1. The solvents were filtered through a 0.45 µm Millipore membrane filter and degassed ultrasonically for 15 min before being pumped into the HPLC system at a flow rate of 2.0 mL/min. The PDA detector wavelength was 220 nm, and all chromatographic separations were performed at room temperature (25 ± 2 °C).
Construction of calibration graphs
Into 10 mL volumetric flasks, aliquots of the standard working solutions were diluted with the mobile phase (50:50 v/v, phosphate buffer:acetonitrile) to achieve concentrations ranging from 1.5 to 30 µg/mL for ETO and PCM and 0.5-10 µg/mL for PAP and PHA. Diluted standard solutions of varying concentrations were subsequently injected by auto-sampler (80 µL injection volume) in triplicate into the HPLC system and chromatographed under the above-mentioned chromatographic conditions. Using the PDA detector, chromatographic peaks were recorded at 220 nm.
Analysis of tablet formulation
Ten tablets were weighed to determine the mean weight and then finely ground. An amount of the crushed powder corresponding to 18.5 mg of ETO and 100 mg of PCM was accurately weighed and dissolved in 50 mL of the mobile phase (50:50 v/v, phosphate buffer:acetonitrile) in a 100 mL volumetric flask. After 15 min of sonication, the volume was completed to the mark with the same solvent. The solution was then filtered through a 0.45 µm membrane filter, yielding an initial stock solution claimed to contain 0.18 mg/mL of ETO and 1.0 mg/mL of PCM. The obtained solution was further diluted with the mobile phase to reach final concentrations of 1.8 µg/mL and 10 µg/mL for ETO and PCM, respectively, and then injected in triplicate. The separation was accomplished using the chromatographic conditions described above. The concentrations of the listed drugs were determined using the calculated regression equations. A standard addition technique was applied to further evaluate the suggested method's accuracy.
Method validation
The proposed method was validated in accordance with ICH guidelines [56].
Results and discussion
The current work intends to develop and validate the first chromatographic method for determining PCM and ETO simultaneously as well as PCM toxic impurities.
Method development and optimization
The chromatographic settings were optimized to produce a good resolution of the investigated components with sharp symmetric peaks in a short run time.
Following an examination of different solvent compositions, methanol was chosen for preparing the stock and working standard solutions since all standards displayed acceptable solubility at the examined concentrations. Components of interest were separated using the suggested HPLC gradient elution method combined with UV detection at 220 nm. Figure 2 shows typical chromatograms of well-defined symmetrical peaks for PAP, PCM, PHA, and ETO mixtures.
Various experimental factors which affect separation were studied, including:
Choosing a suitable wavelength
We adjusted the PDA detector at several wavelengths to find the optimal one regarding the sensitivity and peak shape of the components under study, and all compounds had reasonable UV absorption at 220 nm. As a result, the 220 nm wavelength was selected for the investigation and quantification of ETO, PCM, and impurities of PCM.
Selection of the column
The HPLC column was chosen after trying several packing materials. C8 and C18 columns, such as the GL Sciences Inertsil ODS-3 column (250 × 4.6) mm, 5.0 µm, and the Kinetex C8 (4.6 × 100 mm, 5 µm; Phenomenex, USA), were tried. Employing a C8 column, some of the components (polar components like PAP and PCM) were retained in the column with a long separation time, most likely due to the high polarity of the C8 column. The separation was enhanced, and the best results with excellent sharp peaks were obtained, by utilizing the GL Sciences Inertsil ODS-3 column (250 × 4.6) mm, 5.0 µm instead of the C8 column. Furthermore, the influence of column temperatures ranging from 25 to 40 °C was examined.
However, no enhancement in terms of analysis time was found upon increasing column temperature due to the low viscosity of the mobile phase. As a result, no significant variations in retention times were seen over the temperature range investigated. Finally, the temperature was kept at 25 °C.
Selection of mobile phase
The examined substances show a significant difference in lipophilicity (log P) of 0.47, 0.51, 1.23, and 2.79 for PAP, PCM, PHA, and ETO, respectively.
At the beginning of the study, isocratic elution was used to separate the four components using varied ratios of water/methanol, water/ethanol, and water/acetonitrile as mobile phases.
In these trials, there was either inadequate separation or overlapping peaks, particularly of the PCM and PAP peaks, as their polarity is nearly the same, while ETO took more time to be eluted. We therefore shifted to gradient elution with the same mobile phases. Acetonitrile was more suitable for the separation of the studied components than methanol and ethanol, as acetonitrile is less polar than methanol and ethanol. In addition, the UV cut-off of acetonitrile (190 nm) is lower than that of methanol (210 nm) and ethanol (210 nm).
Buffer was tried instead of water, and the pH was varied from 2 to 8 using phosphate buffer. A pH of 4.0 resulted in an obvious improvement, giving sharp peak shapes and excellent simultaneous separation of the mixture with symmetric peaks in a short run time.
Finally, a linear gradient elution using 50 mM potassium dihydrogen phosphate adjusted to pH 4.0 with ortho-phosphoric acid (solvent A) and acetonitrile (solvent B) was conducted as described in Table 1.
The optimal resolution, unambiguous baseline separation with adequate retention times, and symmetric peaks of the investigated drugs were obtained. UV detection was carried out at 220 nm. Good resolution was achieved with retention times (t_R) of 1.728 ± 0.01, 3.413 ± 0.01, 5.003 ± 0.01, and 7.275 ± 0.01 min for PAP, PCM, PHA, and ETO, respectively, as shown in Fig. 2.

Table 2 Parameters required for the proposed HPLC method's system suitability testing. (a) Resolution: R_s = 2(t_RB - t_RA) / (W_B + W_A), where t_R is the retention time and W is the peak width, calculated for each pair of successive peaks. (b) Selectivity: α = k'_2 / k'_1.
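Using the Table 2 formulas, resolution and selectivity can be computed directly from the retention data; the sketch below applies them to two neighboring peaks, with the peak widths and capacity factors chosen as illustrative placeholders (only the retention times are reported here).

```python
def resolution(t_ra: float, t_rb: float, w_a: float, w_b: float) -> float:
    """Rs = 2(tRB - tRA) / (WB + WA) for two successive peaks."""
    return 2.0 * (t_rb - t_ra) / (w_b + w_a)

def selectivity(k1: float, k2: float) -> float:
    """alpha = k'2 / k'1 (ratio of capacity factors)."""
    return k2 / k1

# Reported retention times for PAP and PCM; widths are assumed for illustration
print(resolution(t_ra=1.728, t_rb=3.413, w_a=0.3, w_b=0.4))  # ~4.8
print(selectivity(k1=1.2, k2=2.5))                            # assumed k' values
```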
Effect of flow rate
Different flow rates of the mobile phase, ranging from 1.0 to 2.0 mL/min, were tried to separate the analytes' peaks. The best flow rate for effective elution of the drugs was 2.0 mL/min, giving better peak shape and a shorter retention time for all analytes while keeping acceptable peak resolution.
System suitability testing
To assess the performance of the operating system for the required separation, Table 2 shows the results of system suitability parameters, and satisfactory results were obtained according to USP pharmacopoeia [57], demonstrating perfect baseline separation of the separated peaks and high selectivity of the suggested method.
Robustness
The robustness of the HPLC technique was evaluated by analyzing the influence of slight modifications of the chromatographic conditions, such as the percentage of buffer (±1%) in the mobile phase, the flow rate of the mobile phase (2.0 ± 0.1 mL/min), and the pH (4 ± 0.2). Even minor deviations from the optimal conditions had no discernible effect on the retention times, tailing factor, or resolution of the examined components, as proven by the low %RSD values, indicating the reliability of the proposed method during routine use (Table 3).
Method validation
The proposed method was validated in accordance with ICH guidelines [56], as shown in Table 4.
The suggested method's calibration curves were generated to illustrate the relationship between the mean peak area and the corresponding concentration in the range of 1.5-30 µg/mL for ETO and PCM and 0.5-10 µg/mL for PAP and PHA, as shown in Fig. 3. The regression equations were computed, and the results are shown in Table 4.
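For reference, the regression parameters reported in Table 4 are obtained by ordinary least-squares fitting of mean peak area against concentration; a minimal sketch is shown below, with placeholder data points standing in for the actual triplicate peak areas (which are not listed in the text).

```python
import numpy as np

# Placeholder calibration data: concentration (µg/mL) vs. mean peak area
conc = np.array([1.5, 5.0, 10.0, 15.0, 20.0, 30.0])
area = np.array([210.0, 705.0, 1410.0, 2120.0, 2815.0, 4230.0])  # illustrative

slope, intercept = np.polyfit(conc, area, 1)   # least-squares regression line
r = np.corrcoef(conc, area)[0, 1]              # correlation coefficient

print(f"area = {slope:.2f} * conc + {intercept:.2f}, r = {r:.4f}")
```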
The accuracy of the proposed method was assessed across the required range by analyzing three different concentrations of pure samples in triplicate, and the mean of the percentage recoveries ± SD was calculated (100.21 ± 0.973, 100.58 ± 0.858, 98.17 ± 1.362, and 99.32 ± 1.163) for ETO, PCM, PAP, and PHA, respectively, confirming accuracy of the method as shown in Table 4.
Repeatability and intermediate precision were also investigated: three different concentration levels within the specified range were analyzed either within the same day or on three successive days to assess intra-day and inter-day precision, respectively. As proven by the low %RSD values, the suggested analytical procedure yielded data with acceptable precision (Table 4).
The limit of detection (LOD) and limit of quantitation (LOQ) were calculated for ETO, PCM, PAP, and PHA based on the standard deviation of the intercept (SD) and the slope obtained from the calibration curve of each component. The low LOD values (0.304, 0.397, 0.113, and 0.062 µg/mL) obtained demonstrate the proposed method's great sensitivity for ETO, PCM, PAP, and PHA, respectively, as shown in Table 4.
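The ICH convention referred to here computes LOD = 3.3·SD/slope and LOQ = 10·SD/slope from the calibration statistics; the factors 3.3 and 10 are the standard ICH Q2(R1) values and are assumed, since the text cites the guideline without restating them. A minimal sketch:

```python
def lod(sd_intercept: float, slope: float) -> float:
    """Limit of detection per ICH Q2(R1): 3.3 * SD / slope."""
    return 3.3 * sd_intercept / slope

def loq(sd_intercept: float, slope: float) -> float:
    """Limit of quantitation per ICH Q2(R1): 10 * SD / slope."""
    return 10.0 * sd_intercept / slope

# Illustrative calibration statistics (not the paper's actual values)
print(lod(sd_intercept=13.0, slope=141.0))  # ~0.30 µg/mL
print(loq(sd_intercept=13.0, slope=141.0))  # ~0.92 µg/mL
```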
Assay of pharmaceutical dosage form
The proposed method successfully determined ETO and PCM in Intacoxia-P tablets. The standard addition technique was employed to validate the proposed method for determining ETO and PCM selectively in the presence of formulation excipients and additives, and good results were obtained (Table 5; Fig. 4).
Statistical comparison
Results obtained from the analysis of pure ETO and PCM were statistically compared with those obtained by the reported HPLC method [49]. The comparison revealed that the calculated F and Student's t-test values are less than the tabulated ones, revealing no significant difference between the proposed and reported methods, as shown in Table 6. The proposed method was also compared to the other published methods [49-53], and the findings showed that it was more sensitive than the published ones. The proposed method excels over the published ones in that it separates and quantifies PCM toxic impurities as well as the PCM and ETO mixture, which had not been published before, as shown in Table 7.
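The statistical comparison described here is the usual two-sample Student's t-test on the means together with an F-test on the variances; a small sketch of both computations is given below, with made-up recovery data standing in for the actual assay results.

```python
import numpy as np
from scipy import stats

# Hypothetical %recovery values from the proposed and reported methods
proposed = np.array([100.2, 99.8, 100.5, 99.9, 100.3])
reported = np.array([99.9, 100.4, 100.1, 99.6, 100.2])

# Student's t-test on the means (equal-variance form, as in the paper)
t_stat, p_val = stats.ttest_ind(proposed, reported)

# F-test on the variances: ratio of larger to smaller sample variance
v1, v2 = np.var(proposed, ddof=1), np.var(reported, ddof=1)
f_stat = max(v1, v2) / min(v1, v2)

print(f"t = {t_stat:.3f} (p = {p_val:.3f}), F = {f_stat:.3f}")
```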
Conclusion
A validated, robust, precise, accurate, and selective gradient RP-HPLC method was used to determine PCM and ETO in pharmaceutical preparations without interference from PCM impurities. The methodology was validated in agreement with the ICH recommendations. The results show that using a linear gradient system with respect to the mobile phase allows the separation of studied drugs and impurities with high resolution and relatively short analysis time. The proposed method was shown to be suitable for use in quality control laboratories for determining PCM in pure form or pharmaceutical dosage forms with ETO. | 2022-12-02T15:02:34.392Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "b30a8bbca5382ef8aef2d7586be12c56223a66fc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "b30a8bbca5382ef8aef2d7586be12c56223a66fc",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
372059 | pes2o/s2orc | v3-fos-license | Influence of Pyrolysis Temperature on Rice Husk Char Characteristics and Its Tar Adsorption Capability
A biomass waste, rice husk, was inspected by thermoanalytical investigation to evaluate its capability as an adsorbent medium for tar removal. The pyrolysis process was applied to the rice husk material at different temperatures of 600, 800 and 1000 °C with a 20 °C/min heating rate, to investigate two topics: (1) the influence of temperature on the characteristics of rice husk char and (2) the adsorption capability of rice husk char for tar removal. The results showed that, subsequent to high-temperature pyrolysis, rice husk char became a highly porous material, which was suitable as a tar removal adsorbent with the ability to remove tar effectively. In addition, the char characteristics and tar removal ability were significantly influenced by the pyrolysis temperature.
Introduction
In agricultural countries, large quantities of agricultural residues or biomass wastes, such as rice husk and wood, are produced every year. The world annual production of rice is more than 540 million metric tons [1]. These biomass wastes are one of the main assets for renewable energy. Consequently, there are numerous prominent technologies to transform biomass into energy. The most widely known are thermal processes such as combustion, gasification or pyrolysis. Pyrolysis is a decomposition process of biomass at high temperature in the absence of oxygen. In the end, after transient high thermalization, producer gas, a carbon-rich residue called biomass char, and tar are produced. The proportion of these products depends on the operating conditions [2]. The producer gas can be used for various functions such as chemical production, as a heat resource, for power generation, etc. Biomass char can also be used as a potential resource in diverse industries, depending on its characteristics, while the excess tar must be eliminated from the producer gas for downstream applications so that the problem of tar blockage in pipes and engines can be prevented.
The biomass char is a solid carbonaceous residue with a high content of fixed carbon, which can be used directly as a fuel, fertilizer or precursor for activated carbon production [3]. One of the options for char utilization, which has been widely reported, is for adsorption purposes. Thermal processes develop pores on the biomass char surface, which make the char capable of acting as an adsorption medium. Throughout the process, volatile matter is released, while the physical nature of the char is extensively altered. The raw biomass properties strongly influence the chemistry of char formation, with the pyrolysis temperature and the heating rate being the main operating parameters that have a strong influence on the char's structure.
There are a number of studies dealing with the relationship between the pyrolytic conditions and the char structure [1,3-6]. Operational parameters such as the heating rate, the reactor temperature and the residence time play the most significant roles in the operational control of the pyrolysis process. These factors influence both the product distribution and the product characteristics.
Various biomasses were pyrolyzed in a packed bed reactor at 500 °C with a solid residence time of 1 h [4]. The pore evolution and development were studied by BET and SEM characterization. The chars were seen to have surface areas as high as 600 m²/g and were recommended for cheaper carbon adsorbent production. Rice husk char was studied in a fixed bed pyrolyzer at 200-650 °C at intervals of 50 °C with a 10 °C/min heating rate, aiming at determining the characteristics of the charcoal formation and its applicability as a solid fuel [1]. The relationship between gas composition/char properties and the pyrolysis temperature of rice husk was also analyzed [5]. The results showed that the char yield decreased in the temperature range from 600 to 1000 °C. The maximum porosity appeared at 900 °C. Another study also showed that the porosity increased gradually during a rapid pyrolysis reaction process [6]. Likewise, when different biomass samples (almond shell, walnut shell, almond tree prunings and olive pits) were subjected to thermoanalytical investigation to evaluate their thermal behaviour at a pyrolysis temperature of 600 °C with a residence time of 1 h [3], it was reported that at 600 °C most of the volatile matter was removed. The pyrolysis chars were further subjected to steam gasification for activated carbon production. The rate of thermal decomposition of the parent material played a determinant role in the porosity of the activated carbon produced, and the pore sizes tended to be broader (greater volume of meso- and macropores) with slow pyrolysis in the first stage. Activated carbons produced from almond tree prunings were recommended as the best solution for adsorption purposes. The other feedstocks were also applicable for gas adsorption applications. The retention time was found to affect the phenol adsorption capacity, and modification-free carbonaceous materials (bio-chars) were suggested to be used as adsorbents rather than fuel for their greater economic prospects [7].
Seeing that the main operating parameters affecting the conversion of biomass into the char phase are the pyrolysis temperature and the heating rate, this research investigates the influence of the pyrolysis temperature on the characteristics of rice husk char at 600, 800 and 1000 °C with a 20 °C/min heating rate and a 1 h holding time at the target temperature.
Furthermore, the utilization of biomass char for tar removal adsorption purposes has a big advantage, considering the universal need for a cheap source of tar adsorbents in various biomass industries. In our previous paper [8], gasified rice husk char was proved to be capable of tar removal. Moreover, many researchers have reported work on the tar removal performance of biomass char in gasification processes, which was also reviewed in the previous papers [8-12], confirming that rice husk char is capable of tar removal. In this research, the tar removal ability of the char produced at each pyrolysis temperature was investigated. Therefore, the aim of this work can be summarized in two topics: (1) the influence of the pyrolysis temperature on the characteristics of rice husk char at 600, 800 and 1000 °C and (2) the adsorption capability for tar removal of rice husk char produced at the different pyrolysis temperatures of 600, 800 and 1000 °C.
Rice Husk Material
Rice husk feedstock obtained from Thailand was prepared by drying in an oven at 105 °C for 8 h to achieve complete moisture elimination before packing into the pyrolyzer. The characterization of the rice husk feedstock is shown in Table 1.
Rice Husk Char Preparation
The pyrolysis process was performed in a batch-type lab-scale facility with a nitrogen feed of 1.5 L/min, as shown in Figure 1. The reactor was a made-to-order quartz glass unit (heat resistant to 1200 °C) and was covered with a heater furnace. The sample was installed in the reactor before heating at a rate of 20 °C/min to reach the target temperatures of 600, 800 and 1000 °C, and was then kept at the target temperature for a 1 h holding time. Figure 2 shows the pyrolysis time intervals used in this experiment, and Table 2 lists the experimental conditions. The rice husk char was screened at 5-10 mm and dried in an oven at 105 °C for 8 h before being used as an adsorbent in the tar removal experiments. Rice husk chars pyrolyzed at the three different temperatures were studied for their tar removal ability by an adsorption process. Adsorption studies were performed using a fixed-bed type adsorber, which was installed downstream of the reformer. The experimental setup is shown in Figure 3 and the setup conditions are shown in Table 3.
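As a quick reference for the char-preparation schedule above, the duration of each heating ramp follows directly from the 20 °C/min rate; the short sketch below computes it for the three target temperatures, assuming a room-temperature start of about 25 °C (the actual starting temperature is not stated in the text).

```python
def ramp_minutes(t_target_c: float, t_start_c: float = 25.0,
                 rate_c_per_min: float = 20.0) -> float:
    """Time to heat from t_start to t_target at a constant rate."""
    return (t_target_c - t_start_c) / rate_c_per_min

for target in (600, 800, 1000):
    # Each run then holds at the target for 60 min before cooling
    print(f"{target} °C: ramp ≈ {ramp_minutes(target):.1f} min + 60 min hold")
```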
Rice husk feedstock (screened at 0.125-0.5 mm) was fed into the screw feeder at a feed rate of 0.6 g/min with a nitrogen carrier gas flow of 1.5 L/min. The feedstock was introduced into the pyrolyzer (SUS310 stainless steel; inner diameter 30 mm, height 280 mm), which was combined with a reformer (SUS310 stainless steel; inner diameter 25 mm, height 1,300 mm). The pyrolyzer and the reformer were thoroughly controlled by an electric heater at 800 °C.
The pyrolysis gas released at the bottom of the reformer was introduced into the adsorption bed through a high-temperature-resistant tube connector without any additional heating. The adsorption bed was kept at ambient temperature (25-28 °C) and was filled with biomass char prepared at each temperature (600, 800 and 1000 °C), with a 100 mm bed height. At the exit of the adsorption bed, a tar measurement line was installed, consisting of ten impingers, each filled with 100 mL of isopropanol and kept in cold baths, as shown in Figure 3. After passing through the adsorption bed, the residual tar in the pyrolysis gas was collected by both condensation and absorption in the isopropanol solvent.
The pyrolysis gas was sampled at a flow rate of approximately 0.8 L/min for 48 min. After sampling, all of the isopropanol sampling solvent in the impinger bottles was mixed together, filtered and dried by a standard rotary evaporator in a water bath kept at 40 °C. Then, the flask was weighed accurately and the amount of residue, which was heavy tar, was determined. This measured heavy tar was defined as gravimetric tar. This tar measurement method has been well described in the previous work [8]. The ash product remained at the bottom of the pyrolyzer throughout the sampling time of 48 min. Char samples were subjected to thermal gravimetric analysis using a Shimadzu DTG-50 (Shimadzu Corp., Nakagyo-ku, Kyoto, Japan) simultaneous DTA-TG instrument. The analysis was divided into two stages, whose conditions are shown in Table 4. Remark: nitrogen was used until reaching the target temperature of 900 °C, and the nitrogen was changed to air after 7 min at this target temperature.
Surface Characterization
The analysis of the specific surface area was carried out using a Belsorp II high-precision surface area and pore size analyzer (BEL Japan, Inc., Osaka, Japan), using the Advanced Free Space Measurement (AFSM) principle. Prior to the measurements, the samples were pretreated in order to remove moisture by heating to 150 °C and holding for 2 h under vacuum. During the measurements, the pressure was raised at constant temperature and the physical adsorption of nitrogen gas on the sample was performed in order to measure the nitrogen adsorption isotherm of the sample. The specific surface area values of the samples were calculated based on BET theory from the nitrogen adsorption data.
Thermal Effects on Characterizations of Rice Husk Char
In biomass pyrolysis processes, the target product is normally biomass char or biochar. The major factors that affect the characteristics of the produced char are the pyrolysis temperature, the residence time at the target temperature and the heating rate [13]. Char is created mostly from the thermal decomposition of lignin and some extractive parts of the biomass, while the volatile matter is transformed into the gas phase and the minerals in the biomass are left as ash [14]. Hence, at the same heating rate and residence time, the pyrolysis temperature is the most influential factor for the product distribution.
In Figure 4, the characterization of the rice husk char produced at each pyrolysis temperature is presented. It can be seen clearly that, at higher pyrolysis temperatures, more volatile matter was forcibly expelled from the char particles and less volatile matter was left in the particles, with 21.94, 11.72 and 5.03 wt% at 600, 800 and 1000 °C pyrolysis temperature, respectively. At the same time, a higher pyrolysis temperature resulted in a higher fixed carbon content. The fixed carbon content of the char is the carbon found after the volatile matter has evaporated from the biomass char. Fixed carbon is determined by removing the mass of volatiles. Therefore, at higher pyrolysis temperatures, more volatiles were removed, which resulted in less volatile matter and more fixed carbon in the char particles, with 26.37, 34.33 and 38.88 wt% at 600, 800 and 1000 °C pyrolysis temperature, respectively.
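In a standard proximate analysis, fixed carbon is obtained by difference once moisture, volatile matter and ash are known; the sketch below shows that relation on a dry basis, using the volatile-matter and fixed-carbon values quoted above and treating the ash fraction as the balancing remainder (the text does not list the ash values explicitly here).

```python
def fixed_carbon_dry(volatile_wt: float, ash_wt: float) -> float:
    """Proximate analysis by difference (dry basis): FC = 100 - VM - ash."""
    return 100.0 - volatile_wt - ash_wt

# Reported VM and FC (wt%, dry basis); ash inferred as the remainder
for temp, vm, fc in [(600, 21.94, 26.37), (800, 11.72, 34.33), (1000, 5.03, 38.88)]:
    ash = 100.0 - vm - fc
    print(f"{temp} °C: VM={vm}, FC={fixed_carbon_dry(vm, ash):.2f}, ash≈{ash:.2f} wt%")
```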
Thermal Effect on Specific Surface Area of Rice Husk Char
As stated above, the pyrolysis process was carried out with a 1 h holding time and a 20 °C/min heating rate. These conditions were kept constant in order to compare the results obtained at the pyrolysis temperatures of 600, 800 and 1000 °C. Figure 5 presents the BET specific surface area values of the rice husk and rice husk chars. It is apparent that, after high thermal decomposition by the pyrolysis process, the rice husk char developed pores, with a dramatically increased specific surface area. The specific surface area values varied significantly with the pyrolysis temperature. The rice husk material has a very low specific surface area of 2.2 m²/g, while the rice husk char pyrolyzed at 600 °C showed the highest specific surface area of 141 m²/g. It decreased to 117 and 46 m²/g with the increase in the pyrolysis temperature to 800 °C and 1000 °C, respectively.
In general, the surface area of biomass char increases with the pyrolysis process, due to the generation of porosity. Since this experiment involved slow pyrolysis, more micropores were likely to be created compared to macropores [5]. Nevertheless, the consequence of a carbonization step at too high a temperature is as follows: excessive temperature damages the development of porous structures in the char, and the walls of the pores become so thin that they collapse, causing a reduction in the available surface area [15]. Therefore, the chars were found to have decreased specific surface area when the temperature was increased, as reported by other researchers [16-21]. Fu et al. [16] and Pastor-Villegas [22] also stated that above 500 °C, high temperature may cause structural shrinkage. Structural ordering and micropore coalescence during pyrolysis are responsible for the decrease in the surface area observed at too high temperatures, indicating thermal annealing and thermal deactivation of the chars. Too high a temperature leads to deformation of particles, resulting in smooth surfaces and large cavities [16,17,23,24].
The ash content was also examined. When the temperature increased, the ash content increased, which may sinter and block pores. In addition, ash does not contribute significantly to the surface area, and its presence can reduce it [25]. Moreover, from Figure 2, it can be seen that at 1000 °C the operating time is longer than at 800 °C and 600 °C, which caused the char to decrease in surface area; this could be explained by the fact that continued development of pyrolysis is detrimental to the pore structure of the char [6].
Tar Removal Ability Investigation
The main property of rice husk char focused on in this research is its adsorption capacity for tar removal. Figure 6 shows the concentration of heavy tar in the pyrolysis gas before (described as no cleaning) and after the adsorption bed. The adsorption bed was packed with three types of adsorbent, namely the rice husk chars produced at the pyrolysis temperatures of 600, 800 and 1000 °C. It is clearly seen that the heavy tar concentration in the pyrolysis gas significantly decreased after adsorption by rice husk char. The transfer lines were high-temperature-resistant tubes without any additional heaters. Tar condensation inside the transfer line was checked by the weight change of the tubes before and after the experimental operation. It was found that the weight difference was too small and was thus neglected.
In the present results, the 800 °C char showed the optimum tar removal performance, whereby the tar concentration could be reduced from 36.9 to 4.6 g/m³, which corresponds to 87.5% tar removal, whereas the 600 and 1000 °C chars presented performances of 82.9% and 81.6% tar removal, respectively. The adsorption capacity of rice husk char is related to the specific surface area and the fixed carbon content of the char.
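The quoted removal percentage is just the relative drop in tar concentration across the bed; the short sketch below reproduces the 87.5% figure from the reported inlet and outlet concentrations.

```python
def tar_removal_percent(c_in_g_m3: float, c_out_g_m3: float) -> float:
    """Removal efficiency = (C_in - C_out) / C_in * 100."""
    return (c_in_g_m3 - c_out_g_m3) / c_in_g_m3 * 100.0

# Reported heavy tar concentrations before/after the 800 °C char bed
print(f"{tar_removal_percent(36.9, 4.6):.1f}% removal")  # ≈ 87.5%
```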
A larger specific surface area means more channels for the tar molecules to be adsorbed physically by molecular attraction. At the same time, the fixed carbon content also plays an important role. The carbon surface holds tar molecules by the weak force known as the van der Waals force. Therefore, a higher fixed carbon content means a larger carbon surface for tar to be adsorbed on. The 600 °C char had a lower fixed carbon content, while the 1000 °C char had a very low specific surface area, making the 800 °C char the most favorable char for tar adsorption in this research.
In Figure 6, the percentage of tar removal in the char bed is noticeable. The main mechanism for this tar removal was adsorption. However, it was found that some part of the heavy tar condensed at the bottom of the char bed. Therefore, tar removal by rice husk char occurred not only by adsorption of tar molecules in the pores of the char material, but also in the form of condensation of heavy tar molecules from the hot gas when passing through the adsorption bed at ambient temperature. The char at the bottom of the adsorption bed, at the gas inlet, was found to be somewhat soaked by the condensed heavy tar. Nevertheless, most of the char was still dry after tar adsorption, which indicates that the main tar removal mechanism should be adsorption. This observation suggests that char adsorbent is more appropriate for light tar in dry gas, to prolong the lifetime of the char adsorbent. Moreover, this phenomenon confirmed that a heavy tar removal unit such as an oil scrubber must be installed before the char adsorption bed to remove heavy tars and moisture from the pyrolysis gas.

Figure 6. Concentration of heavy tar in the pyrolysis gas before (described as no cleaning) and after the adsorption bed of each type of char.
Analysis of Rice Husk Char after Tar Adsorption
Table 5 illustrates the proximate analysis and the specific surface area of each rice husk char adsorbent before and after tar adsorption. It can clearly be seen that after tar adsorption the rice husk chars have more volatile matter and less fixed carbon, because tar, which is volatile matter, has been adsorbed in the pores and on the surface of the chars. The 800 °C char showed the most significant volatile matter increase after tar adsorption, from 11.7% to 30.0%, indicating the greatest capacity for tar adsorption among the samples tested. Moreover, the specific surface area of the 800 °C char significantly decreased from 117 to 15 m²/g after tar adsorption, due to the adsorption of tar molecules on the pore surfaces and the resulting decrease in the surface area of the char particles.
Conclusions
Rice husks were inspected by thermoanalytical investigation to evaluate their capability as adsorbent media for tar removal. The pyrolysis process was applied to the rice husk material at different temperatures, 600, 800 and 1000 °C, at a 20 °C/min heating rate to investigate the influence of temperature. The experimental results have shown that rice husk char can be prepared for the purpose of tar removal. The pyrolysis temperature predominantly affected the properties of the char produced. The two char properties that influence tar adsorption capacity were the fixed carbon content and the specific surface area. The 600 °C char had less fixed carbon content, while the 1000 °C char had very low specific surface area, resulting in the 800 °C char being the most favorable char for tar adsorption in this research.
Figure 3. Experimental setup for tar removal investigation.
Figure 4. Characterization of rice husk char produced at each temperature (wt% dry basis).
Figure 5. BET specific surface area values of rice husk and rice husk chars.
Table 1. Characterization of rice husk feed stock.
Table 2. Experimental conditions for char preparation.
Table 3. Tar removal experimental conditions.
Table 5. Proximate analysis of each rice husk char adsorbent before and after tar adsorption.
"year": 2012,
"sha1": "7e8e2d35eaa6b59c139c97898570d0f694850940",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/5/12/4941/pdf?version=1426589471",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7e8e2d35eaa6b59c139c97898570d0f694850940",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
6803540 | pes2o/s2orc | v3-fos-license | Hybrid Optimized Back propagation Learning Algorithm For Multi-layer Perceptron
Standard neural network training based on general back propagation learning with the delta method or gradient descent has some serious shortcomings, such as poor optimization of the error-weight objective function, a low learning rate, and instability. This paper introduces a hybrid supervised back propagation learning algorithm which uses the trust-region method of unconstrained optimization of the error objective function by means of a quasi-Newton method. This optimization leads to a more accurate weight update system for minimizing the learning error during the learning phase of a multi-layer perceptron [13][14][15]. In this paper, an augmented line search is used for finding points which satisfy the Wolfe condition. This hybrid back propagation algorithm has strong global convergence properties and is robust and efficient in practice.
INTRODUCTION
The brain is the central processing unit of all living intelligent beings. It is made up of simple processing elements called neurons, which reside in the brain as a massively interconnected network. Inspired by this biological neural system, the artificial neural network model was proposed. Over the past fifteen years, a view has emerged that computing based on models inspired by our understanding of the structure and function of biological neural networks may hold the key to the success of solving intelligent tasks by machines [1]. A neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects: knowledge is acquired by the network through a learning process, and interneuron connection strengths known as synaptic weights are used to store the knowledge [2].
Several models of the artificial neuron have been proposed; some major models are the McCulloch-Pitts model [3], the Perceptron model [4], and the Adaline model [5]. A handful of learning laws are also available; the most important learning laws used for neural network learning are the Perceptron learning law [6], Hebb's learning law [1], the Delta learning law [2], the Widrow-Hoff LMS learning law [1], the Correlation learning law [1], and the Instar learning law [1].
The back propagation learning method for multi-layer perceptron networks has been used extensively over the last few decades in many fields of science and technology.

The main task of this back propagation learning algorithm can be divided into two sub-objectives: (a) feed-forward computation and (b) back propagation of error to minimize the total learning error of the network. In this method, the error is controlled by tuning the synaptic weight values defined between the neural nodes. There are several common practices for optimizing these weight vectors to minimize the total error of the network system; the most common among them is to use the gradient descent method in back propagation learning to optimize the weight vector and minimize the error accordingly. This gradient descent approach has some major disadvantages. One of them is getting stuck in local minima, which can mostly be avoided by using a learning rate, but that may sometimes cause the serious problem of overshooting. Another problem is the very slow convergence of the learning algorithm, which depends heavily on choosing the right value for the learning rate [17]. For these reasons, additional methods are available to aid standard back propagation learning; one of them is using a quasi-Newton optimization approach for the weight vector [18][19]. In this paper, a hybrid back propagation learning method which uses quasi-Newton optimization for optimized weight updating is discussed.
The BFGS method is a classical quasi-Newton method and is also one of the most effective algorithms for unconstrained optimization problems at present [7]. The BFGS algorithm (Nocedal and Wright, 1999) was developed independently by Broyden, Fletcher, Goldfarb, and Shanno. The basic principle in quasi-Newton methods is that the direction of search is based on an n × n direction matrix S which serves the same purpose as the inverse Hessian H^(-1) in the Newton method. This matrix is generated from available data and is contrived to be an approximation of H^(-1). Furthermore, as the number of iterations is increased, S becomes a progressively more accurate representation of H^(-1), and for convex quadratic objective functions it becomes identical to H^(-1) in n + 1 iterations [20]. Using quasi-Newton optimization (the BFGS Hessian update) to minimize the error in back propagation learning of multilayer perceptron networks proves to be very helpful, as demonstrated in this paper.
Back-Propagation Learning
The popularity of on-line learning for the supervised training of multilayer perceptrons has been further enhanced by the development of the back-propagation algorithm. For an illustration of back-propagation learning, please consider Figure-1.
Figure-1. Multi-layer perceptron network
There are three major layers in a multilayer perceptron network: the first layer is the input layer, where the input signal is fed to the nodes of the network (1, 2, ..., n); the second layer is the hidden layer (1, 2, ..., h); and the third is the output layer (1, 2, ..., o). The main computation of the system is done in the hidden layer, and the output layer provides the output of the network. There are also two bias nodes, b_hidd and b_out, connected to the hidden layer and the output layer respectively. W_inp,hidd denotes the synaptic weight values assigned between the input layer and the hidden layer of the network, and W_hidd,out denotes the synaptic weight values assigned between the hidden layer and the output layer. Controlling and minimizing the total error of the network is done by tuning these weight values. In this paper, the error is defined as the mean square error [9] between the original output and the network-simulated output.
Back-Propagation Algorithm (Gradient Descent) [10]
Initialize each weight to some small random value. Here the transfer function is the sigmoid transfer function, used for its continuous nature. The weights are then updated repeatedly along the negative gradient of the error, w ← w − η∇E(w), where η is the learning rate and ∇E(w) is the gradient of the error with respect to the weights.
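To make the update rule concrete, the following is a minimal Python sketch of gradient-descent back propagation for a single-hidden-layer network like the one in Figure-1. The layer sizes, learning rate, and loop structure are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_gd(X, Y, n_hidden=8, eta=0.1, epochs=1000, seed=0):
    """Train a one-hidden-layer MLP with plain gradient descent (delta rule)."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    # Initialize each weight (and bias) to some small random value.
    W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        # (a) Feed-forward computation.
        H = sigmoid(X @ W1 + b1)          # hidden-layer activations
        O = sigmoid(H @ W2 + b2)          # network output
        # (b) Back propagation of the mean-square error E = mean((O - Y)^2).
        dO = (O - Y) * O * (1 - O)        # delta at the output layer
        dH = (dO @ W2.T) * H * (1 - H)    # delta at the hidden layer
        # Gradient-descent update: w <- w - eta * dE/dw (averaged over samples).
        W2 -= eta * H.T @ dO / len(X); b2 -= eta * dO.mean(axis=0)
        W1 -= eta * X.T @ dH / len(X); b1 -= eta * dH.mean(axis=0)
    return W1, b1, W2, b2
```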
Quasi-Newton Method
The main objective of this method is to find a second-order approximation of the minimum of a function f(x). The Taylor series of f(x) around the current point x_k is given by f(x_k + Δx) ≈ f(x_k) + ∇f(x_k)^T Δx + (1/2) Δx^T H Δx, where ∇f is the gradient of the objective function f(x) and H is the Hessian matrix. The gradient of this approximation with respect to Δx is ∇f(x_k) + H Δx. The quasi-Newton method is based on the Newton method, which finds a stationary point of a function, where the gradient of the objective function is zero. Setting this gradient to zero yields the Newton step Δx = −H^(-1) ∇f(x_k). In practice, computing the Hessian that many times for a given objective function is very expensive in terms of memory and time, so the BFGS update formula is used to approximate the Hessian H by a matrix B. This Hessian approximation has to satisfy the secant equation B_(k+1) s_k = y_k, where s_k = x_(k+1) − x_k and y_k = ∇f(x_(k+1)) − ∇f(x_k). The initial value of B is approximated by B_0 = I, and the updated point is calculated by applying the Newton step x_(k+1) = x_k − α_k B_k^(-1) ∇f(x_k), using B_k, the current approximation of the Hessian matrix [11], where the step length α_k is chosen to satisfy the Wolfe condition [12].

Here ∇f(x_(k+1)) is the gradient at the new point, and the pair (s_k, y_k) is necessary for computing the updated Hessian approximation B_(k+1).

The BFGS update formula is used in this paper for updating the approximate Hessian matrix. The equation is given by B_(k+1) = B_k + (y_k y_k^T)/(y_k^T s_k) − (B_k s_k s_k^T B_k)/(s_k^T B_k s_k). In this paper this BFGS update equation is used for successive weight updates; here the gradient of the objective function is given by the first-order partial derivatives of the error cost function with respect to the corresponding weights. The proposed back-propagation algorithm is given below.
Back-Propagation Algorithm (Quasi-Newton with BFGS Update)
Initialize each weight to some small random value. Then, at every iteration: compute the error gradient with respect to the weight vector; update the approximate Hessian B using the BFGS formula above; and update the weight vector by the quasi-Newton step, with the step length chosen by the augmented line search so that the Wolfe condition is satisfied. Repeat until the learning error converges.
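As a hedged illustration of this quasi-Newton variant, the sketch below trains the same one-hidden-layer network but lets SciPy's BFGS implementation, which maintains the Hessian approximation and performs a Wolfe-condition line search internally, drive the weight updates. The flattening helpers and layer sizes are assumptions for the demonstration, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_bfgs(X, Y, n_hidden=8, seed=0):
    """Train a one-hidden-layer MLP by minimizing the MSE with BFGS."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    shapes = [(n_in, n_hidden), (n_hidden,), (n_hidden, n_out), (n_out,)]
    sizes = [int(np.prod(s)) for s in shapes]

    def unpack(w):
        parts, i = [], 0
        for s, n in zip(shapes, sizes):
            parts.append(w[i:i + n].reshape(s))
            i += n
        return parts  # W1, b1, W2, b2

    def error(w):
        W1, b1, W2, b2 = unpack(w)
        O = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
        return float(np.mean((O - Y) ** 2))  # mean-square learning error

    w0 = rng.normal(scale=0.1, size=sum(sizes))  # small random initial weights
    # method="BFGS" builds the (inverse-)Hessian approximation and chooses the
    # step length by a line search satisfying the Wolfe conditions.
    res = minimize(error, w0, method="BFGS")
    return unpack(res.x)
```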
EXPERIMENTAL RESULT
The proposed back-propagation learning algorithm was used to train a multi-layer perceptron network for universal function approximation. Its performance is measured in terms of the mean square error of the training and testing phases. The network was used to approximate two functions.

The first is the two-dimensional Beale function, f(x, y) = (1.5 − x + xy)^2 + (2.25 − x + xy^2)^2 + (2.625 − x + xy^3)^2,

with domain −4.5 ≤ x, y ≤ 4.5.

Here the number of variables is two. In testing, randomly generated values were used for these two variables; the corresponding original function values were then calculated using the said function, and a multi-layer perceptron network trained with the proposed algorithm was used to approximate the function.
Results are shown below. The test error is 1.3954% (simulated data are from the test data).

The training error is 0.10709% (simulated data are from the training data).
Figure-3. Plot of MLP simulated Beale function data.
From these two plots one can see that the original function data and the simulated data are very close. Figure-8 and Figure-9 give a clear view of the regression and performance of the MLP network.
CONCLUSION
In this paper, a hybrid optimized back propagation learning algorithm is proposed for successful learning of multilayer perceptron networks. This learning algorithm, which combines an artificial neural network with the quasi-Newton algorithm, is proposed for design optimization of function approximation. The method can determine optimal weights and biases in the network more rapidly than the basic back propagation algorithm or other optimization algorithms. Two modifications to the classical approach of the quasi-Newton method have been presented. It was shown that the hypotheses supporting those methods are relevant and desirable in terms of convergence properties. The method represents a clear gain in terms of computational time without a major increase in required memory space, making the approach suitable for large-scale problems. There is also no need to adjust parameters, as in the back-propagation algorithm, which makes our algorithm very easy to use. | 2012-12-07T18:47:40.000Z | 2012-12-07T00:00:00.000 | {
"year": 2012,
"sha1": "08fe4b99c951f96c5aea1ecc03ce08eb9b5ddd5f",
"oa_license": null,
"oa_url": "https://doi.org/10.5120/9749-3332",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "2be353fd09e7a61a7cb61271d8cb3348e17056b4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
247783353 | pes2o/s2orc | v3-fos-license | Serum Homocysteine Level Is Positively Correlated With Serum Uric Acid Level in U.S. Adolescents: A Cross Sectional Study
Background Physiologically, the levels of homocysteine (Hcy) and serum uric acid (SUA) are closely related; however, clinical studies on the relationship between Hcy and SUA have drawn different conclusions and have not analyzed this association among adolescents. This study therefore aimed to evaluate the relationship between Hcy and SUA levels among adolescents. Methods In this study, we performed a cross-sectional analysis of data from the National Health and Nutrition Examination Survey for the period 1999–2006, which included 5,404 adolescents aged 12–19 years. An elevated SUA level was defined as ≥5.5 mg/dL. Multivariate logistic regression and multivariate linear regression models were also applied in this study. Results The mean concentrations of Hcy and SUA were 6.0 μmol/L and 5.0 mg/dL, respectively, and 33.6% of the participants had SUA levels of ≥5.5 mg/dL. There was a dose–response relationship between Hcy and SUA, and Hcy was linearly positively correlated with SUA. The β value [95% confidence interval (CI)] for SUA in the fully adjusted model was 1.43 (95% CI: 1.18, 1.68). The multivariate logistic regression model showed that per 1 increment in log-transformed Hcy, the risk of elevated SUA levels increased by 8.80 times (odds ratio, 8.80, 95% CI: 4.25, 18.20). Subgroup analyses showed that the relationship between Hcy and SUA was significantly different according to sex, age, body mass index (BMI), and estimated glomerular filtration rate (eGFR) stratification (P for interaction <0.05). Conclusion Hcy levels were positively correlated with SUA levels and elevated SUA levels among U.S. teenagers, and this effect was more significant among boys aged ≥17 years and among people with lower BMI and eGFR.
Although clinical manifestations of CVDs appear in adulthood, there is evidence that these diseases may begin in childhood and adolescence (4). An observational study showed that more than 50% of children with hereditary homocysteinemia died due to premature vascular diseases and concluded that Hcy is a causal risk factor for CVDs in children (5). Previous studies have also shown that supplementation of B vitamins such as folic acid can reduce Hcy levels and prevent the occurrence and development of cardiovascular diseases (6)(7)(8)(9). It is essential to identify early modifiable risk factors for CVD among adolescents to prevent the occurrence and development of CVD in adulthood.
Serum uric acid (SUA), like Hcy, is a risk factor for CVD. Most studies have shown that an increase in the SUA level plays a vital role in CVD occurrence (10)(11)(12). SUA causes endothelial dysfunction, thus increasing oxidative stress and causing microvascular diseases, which can induce the proliferation of vascular smooth muscle cells and reduce the bioavailability of endothelial nitric oxide (13). According to relevant studies, an elevated SUA level is defined as ≥5.5 mg/dL (14,15). The potential mechanism underlying the relationship between uric acid and Hcy is as follows: the MeT cycle occurs in the human body; that is, MeT can be converted into S-adenosylhomocysteine (SAH), which can then be converted into Hcy and adenosine (16), Hcy receives a methyl group to regenerate MeT (17), and adenosine can be metabolized into uric acid (18). Hcy and uric acid levels could therefore be positively correlated according to the above physiological mechanisms. However, previous studies on the associations between Hcy and SUA are scarce, and most of them were carried out among healthy adults, patients with atherosclerotic cardiovascular disease (ASCVD), diabetes patients, and gout patients (19)(20)(21)(22)(23). Moreover, the above studies have reported inconsistent results on the association between Hcy and SUA.
To explore the above problems, this study used data from the National Health and Nutrition Examination Survey (NHANES) for the period 1999-2006 to evaluate the relationship between Hcy and SUA among US adolescents.
Study Population and Design
The NHANES is a population-based cross-sectional survey that collects information on the health and nutrition of American families. The project includes two parts: an in-home interview and physical examination. The survey was conducted at the participants' homes. The NHANES agreement was approved by the Review Committee of the National Center for Health Statistics Research Ethics. All adult participants provided written informed consent, and those under 18 years of age were required to submit the consent of their parents or guardians. The NHANES adopts a stratified multistage sampling design to obtain representative samples of American residents (24,25). More detailed information can be obtained from https://www.cdc.gov/ nchs/nhanes/index.htm. The NHANES dataset is available at DataDryad https://doi.org/10.5061/dryad.d5h62.
The data for this study were obtained from the NHANES database for the period 1999-2006. Fasting blood samples were collected from participants aged 12-19 years, including 8,374 teenagers. We excluded participants with missing Hcy values (n = 2,706), SUA values (n = 84), and dietary vitamin B12 intake (n = 180). Finally, 5,404 people were included in the final analysis (Supplementary Figure 1).
Data Collection
A questionnaire survey, anthropometric measurements, and fasting blood sample collection were conducted by professionally trained researchers in participants' homes following a standardized protocol. The questionnaire included questions on demographic characteristics such as sex, age, race, educational attainment (less than high school, high school, and high school or above), and dietary nutrition intake (vitamin B12 and vitamin B6 intake). Race/ethnicity included non-Hispanic whites, non-Hispanic blacks, Mexican Americans, other Hispanics, and other races. Anthropometric indicators included height, weight, and blood pressure (BP). Body mass index (BMI) was calculated as weight divided by height squared (kg/m2). After 8 h of fasting, venous blood samples including fasting blood glucose (FBG), total cholesterol (TC), triglyceride, serum creatinine, blood uric acid (SUA), blood urea nitrogen (BUN), C-reactive protein (CRP), serum vitamin B12, serum folic acid, aspartate aminotransferase (AST), alanine aminotransferase (ALT), and gamma-glutamyl transferase (GGT) were collected. A Zeeman background-corrected multielement atomic absorption spectrometer was used to measure the blood levels. The CRP level was measured by latex-enhanced nephelometry on a Dade Behring Nephelometer II Analyzer System (BNII). Using the Jaffe kinetic alkaline picrate method, serum creatinine levels were measured using a Roche Hitachi 917 or 704 multichannel analyzer in 2001 and Beckman Synchron LX20 in 2002. According to the advice of the National Health and Nutrition Examination Survey, the serum creatinine level was calibrated and standardized with a gold standard method. The formula for estimated glomerular filtration rate (eGFR) is different in different groups of people, and the Schwartz formula is used to calculate eGFR in adolescents. Males: eGFR = 0.7 × (height in cm)/(serum creatinine in mg/dL); Females: eGFR = 0.55 × (height in cm)/(serum creatinine in mg/dL) (26,27).
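To make the sex-specific formulas above concrete, here is a small hypothetical helper (not part of any NHANES tooling) that computes eGFR from the constants quoted in the text:

```python
def egfr_schwartz(height_cm: float, creatinine_mg_dl: float, male: bool) -> float:
    """Adolescent eGFR (mL/min/1.73 m2) using the constants quoted in the text."""
    k = 0.7 if male else 0.55  # sex-specific constant given above
    return k * height_cm / creatinine_mg_dl

# Example: a girl 160 cm tall with serum creatinine 0.6 mg/dL
# egfr_schwartz(160, 0.6, male=False)  ->  about 146.7
```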
Exposure Variable and Outcomes
The exposure variable in this study was Hcy, and two measurement methods were used to detect Hcy levels. In 2001, Abbott Homocysteine IMX (Hcy) Assay (Abbott Diagnostics, Abbott Park, IL, United States) was used, while during 2002-2006, Abbott Axsym System (Abbott Diagnostics, Abbott Park, IL, United States) was used. The long-term coefficient of variation of the NHANES from 2001 to 2006 was 3-5% of the total Hcy concentration. A cross-study of the two methods showed no significant difference between the two methods. Details of these testing and quality standards can be found at https://cdc.gov/ nchs/nhanes. The outcome variables were SUA and elevated SUA levels. SUA was measured using a Roche Hitachi 917 or 704 multichannel analyzer in 2001 and Beckman Synchron LX20 in 2002 using the colorimetric method. The distribution of uric acid results in laboratories in different periods was compared, and no significant difference was observed. Although there is no standard definition of hyperuricemia among adolescents, previous research reports show that an SUA level of ≥5.5 mg/dL was related to the risk of hypertension (14,15). Therefore, in this study, we defined an SUA level of ≥5.5 mg/dL as elevated SUA.
Statistical Analysis
The analysis was conducted according to the guidelines of the Centers for Disease Control and Prevention. Because the distribution of Hcy levels is skewed, log-transformed Hcy (LgHcy) was used in the analysis. The data are expressed as mean ± SD or proportions. We used the recommended weighting method in the data analysis, in view of the complex sampling design. Multivariate linear regression analysis and multivariate logistic regression analysis were used to evaluate the correlations among LgHcy, SUA, and elevated SUA levels. In addition, to ensure the robustness of the data analysis, we converted Hcy into tertiles and calculated the P-value for trend. Three regression models were established: Model 1 was adjusted for age, sex, BMI, SBP, and DBP. Model 2 was adjusted for all covariables in Model 1 plus non-Hispanic white, non-Hispanic black, Mexican American, other Hispanic, other races, and education. Model 3 was adjusted for all covariates in Model 2 plus FBG, triglycerides, TC, eGFR, BUN, CRP, serum folate, serum vitamin B12, ALT, AST, GGT, vitamin B12 intake, and vitamin B6 intake. To test the significant associations, a generalized additive model and fitted smoothing curves (penalized spline method) were used to further explore the shape of the dose-response relationship. In subgroup analyses using hierarchical logistic regression, possible modifications of the association between LgHcy and SUA were also evaluated for variables including sex (male vs. female), age (<17 vs. ≥17 years), BMI (<20.5 vs. 20.5-24.5 vs. ≥25 kg/m2), education attainment (less than high school vs. high school or higher), serum vitamin B12 (<452 vs. 452-623 vs. ≥623 pg/mL), serum folate (<10.1 vs. 10.1-14.3 vs. ≥14.3 ng/mL), and eGFR (<138 vs. 138-171 vs. ≥171 mL/min/1.73 m2).
All data were analyzed using the statistical software packages R and Empower(R) (X&Y Solutions, Boston, MA, United States). Differences were considered statistically significant at P < 0.05.
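As an illustration of how the core models could be reproduced in Python, the sketch below fits a linear model for SUA and a logistic model for elevated SUA with statsmodels; the column names are hypothetical, survey weights are omitted for brevity, and the paper's own analysis used R and Empower rather than this code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_hcy_sua_models(df: pd.DataFrame):
    """Linear model for SUA and logistic model for elevated SUA (>= 5.5 mg/dL)."""
    df = df.copy()
    df["lgHcy"] = np.log10(df["Hcy"])               # Hcy is skewed, so log-transform
    df["elevated_sua"] = (df["SUA"] >= 5.5).astype(int)
    covars = "age + C(sex) + BMI + SBP + DBP + C(race) + C(education)"  # Model 2 set
    linear = smf.ols(f"SUA ~ lgHcy + {covars}", data=df).fit()
    logistic = smf.logit(f"elevated_sua ~ lgHcy + {covars}", data=df).fit()
    or_per_unit = np.exp(logistic.params["lgHcy"])  # odds ratio per 1-unit LgHcy
    return linear, logistic, or_per_unit
```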
Baseline Characteristics
Based on the inclusion and exclusion criteria, 5,404 teenagers were included in this analysis, and the average age of participants in this study was 14.98 ± 2.01 years. Among these participants, 50.48% were boys, 25.44% were non-Hispanic white, 31.25% were non-Hispanic black, 35.47% were Mexican American, 4.13% were other Hispanic, and 3.70% were other races. The mean (SD) concentrations of Hcy and SUA were 6.0 (2.6) µmol/L and 5.0 (1.3) mg/dL, respectively, and 33.6% of the participants had SUA ≥ 5.5 mg/dL. The clinical characteristics of the study subjects are presented in Table 1. We found no significant difference in other Hispanic, other races, vitamin B12 intake, vitamin B6 intake, FBG, TC, triglyceride, and CRP in different Hcy groups. Compared with the other two groups, the participants in the group with higher Hcy were primarily males and older, with a higher proportion of non-Hispanic whites and blacks; higher education level; and higher levels of SUA, BUN, serum vitamin B12, AST, ALT, and GGT, but lower levels of SBP, DBP, eGFR, and serum folate (all P < 0.05).
Association of Serum Homocysteine With Serum Uric Acid
As shown in Figure 1A, there was a dose-response relationship between Hcy and SUA, and the results showed that Hcy was linearly positively correlated with SUA. The β values and 95% CIs for SUA in the three models are listed in Table 2. In model 1, the level of SUA increased by 1.65 mg/dL for each unit increase in Hcy. After further adjustment for age, sex, BMI, SBP, DBP, non-Hispanic white, non-Hispanic black, Mexican American, other Hispanic, other races, education, and other confounding factors, the results showed that the level of SUA increased by 1.62 mg/dL for each unit increase in Hcy. In the fully adjusted model 3, the results showed that the positive correlation between Hcy and SUA remained stable. To verify whether the results are stable, we further used Hcy as a categorical variable and observed its relationship with SUA. In the fully adjusted model 3, taking T1 of LgHcy as the reference, the estimated β of SUA in T2 and T3 participants increased by 0.14 mg/dL (95% CI: 0.08, 0.21) and 0.39 mg/dL (95% CI: 0.31, 0.46), respectively. Table 3 presents the relative odds of having elevated SUA levels. As shown in Figure 1B, the Hcy level was positively correlated with the risk of elevated SUA in adolescents. As shown in Table 3, in the fully adjusted model, per 1 increment in log-transformed Hcy, the risk of elevated SUA increased by 8.80 times (odds ratio, 8.80, 95% CI: 4.25, 18.20).
Subgroup Analyses by Potential Effect Modifiers
We performed a further stratified analysis to evaluate the effect of LgHcy on SUA in different subgroups. As shown in Figure 2, the relationship between LgHcy and SUA was significantly different according to sex, age, BMI, and eGFR stratification (P for interaction <0.05). However, the positive association between LgHcy and SUA was consistent in the following subgroups:
DISCUSSION
In this large, representative multi-ethnic cross-sectional study based on U.S. adolescents, it was shown that Hcy levels are positively correlated with SUA levels and elevated SUA levels.
In addition, subgroup analysis showed stronger associations between Hcy and SUA among boys aged 17 years or older and teenagers with low BMI and eGFR.
There are few previous studies on the associations between Hcy and SUA, and most of them were carried out among healthy adults, patients with ASCVD, diabetes patients, and gout patients (19)(20)(21)(22)(23). However, the above studies have presented inconsistent conclusions with regard to the association between Hcy and SUA. Boras et al. (20) explored the relationship between Hcy levels and SUA in 52 patients with type 2 diabetes mellitus complicated with acute myocardial infarction, and the results showed that Hcy was positively correlated with SUA. Kiseljaković et al. (19) conducted a cross-sectional study of 99 patients with ASCVD and 40 healthy participants. The average ages of the ASCVD group and the healthy group were 53.62 ± 1.17 years and 57.49 ± 1.71 years, respectively. The results revealed that the Hcy level was positively and independently correlated with SUA in both the ASCVD and healthy groups. A cross-sectional study conducted by Shih et al. (21) using data from community physical examinations in Taiwan Province in 2019 included 396 middle-aged and elderly patients aged 50-85 years. By contrast, Hcy levels were reported not to be related to SUA levels in gout patients (γ = −0.002, P = 0.988) (22). The inconsistency in the above conclusions may be due to differences in the study populations and in the confounding factors adjusted for. It is well known that adults may be more prone to developing diabetes, gout, ASCVD, or bad habits such as smoking and drinking. Even if these are adjusted for as confounding factors, the relationship between Hcy and SUA may be affected or concealed by the disease. In our study, however, we reported the relationship between Hcy levels and SUA among teenagers, an ideal population for evaluating the relationship between the two parameters. This study found that although the level of Hcy in adolescents is low, it has a positive correlation with SUA. Further research is needed to determine the optimal Hcy level in adolescents.
Because there is a MeT cycle in the human body, MeT can be converted into SAH, which can be converted into Hcy and adenosine (16), and Hcy accepts a methyl group to regenerate MeT (17). Adenosine can be metabolized into uric acid (18); therefore, SUA levels in the human body can indirectly reflect the level of Hcy. The kidneys excrete uric acid and Hcy at the same time. If the level of eGFR decreases, then Hcy will accumulate in the body, and this accumulation of Hcy will lead to kidney damage and a further decrease in the eGFR level (28)(29)(30). Compared with women, the level of Hcy in men is higher because of hormones (31,32). As an index of obesity, BMI is closely related to metabolic factors such as uric acid and Hcy. However, we found that the relationship between Hcy and uric acid was more significant in people with a lower BMI. According to a recent study, the concentration of Hcy is negatively correlated with BMI (33); therefore, the level of Hcy is lower in people with higher BMI, and in people with higher BMI, the increase in the SUA level with increasing Hcy is smaller.
This study had both advantages and limitations. The advantages of this study are as follows: First, this study is the first to explore the relationship between Hcy and SUA among American teenagers with low Hcy levels. Second, we adjusted for the most important potential confounding factors and effect modifiers. Finally, to reduce contingency in the data analysis and enhance the robustness of the results, we treated the independent variable as both a continuous and a categorical variable. However, the limitations are as follows: First, this is an observational cross-sectional study; hence, we cannot infer a causal relationship between the two parameters. Therefore, further prospective follow-up studies are needed to confirm the conclusions of this study. Second, this study collected Hcy data only once at baseline, and multiple measurements might make the results more accurate. Third, because of data collection limitations, dietary factors that affect the SUA level, for example, alcohol, meat, coffee, fruits and vegetables, and dairy products, were not adjusted for. However, we adjusted for many confounding factors in this study. Finally, this study was conducted among American teenagers, and whether its conclusions can be extended to other groups remains to be discussed.
CONCLUSION
In conclusion, there is a positive correlation between Hcy levels and SUA levels among U.S. teenagers, and this effect is more significant among boys aged ≥17 years and among people with lower BMI and eGFR.
DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. This data can be found here: https://www.cdc.gov/nchs/nhanes/index.htm.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Research Ethics Review Board of the National Center for Health Statistics. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
YS participated in literature search, study design, data collection, data analysis, data interpretation, and wrote the manuscript. ZW, JW, and ZC conceived the study, and participated in its design, coordination, data collection, and analysis. PL participated in study design and provided the critical revision. All authors read and approved the final manuscript. | 2022-03-30T13:55:27.407Z | 2022-03-29T00:00:00.000 | {
"year": 2022,
"sha1": "8b61d498826cb33da2f9967ba918432007e3f625",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "8b61d498826cb33da2f9967ba918432007e3f625",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1450168 | pes2o/s2orc | v3-fos-license | Assessment of Available and Stable Fluoride in Four Widely-Used Toothpastes in the Iranian Market
Objective: Presence of available and stable fluoride in a dentifrice formulation is a major requirement for an anti-caries effect. Although the available fluoride concentration in Iranian dentifrices has been reported in previous studies, there is little information on its stability; which is dependent upon dentifrice formulation. This study was done to assess the fluoride ion concentration and stability in four widely used dentifrices in Iran. Materials and Methods: In this analytical study, three samples of each brand of dentifrice (Nasim, Pooneh, Crest, and Signal) were purchased. Total fluoride (TF) and total soluble fluoride (TSF) concentrations were determined by ion specific electrodes. Data about TF were analyzed by one-way analysis of variance (ANOVA). Kruskal-Wallis and Mann-Whitney tests were used for nonparametric data (TSF). Results: All dentifrices had more than 1000 ppm of fluoride ions. TSF in Crest was significantly higher than in other dentifrices (P<0.0001) and was over the maximum permitted dose. Conclusion: The TF concentration in Iranian toothpastes was sufficient to prevent caries.
INTRODUCTION
The effect of fluoride (F) in caries prevention is well known. F-dentifrices are the most widely-accepted anti-caries substances [1]. It has been reported that F-containing toothpaste is the main factor responsible for caries reduction in developed countries [2, 3].
Thus, F-containing dentifrices are suggested for all populations [4]. Solubility of F in its ionic (free) form (F− of NaF) or ionizable form (MFP) ensures its bioavailability and abundance in the mouth. This ionized and bioavailable active form of F guarantees anti-caries activity [5,6].
To achieve high bioavailability, the chemical constituents, the type of F, and the type of abrasive substances in the dentifrice are important [7]. F-dentifrices contain sodium fluoride (NaF), monofluorophosphate (MFP), or their combination. Abrasive substances containing aluminum (Al) and Ca ions can reduce the amount of free fluoride in the presence of sodium fluoride (NaF) [8].

In the MFP molecule, F is conjugated with phosphate by a covalent bond; however, this bond is not stable and can release F ions, which react with Ca ions in Ca-based dentifrices [9]. This reaction results in the production of CaF2, which is insoluble and has no remineralization [10] or anti-caries effects [11]. It has been proven that in MFP-containing dentifrices, the binding of free F ions with Ca-based abrasives can inactivate the F ions [8]. Thus, the use of insoluble silicon dioxide (SiO2) or heated pyrophosphate as abrasives has been proposed [12]. In order to have an anti-caries effect, a dentifrice should contain at least 1000 ppm of bioavailable F [13,14]. The minimal amount of free F ions in a dentifrice should not be less than 60% of its total F content [10][11][12][13][14][15], and this amount should not exceed 1500 ppm [16].
Since a comprehensive study on fluoride availability and stability in widely used dentifrices in Iran is not available, the aim of the present study was to evaluate the available F concentration in four widely used dentifrices, which contain different types of fluoride compounds and abrasives.
MATERIALS AND METHODS
Study design: In this analytical study, TF and TSF concentrations were evaluated in four widely-used adult dentifrices in Iran. Table 1 shows the assessment data of the dentifrices. Three tubes of each brand were obtained and 2 cm of dentifrice was extracted from each tube. Each tube was then divided into 3 equal parts (top, middle, and bottom), and each part was stored in a plastic box. In order to carry out a blind analysis, a 3-letter code was randomly allocated to each sample. The TF and TSF concentrations were measured under identical laboratory conditions (temperature 20 °C). An ion specific electrode was used for all analyses.
Determination of TF and TSF concentrations:
The concentrations of TF and TSF were determined as explained in previous studies [17]. For TF determination, four grams of deionized water (DDW) was used to homogenize 1 gram (±0.01) of dentifrice; for determination of the TSF concentration, 0.1 gram (±0.01) of each dentifrice was weighed into scaled plastic centrifuge tubes. All tubes were homogenized in 0.9 g of DDW.
After 20 seconds of shaking, a homogeneous suspension was produced. The suspension was then centrifuged (3000×g, 10 min, room temperature) and the conjugated fluoride was removed. The supernatant was transferred to a plastic tube and 9.9 cc of DDW was added. Next, 0.25 cc of 2M HCl was added to both the TF and TSF samples. The mixture was kept at 50 °C for 10 minutes and then neutralized by adding 0.5 ml of 1M NaOH plus 1 ml of TISAB II (0.75M acetate buffer, pH=5, containing 1M NaCl and 0.4% CDTA). The analysis was performed using a fluoride-selective electrode (Thermo Orion, USA, 9609) and an Ion Meter (Thermo Orion, USA, 720A).
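For readers who want to back-calculate the fluoride content of the paste from the electrode reading, the dimensional bookkeeping can be wrapped in a small helper. This is purely illustrative: the final solution volume depends on the exact aliquots taken in the protocol above, so it is left as a parameter rather than hard-coded.

```python
def fluoride_ppm_in_paste(measured_ug_per_ml: float,
                          final_volume_ml: float,
                          paste_mass_g: float) -> float:
    """Back-calculate fluoride in the dentifrice (ppm = ug F per g of paste).

    measured_ug_per_ml: F concentration read from the ion-selective electrode
    final_volume_ml:    total volume of the analyzed solution after all dilutions
    paste_mass_g:       mass of dentifrice originally weighed in
    """
    total_ug_fluoride = measured_ug_per_ml * final_volume_ml
    return total_ug_fluoride / paste_mass_g

# Example with assumed numbers: 0.1 g of paste diluted to a 12.65 mL final
# solution reading 11.4 ug/mL corresponds to about 1442 ppm F in the paste.
```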
Statistical analysis:
The mean and standard deviation of TF and TSF concentrations in dentifrices were calculated and analyzed with SPSS version 11 software. One-way ANOVA was employed to analyze TF concentration in all groups, after confirming the normality of data distribution. The non-parametric Kruskal-Wallis and Mann-Whitney tests were used for the comparison of TSF between groups. P-value less than 0.05 was considered significant.
RESULTS
In order to perform a reliable evaluation of TF and TSF in the four mentioned dentifrices, three tubes of each brand were obtained and three samples were taken from each tube. The results of the Kruskal-Wallis and Mann-Whitney tests revealed that the amount of TSF in the Crest dentifrice was significantly greater than in the other dentifrices (P<0.001, Table 2, Figure 1). Comparative evaluation of the other dentifrices, Pooneh-Signal (P=0.233), Pooneh-Nasim (P=0.823), and Signal-Nasim (P=0.727), showed no significant difference regarding this variable. Comparison of both TF and TSF concentrations with the limits set by the International Standard Institute revealed that all dentifrices under study met these standards except for Crest. It is notable that the amounts of TF (1605.45 ppm) and TSF (2326.89 ppm) in Crest were above the maximum standard limit (1500 ppm).
DISCUSSION
Using fluoridated dentifrices is the most common method of dental caries prevention; it has caused a reduction in the prevalence of dental caries in all countries [18]. Two essential requirements need to be met in a fluoridated dentifrice, namely the availability and the stability of F. The abrasives used in dentifrices play an essential role in inactivating F ions [17,19,20]. The inactivation of F may lead to the formation of a low-solubility product with a decreased anti-caries effect [21]. In our study, the main abrasive was silica in all dentifrices except for Pooneh, which contained a Ca-based abrasive. Signal and Nasim contained Ca-based abrasives in addition to silica (Table 1).
TSF concentration was the highest in Crest (which contains only a silica abrasive) and the lowest in Pooneh (which contains only a Ca-based abrasive). The fact that the TSF concentration in Ca-containing dentifrices was less than the amount declared by their manufacturers indicates that Ca-based abrasives decrease F stability. In agreement with our findings, Filho et al. [17] and Cury et al. [21] showed that incompatibility between the abrasive agent (usually CaCO3) and the F type (usually MFP) leads to lower concentrations of TSF.

A study by Condeh et al. showed that silica-based dentifrices, either in combination with MFP or NaF, had a greater amount of soluble F/TSF. These results verify our findings.

On the other hand, the higher concentration of TSF in Crest led us to suppose that, with regard to F availability, NaF is better than MFP and SMFP; albeit this advantage of Crest may not be solely due to the presence of NaF, and the compatibility between NaF and silica may also play a role in this regard. NaF-containing dentifrices are often formulated with silica. Similar findings were obtained in previous studies [22]. However, others showed no significant difference between MFP and NaF [23]. In another study by Arnold et al. [24], the F availability and remineralization effect of SMFP were shown to be greater than those of NaF. In our study, we found that the TF and TSF concentrations in the Nasim, Pooneh, and Signal dentifrices were within the standard limit (1000-1500 ppm), but these concentrations were slightly higher than the maximum permitted limit in Crest. Thus, all the mentioned dentifrices had an optimal anti-caries effect, and none of them except Crest can lead to F overdose. In contrast, Hassanzadeh et al. [25] in 2004 demonstrated that the F ion concentration in most Iranian and some foreign-made dentifrices was less than the required threshold for an anti-caries effect, while the amount of soluble F ion in Crest and Signal was above this threshold.
Table 2. Mean (±SD) TF and TSF concentrations of each product.
One of the limitations of our study was the small number of evaluated dentifrices, which differed both in the type of abrasive and in the F compound; thus, separate evaluation of the effects of different types of abrasives and F compounds on F stability was not feasible. Two factors may be responsible for the superiority of TSF over TF in some dentifrices in this study and in other similar surveys [21]: 1) the presence of non-homogenized parts of a dentifrice due to separate sampling from different parts of the tube, and 2) a multistage laboratory system introducing more confounding factors.

Given the constraints of such studies and the small number of studies in this field, further surveys with larger sample sizes and more precise laboratory techniques are proposed.
CONCLUSION
The minimum amount of soluble fluoride required for an anti-caries effect was available in the dentifrices evaluated in this study. Furthermore, the stable form of F had a higher concentration in the silica/NaF-containing dentifrice (Crest) compared with the Ca/MFP-containing dentifrices (Pooneh, Nasim, and Signal). | 2017-06-18T00:33:36.015Z | 2014-09-01T00:00:00.000 | {
"year": 2014,
"sha1": "80c044d12f0a49c0df2f4d0522fab07d324c69e9",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "80c044d12f0a49c0df2f4d0522fab07d324c69e9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13454020 | pes2o/s2orc | v3-fos-license | Mathematical Innovations of a Modern Topology in Medical Events
The purpose of this paper is to introduce a new topology called Rough Topology in terms of rough sets and prove that rough topology can be used to analyze many practical/real life problems. Using this concept, we find the deciding factors for the most common diseases chikungunya and diabetes.
Introduction
Rough set theory, introduced by Zdzislaw Pawlak, is a mathematical tool for representing, reasoning, and decision making in the case of uncertain information. This theory deals with the approximation of sets or concepts by means of equivalence relations and is considered one of the first non-statistical approaches in data analysis. Several interesting applications of the theory have come up, in particular in Artificial Intelligence and Cognitive Sciences. The main advantage of rough set theory in data analysis is that it does not require any preliminary or additional information about the data. The main difference between rough sets and fuzzy sets is that rough sets have precise boundaries, whereas fuzzy set theory is generally based on ill-defined sets of data where the bounds are not precise, and hence fuzzy predictions tend to deviate from exact values. The lower and upper approximations of a set are analogous to the interior and closure operations in a topology generated by data. In this paper, we introduce a new topology called rough topology in terms of the lower and upper approximations of a rough set, and we apply the concept of a topological basis to find the deciding factors for chikungunya and diabetes.
Preliminaries
Definition 2.1 [6]: Let U be a non-empty finite set of objects called the universe and R be an equivalence relation on U called the indiscernibility relation. The pair (U, R) is called the approximation space. Let X be a subset of U, and let R(x) denote the equivalence class of x under R.
i) The lower approximation of X with respect to R is the set of all objects which can be classified with certainty as X with respect to R, denoted by R_*(X). That is, R_*(X) = {x ∈ U : R(x) ⊆ X}.
ii) The upper approximation of X with respect to R is the set of all objects which can possibly be classified as X with respect to R, denoted by R^*(X). That is, R^*(X) = {x ∈ U : R(x) ∩ X ≠ ∅}.
iii) The boundary region of X with respect to R is the set of all objects which can be classified neither as X nor as not-X with respect to R, denoted by B_R(X). That is, B_R(X) = R^*(X) − R_*(X).
The set X is said to be rough with respect to R if R^*(X) ≠ R_*(X), that is, if B_R(X) ≠ ∅.
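A minimal Python sketch of these approximation operators is given below; representing U by its equivalence classes is an implementation choice made here for illustration, not something prescribed by the paper.

```python
def lower_approx(classes, X):
    """R_*(X): union of the equivalence classes wholly contained in X."""
    X = set(X)
    out = set()
    for c in classes:
        if set(c) <= X:      # class can be classified as X with certainty
            out |= set(c)
    return out

def upper_approx(classes, X):
    """R^*(X): union of the equivalence classes that meet X."""
    X = set(X)
    out = set()
    for c in classes:
        if set(c) & X:       # class can possibly be classified as X
            out |= set(c)
    return out

def boundary(classes, X):
    """B_R(X) = R^*(X) - R_*(X)."""
    return upper_approx(classes, X) - lower_approx(classes, X)

# With the chikungunya classes given later, {P1},{P2,P3},{P4},{P5},{P6,P8},{P7},
# and X = {P1, P2, P6, P8} (one choice of positive patients consistent with the
# text), this reproduces the reported lower approximation {P1, P6, P8}, upper
# approximation {P1, P2, P3, P6, P8}, and boundary {P2, P3}.
```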
Proposition 2.2 [6]: If (U, R) is an approximation space and X and Y are subsets of U, then
i) R_*(X) ⊆ X ⊆ R^*(X);
ii) R_*(∅) = R^*(∅) = ∅ and R_*(U) = R^*(U) = U;
iii) R^*(X ∪ Y) = R^*(X) ∪ R^*(Y);
iv) R_*(X ∩ Y) = R_*(X) ∩ R_*(Y);
v) R_*(X ∪ Y) ⊇ R_*(X) ∪ R_*(Y);
vi) R^*(X ∩ Y) ⊆ R^*(X) ∩ R^*(Y);
vii) R_*(X) ⊆ R_*(Y) and R^*(X) ⊆ R^*(Y) whenever X ⊆ Y;
viii) R_*(X^c) = [R^*(X)]^c and R^*(X^c) = [R_*(X)]^c;
ix) R_*R_*(X) = R^*R_*(X) = R_*(X);
x) R^*R^*(X) = R_*R^*(X) = R^*(X).

Remark 2.3: R^*: P(U) → P(U) satisfies the Kuratowski closure axioms:
i) R^*(∅) = ∅;
ii) X ⊆ R^*(X);
iii) R^*(X ∪ Y) = R^*(X) ∪ R^*(Y);
iv) R^*R^*(X) = R^*(X) for all subsets X and Y of U.
If F = {X ⊆ U : R^*(X) = X}, then using conditions (i) to (iv) we see that ∅ and U are in F, X ∪ Y ∈ F whenever X and Y are in F, and ∩X_α ∈ F for every family {X_α} of members of F. Therefore, the family T of complements of members of F is a topology on U. Thus, F is the family of T-closed sets. Also, Cl(X) = R^*(X). Therefore, R^* is the Kuratowski closure operator.
Remark 2.4: Since R_*: P(U) → P(U) satisfies the following properties:
i) R_*(U) = U;
ii) R_*(X) ⊆ X;
iii) R_*(X ∩ Y) = R_*(X) ∩ R_*(Y);
iv) R_*R_*(X) = R_*(X) for all subsets X and Y of U,
the operator R_* is the interior operator.
Rough Topology
In this section, we introduce a new topology called rough topology in terms of the lower and upper approximations.
Remark 3.1: Let U be the universe of objects and R be an equivalence relation on U. For X ⊆ U, we define τ_R = {U, ∅, R_*(X), R^*(X), B_R(X)}, where R^*(X), R_*(X), and B_R(X) are respectively the upper approximation, the lower approximation, and the boundary region of X with respect to R. We note that U and ∅ ∈ τ_R. Since R_*(X) ⊆ R^*(X), the unions and finite intersections of members of τ_R again belong to τ_R.

Definition 3.2: Let U be the universe, R be an equivalence relation on U, and τ_R = {U, ∅, R_*(X), R^*(X), B_R(X)}, where X ⊆ U. τ_R satisfies the following axioms:
i) U and ∅ ∈ τ_R.
ii) The union of the elements of any subcollection of τ R is in τ R .
iii) The intersection of the elements of any finite subcollection of τ R is in τ R .
τ_R forms a topology on U called the rough topology on U with respect to X. We call (U, τ_R, X) the rough topological space. The collection B = {U, R_*(X), B_R(X)} forms a basis for τ_R.
Proof: Consider U and R_*(X) from B, and let W = R_*(X). Since U ∩ R_*(X) = R_*(X), we have W ⊆ U ∩ R_*(X), and every x in U ∩ R_*(X) belongs to W. If we consider U and B_R(X) from B, taking W = B_R(X), then W ⊆ U ∩ B_R(X) and every x in U ∩ B_R(X) belongs to W, since U ∩ B_R(X) = B_R(X). Finally, when we consider R_*(X) and B_R(X), we have R_*(X) ∩ B_R(X) = ∅. Thus, B is a basis for τ_R.
Definition 3.5: Let U be the universe and R be an equivalence relation on U. Let τ_R be the rough topology on U and β_R be the basis for τ_R. A subset M of A, the set of attributes, is called the core of R if β_(R−(r)) ≠ β_R for every r in M. That is, a core of R is a subset of attributes such that none of its elements can be removed without affecting the classification power of the attributes.
Rough Topology in Chikungunya
Here we consider the problem of chikungunya, a disease that is transmitted to humans by virus-carrying Aedes mosquitoes. There have been recent outbreaks of CHIKV associated with severe illness. It causes fever and severe joint pain. Other symptoms include muscle pain, headache, and nausea. The initial symptoms are similar to dengue fever. It is usually not life-threatening, but the joint pain can last for a long time and full recovery may take months. Usually the patient acquires lifelong immunity from infection, and hence re-infection is very rare. In recent decades the disease has spread to Africa and Asia, in particular the Indian subcontinent.
Consider the following information table giving data about 8 patients. The columns of the table represent the attributes (the symptoms of chikungunya) and the rows represent the objects (the patients). The entries in the table are the attribute values. The patient P5 is characterized by the value set (Joint pain, No), (Headache, Yes), (Nausea, Yes), (Temperature, High), and (Chikungunya, No), which gives information about the patient P5. In the table, the patients P1, P2, P3, P6, P7, and P8 are indiscernible with respect to the attribute 'Joint pain'. The attribute 'Joint pain' generates two equivalence classes, namely {P1, P2, P3, P6, P7, P8} and {P4, P5}, whereas the attributes 'Joint pain' and 'Headache' generate the equivalence classes {P1, P6, P7, P8}, {P2, P3}, {P4}, and {P5}. The equivalence classes for the attributes Joint pain, Headache, Nausea, and Temperature together are {P1}, {P2, P3}, {P4}, {P5}, {P6, P8}, and {P7}. For the set of patients having chikungunya, the lower approximation = {P1, P6, P8} and the upper approximation = {P1, P2, P3, P6, P8}, and hence the boundary region = {P2, P3}. Hence the patients P2 and P3 cannot be uniquely classified in view of the available knowledge. The patients P1, P6, and P8 display symptoms which enable us to classify them with certainty as having chikungunya. In our case, the symptoms Joint pain, Headache, Nausea, and Temperature are considered the condition attributes, and the disease chikungunya is considered the decision attribute. Not all condition attributes in an information system are necessary to determine the decision attribute before decision rules are generated. It may happen that the decision attribute depends not on the whole set of condition attributes but on a subset of it, and hence we are interested in finding this subset, which is given by the core. Here U = {P1, P2, ..., P8}.
Observation: From both cases, we conclude that 'Joint pain' and 'Temperature' are the key attributes necessary to decide whether a patient has chikungunya or not.
Rough Topology in Diabetes
Diabetes is a group of metabolic diseases in which a person has high blood sugar, either because the body does not produce enough insulin, or because cells do not respond to the insulin that is produced. In diabetes, glucose in the blood cannot move into cells, so it stays in the blood. This not only harms the cells that need the glucose for fuel, but also harms certain organs and tissues exposed to the high glucose levels. This high blood sugar produces the classical symptoms of polyuria (frequent urination), weight loss and polyphagia (increased hunger).
Consider the following table giving information about six patients. If X is taken as the set of patients not having diabetes, then again CORE(R) = {F}.
Observation: Since the core of R has F as its only element, 'Frequent Urination' is the key attribute that has a close connection to the disease diabetes.
The procedure applied in the above two cases can be put in the form of an algorithm as follows; a compact implementation is sketched after the steps.

Algorithm:
Step 1: Given a finite universe U, a finite set A of attributes divided into two classes, C of condition attributes and D of the decision attribute, an equivalence relation R on U corresponding to C, and a subset X of U, represent the data as an information table whose columns are labeled by attributes and whose rows by objects, the entries of the table being the attribute values.
Step 2: Find the lower approximation, the upper approximation, and the boundary region of X with respect to R.
Step 3: Generate the rough topology τ_R on U and its basis β_R.
Step 4: Remove an attribute x from C and find the lower and upper approximations and the boundary region of X with respect to the equivalence relation corresponding to C − (x).
Step 5: Generate the rough topology τ_(R−(x)) on U and its basis β_(R−(x)).
Step 6: Repeat steps 4 and 5 for all attributes in C.
Step 7: Those attributes x in C for which β_(R−(x)) ≠ β_R form the core of R.
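Below is a compact Python sketch of Steps 1-7. Encoding the information table as a dictionary mapping each attribute to its column of values is our own illustrative choice; the sample call in the comments refers to the chikungunya data described above.

```python
from itertools import groupby

def ind_classes(U, table, attrs):
    """Equivalence classes of the indiscernibility relation generated by attrs."""
    key = lambda x: tuple(table[a][x] for a in attrs)
    return [set(g) for _, g in groupby(sorted(U, key=key), key=key)]

def basis(U, table, attrs, X):
    """Basis beta_R = {U, R_*(X), B_R(X)} of the rough topology (Steps 2-3)."""
    X = set(X)
    lower, upper = set(), set()
    for c in ind_classes(U, table, attrs):
        if c <= X:
            lower |= c
        if c & X:
            upper |= c
    return frozenset({frozenset(U), frozenset(lower), frozenset(upper - lower)})

def core(U, table, C, X):
    """Steps 4-7: attributes whose removal changes the basis."""
    full = basis(U, table, C, X)
    return {x for x in C
            if basis(U, table, [a for a in C if a != x], X) != full}

# For the chikungunya table, with C = ["Joint pain", "Headache", "Nausea",
# "Temperature"] and X the set of patients with chikungunya, core(...) should
# return {"Joint pain", "Temperature"}, matching the observation above.
```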
Conclusions
In this work, we have shown that real-world problems can be dealt with using rough topology. The concept of a basis has been applied to find the deciding factors of a recent outbreak, chikungunya, which has been reported especially in South India, and of a chronic disease, diabetes. We found that joint pain and temperature are the deciding factors for chikungunya, and that frequent urination is the only deciding symptom for diabetes. It is also seen that, from a clinical point of view, the rough topological model is on par with the medical experts with respect to the diseases analyzed here. The proposed rough topology can be applied to more general and complex information systems in future research. The rough set model is based on the original data only and does not need any external information, unlike probability in statistics or grade of membership in fuzzy set theory. It is also a tool suitable for analyzing not only quantitative attributes but also qualitative ones. The results of the rough set model are easy to understand, while the results from other methods need an interpretation of the technical parameters. Thus, it is advantageous to use rough topology in real-life situations. | 2019-04-20T13:12:41.963Z | 2012-08-09T00:00:00.000 | {
"year": 2012,
"sha1": "0042ffbe329dd7761dc1def52ac7693586bb0f93",
"oa_license": "CCBY",
"oa_url": "http://www.sapub.org/global/showpaperpdf.aspx?doi=10.5923/j.ijis.20120204.01",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3c2616f594f417577212939776cd75c4cdb5013d",
"s2fieldsofstudy": [
"Mathematics",
"Medicine"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
207912102 | pes2o/s2orc | v3-fos-license | How should systematic reviewers handle conference abstracts? A view from the trenches
Background While identifying and cataloging unpublished studies from conference proceedings is generally recognized as a good practice during systematic reviews, controversy remains whether to include study results that are reported in conference abstracts. Existing guidelines provide conflicting recommendations. Main body The main argument for including conference abstracts in systematic reviews is that abstracts with positive results are preferentially published, and published sooner, as full-length articles compared with other abstracts. Arguments against including conference abstracts are that (1) searching for abstracts is resource-intensive, (2) abstracts may not contain adequate information, and (3) the information in abstracts may not be dependable. However, studies comparing conference abstracts and fully published articles of the same study find only minor differences, usually with conference abstracts presenting preliminary results. Other studies that have examined differences in treatment estimates of meta-analyses with and without conference abstracts report changes in precision, but usually not in the treatment effect estimate. However, in some cases, including conference abstracts has made a difference in the estimate of the treatment effect, not just its precision. Instead of arbitrarily deciding to include or exclude conference abstracts in systematic reviews, we suggest that systematic reviewers should consider the availability of evidence informing the review. If available evidence is sparse or conflicting, it may be worthwhile to search for conference abstracts. Further, attempts to contact authors of abstracts or search for protocols or trial registers to supplement the information presented in conference abstracts is prudent. If unique information from conference abstracts is included in a meta-analysis, a sensitivity analysis with and without the unique results should be conducted. Conclusions Under given circumstances, it is worthwhile to search for and include results from conference abstracts in systematic reviews.
Background
Systematic reviewers aim to be comprehensive in summarizing the existing literature addressing specific research questions. This generally involves a thorough search for published studies as well as for ongoing or recently completed studies that are not yet published. Ongoing and recently completed studies are often identified through searches of registries, such as ClinicalTrials.gov, and of conference proceedings. While identifying and cataloging unpublished studies from conference proceedings is generally recognized as a good practice during systematic reviews, controversy remains whether to include study results that are reported in conference abstracts. Current guidelines are conflicting. The United States Agency for Health Care Research and Quality (AHRQ), through its Effective Healthcare Program, recommends that searches for conference abstracts be considered, but Cochrane and the United States National Academy of Sciences (NAS) both recommend always searching for and including conference abstracts in systematic reviews [1][2][3]. Our objectives in this commentary are to summarize the existing evidence both for and against the inclusion of conference abstracts in systematic reviews and provide suggestions for systematic reviewers when deciding whether and how to include conference abstracts in systematic reviews.
Main text
Arguments for including conference abstracts in systematic reviews The main argument for including conference abstracts in systematic reviews is that, by doing so, systematic reviewers can be more comprehensive. In our recent Cochrane methodology review, we reported that the proportion of subsequent full publication of studies presented at conferences is low [4]. We examined 425 biomedical research reports that followed the publication status of 307,028 studies presented as conference abstracts addressing a wide range of medical, allied health, and health policy fields. A meta-analysis of these 425 reports indicated that the overall full publication proportion was only 37% (95% confidence interval [CI], 35 to 39%) for abstracts of all types of studies and only 60% (95% CI, 52 to 67%) for abstracts of randomized controlled trials (RCTs). Through a survival analysis, we found that, among the 181 reports that evaluated time to publication, only 46% of abstracts of all types of studies and 69% of abstracts of RCTs were published, even after 10 years. Thus, at best, approximately 3 in 10 abstracts describing RCTs have never been published in full, implying that the voluntary participation and risktaking by multitudes of patients have not led to fully realized contributions to science. We and others argue that the failure of trialists to honor their commitment to patients (that patient participation would contribute to science) represents an ethical problem [5,6].
From a systematic reviewer's perspective, even if the unpublished abstracts were a random 3 in 10 abstracts, restricting a systematic review search to only the published literature would amount to the loss of an immense amount of information and a corresponding loss of precision in meta-analytic estimates of treatment effect. However, publication is not a matter of random chance. Those conducting systematic reviews have long grappled with this problem, known as "publication bias." Publication bias occurs when either the likelihood of, or the time to, publication of a study is impacted by the direction of the study's results [6][7][8][9][10][11][12]. The most frequent scenario for publication bias is when studies with "positive" (or "significant") results are selectively published, or are published sooner, than studies with either null or negative results.
Publication bias can be conceptualized as occurring in two stages: (I) from a study's end to presentation of its results at a conference (and publication of an accompanying conference abstract) and (II) from publication of a conference abstract to subsequent "full publication" of the study results, typically in a peer-reviewed journal article [13]. In the context of publication bias arising during stage II (i.e., if abstracts with positive or significant results are selectively published in full), systematic reviews relying solely on fully published studies can be biased because positive results would be overrepresented. This would lead to a falsely inflated (or biased) estimate of the treatment effect of the intervention being evaluated in the systematic review. Indeed, in our Cochrane methodology review, we found evidence of publication bias in the studies reported in the abstracts [4]. "Positive" results were associated with full publication, whether "positive" was defined as statistically significant results (risk ratio [RR] = 1.31, 95% CI 1.23 to 1.40) or as results whose direction favored the intervention (RR = 1.17, 95% CI 1.07 to 1.28). Furthermore, abstracts with statistically significant results were published in full sooner than abstracts with non-significant results [14][15][16], unearthing another aspect of bias that can arise when a systematic review is performed relatively soon after the completion of a trial(s) testing a new intervention.
Arguments against including conference abstracts in systematic reviews
There are various arguments against including abstracts in systematic reviews. First, identifying relevant conferences, locating their abstracts, and sifting through the often thousands of abstracts can be challenging and resource-intensive. However, EMBASE, a commonly searched database during systematic reviews, now includes conference abstracts from important medical conferences, dating back to 2009 [17]. Inclusion of conference abstracts in this searchable database means searching for conference abstracts is less resource-intensive than in the past. Second, largely driven by their brevity, abstracts may not contain adequate information for systematic reviewers to appraise the design, methods, risk of bias, outcomes, and results of studies reported in the abstracts [18][19][20][21]. Third, the dependability of results presented in abstracts is also questionable [22][23][24], at least in part because (1) most abstracts are not peer-reviewed and (2) results reported in abstracts are often preliminary and/or based on limited analyses conducted in a rush to meet conference deadlines. The most frequent types of conflicting information between abstracts and full-length journal articles have pertained to authors or authorship order, sample size, and estimates of treatment effects (their magnitude or, less frequently, direction) [25][26][27][28][29][30][31]. Mayo-Wilson and colleagues examined the agreement in reported data across a range of unpublished sources related to the same studies in bipolar depression and neuropathic pain [21,32]. As part of this effort, they compared abstracts with full-length journal articles and clinical study reports and reported that the information presented in abstracts was not dependable in terms of either methods or results.
What are we missing if we do not include conference abstracts in a systematic review?
Various studies have questioned whether the inclusion of "gray" literature or unpublished study results in a systematic review would change the estimates of treatment effect obtained during meta-analyses. Through "meta-epidemiologic" studies, investigators have examined the results of meta-analyses with and without conference abstracts and have reported conflicting, but generally small, differences in results [21,24,33]. Evidence from a recent systematic review indicates that the inclusion of gray literature (defined more broadly than just conference abstracts) in meta-analyses may change the results from significant to non-significant or from non-significant to significant, or may not change the results [24,33]. We conducted a similar analysis in our Cochrane methodology review [4]. We were able to do this because some of our included reports that examined full publication of conference abstracts were themselves only available as conference abstracts. Our analysis found that inclusion of reports that were conference abstracts did not change the strength or precision of our meta-analytic results. In our review, it would have been possible to exclude conference abstracts and retain accurate and precise results.
Implications of reasons for non-publication of conference abstracts
The most common reason provided by authors of abstracts for not publishing their study results in full has been reported to simply be "lack of time," and not because the results were considered unreliable or negative [34]. This finding suggests that the identification of an abstract without a corresponding full-length journal article should prompt systematic reviewers to search for additional evidence, such as gray literature sources, and/or to contact the authors. However, a reasonable argument could be made that, when the same information is available in both a published peer-reviewed article and an abstract for a given study, including the abstract in a systematic review would be superfluous and/or ill-advised because a likely more comprehensive and dependable source of the information, i.e., the peer-reviewed article, is available. Therefore, the presence of a journal article might obviate the need for including a corresponding conference abstract in a systematic review, unless unique outcomes are reported in the abstract.
Considerations when including conference abstracts in systematic reviews
Taken together, the evidence reviewed in this paper (summarized in Table 1) suggests that systematic reviewers should take a more nuanced approach to inclusion of conference abstracts. A simple yes or no to the question "Should we include conference abstracts in our systematic review?" is neither sufficient nor appropriate. One aspect to consider is the scope of the review. For example, will the conference abstracts be used to inform policy based on a cadre of systematic reviews or only used within a single review? Benzie and colleagues evaluated the usefulness of including conference abstracts in a "state-of-the-evidence" review and concluded that including conference abstracts validated the results of a search that included only the published literature [35]. These authors discussed four considerations on which to base the decision to include conference abstracts: (1) complexity of the intervention, (2) consensus in the existing literature, (3) importance of context in evaluating the effect of the intervention, and (4) presence of other evidence [35]. Others who have incorporated conference abstracts for decision-making have noted that a lack of, or conflicting results in, published evidence often requires inclusion of conference abstracts [36]. In some instances, results in abstracts can confirm the evidence found in fully published studies, but in other instances, abstracts can provide useful additions to the evidence [37].
When considering the use of conference abstracts in systematic reviews, we largely agree with the recommendations presented in the AHRQ Methods Guide for Comparative Effectiveness Reviews [1]. Although these recommendations generally do not espouse including conference abstracts in systematic reviews, they provide excellent guidance on when including abstracts should be considered:
• Reviewers should routinely consider conducting a search of conference abstracts and proceedings to identify unpublished or unidentified studies.
• Consult the TEP [Technical Expert Panel] for suggestions on particular conferences to search and search those conferences specifically.
• Search for conference abstracts of any conference identified by reading the references of key articles.
• We do not recommend using conference abstracts for assessing selective outcome reporting and selective analysis reporting, given the variable evidence of concordance between conference abstracts and their subsequent full-text publications [1].
Our suggestions
Based on the empirical findings summarized in this review and on our experience, we believe that generally relying on conference abstracts is problematic for the various reasons discussed. While meta-epidemiologic studies have shown that inclusion of abstracts does not greatly impact meta-analytic results, it can sometimes make a difference. The dilemma facing a systematic reviewer is to determine when it might. We suggest the following approach (summarized in Fig. 1). If the evidence suggests a sizeable effect, or the absence of one (i.e., with the estimate of effect centered at or near the null), with reasonable precision, searching for conference abstracts may be unnecessary. On the other hand, if the evidence does not show a sizeable effect, is imprecise, or is conflicting, then the resources spent finding and including conference abstracts may be worth it. In other words, if only a single study in full-length form is identified, or if the studies identified are few and small, then conference abstracts should probably be searched and included. We refrain from making specific suggestions for what should be construed as a "sizeable" effect. Magnitudes of effect sizes and thresholds for what is considered relevant can vary considerably across outcomes and across fields and disciplines. We also refrain from making specific suggestions for what should be construed as "reasonable precision" because of the various problems inherent in the use of statistical significance (e.g., arbitrariness, dependence on sample size) and the arbitrary thresholds for precision that use of statistical significance can engender [38][39][40][41].
If abstracts are indeed included in a systematic review, the consistent use of CONSORT reporting guidelines for abstracts [14] would facilitate extraction of information from abstracts. In many cases, however, these reporting guidelines are not followed [42], so we suggest that diligent attempts be made to contact authors of the abstracts and to examine study registers, such as ClinicalTrials.gov, and published protocols to obtain all necessary unreported or unclear information on study methods and results. In addition, to examine the impact of including the abstracts, a sensitivity analysis should always be completed with and without conference abstracts.
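To make the suggested sensitivity analysis concrete, the short Python sketch below pools effect estimates with and without a study known only from a conference abstract, using fixed-effect inverse-variance weighting; the study labels, log risk ratios, and variances are invented for illustration and are not taken from any real review.

import math

# Hypothetical studies: (label, log risk ratio, variance, abstract_only)
studies = [
    ("Trial A (journal article)", -0.22, 0.010, False),
    ("Trial B (journal article)", -0.15, 0.020, False),
    ("Trial C (conference abstract)", -0.40, 0.050, True),
]

def fixed_effect_pool(rows):
    # Inverse-variance fixed-effect pooling of log effect estimates.
    weights = [1.0 / v for (_, _, v, _) in rows]
    est = sum(w * y for w, (_, y, _, _) in zip(weights, rows)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, est - 1.96 * se, est + 1.96 * se

for label, rows in [("All studies", studies),
                    ("Journal articles only", [r for r in studies if not r[3]])]:
    est, lo, hi = fixed_effect_pool(rows)
    print(f"{label}: RR = {math.exp(est):.2f} "
          f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")

Comparing the two pooled lines shows directly whether the abstract-only result changes the effect estimate, its precision, or both.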
Conclusions
Based on the available evidence and on our experience, we suggest that instead of arbitrarily deciding to include conference abstracts or not in a systematic review, systematic reviewers should consider the availability of evidence. If available evidence is sparse or conflicting, it may be worthwhile to include conference abstracts. If results from conference abstracts are included, then it is necessary to make diligent attempts to contact the authors of the abstract and examine study registers and published protocols to obtain further and confirmatory information on methods and results. | 2019-11-07T21:39:21.742Z | 2019-11-07T00:00:00.000 | {
"year": 2019,
"sha1": "7b0e4bacc5e2ec34da0cd3f18387537b908d043e",
"oa_license": "CCBY",
"oa_url": "https://systematicreviewsjournal.biomedcentral.com/track/pdf/10.1186/s13643-019-1188-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7b0e4bacc5e2ec34da0cd3f18387537b908d043e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248823021 | pes2o/s2orc | v3-fos-license | 5G Converged Network Resource Allocation Strategy Based on Reinforcement Learning in Edge Cloud Computing Environment
Aiming at the problem that the computing power and resources of Mobile Edge Computing (MEC) servers struggle to process long-period, intensive task data, this study proposes a 5G converged network resource allocation strategy based on reinforcement learning in an edge cloud computing environment. In order to solve the problem of insufficient local computing power, the proposed strategy offloads some tasks to the edge of the network. Firstly, we build a multi-MEC-server, multi-user mobile edge system and design optimization objectives to minimize the average response time of system tasks and total energy consumption. Then, the task offloading and resource allocation process is modeled as a Markov decision process. Furthermore, the deep Q-network is used to find the optimal resource allocation scheme. Finally, the proposed strategy is analyzed experimentally based on the TensorFlow learning framework. Experimental results show that when the number of users is 110, the final energy consumption is about 2500 J, which effectively reduces task delay and improves the utilization of resources.
Introduction
With the continuous development of technologies such as 5G, the amount of data in various emerging application scenarios has increased exponentially. There are more and more Internet of Things (IoT) devices in various fields such as telemedicine, smart car driving, and smart cities, so all kinds of computing are everywhere [1]. However, it is difficult for existing cloud computing models to manage these large-scale computing resources and perform data analysis.
This is mainly reflected in two respects. First, transferring large-scale data to the cloud computing center poses severe challenges to network performance and to the computing power of cloud computing infrastructure [2,3]. Second, it is difficult for a cloud far away from users to meet the stringent requirements of new applications such as autonomous driving on network delay and response speed [4]. Thus, both computing services and big data sources are undergoing a shift from cloud to edge [5].
Edge computing serves as an intermediate layer between the cloud computing center and user devices. It provides computing resources to users near the edge via a high-speed network by placing edge servers close to the user end [6]. Specifically, the user device sends computing tasks that would originally be sent to the cloud or executed locally to the edge server for execution, so as to achieve a reasonable allocation of network resources; this is called computation offloading [7]. Compared with cloud servers and local computing, edge computing can provide faster network response and more powerful computing capabilities [8].
Therefore, computation offloading and the reasonable allocation of network resources through an appropriate scheduling algorithm can help users save transmission energy consumption and improve computing efficiency [9].
In the edge computing system, for security and efficiency reasons, the edge server does not expose its own computing resource configuration and idle state to each user device, so it is difficult to obtain the detailed status of the system [10,11]. Under the constraint of an incompletely observable system, task offloading and system optimization problems become more complicated. Intelligent models represented by deep reinforcement learning are an important means of solving such problems [12]. Reference [13] developed a multi-agent reinforcement learning network to solve the Q-learning problem based on independent learners, and designed a computation offloading strategy for the IoT through stochastic games. However, the efficiency of its resource allocation strategy needs further improvement. Reference [14] proposed a mobile edge computing (MEC) network based on blockchain, which uses the blockchain to control the coverage system and adopts an adaptive strategy to generate blocks and realize high-quality resource allocation. Reference [15] used deep Q-network (DQN) learning to obtain the best resource allocation scheme in an IoT network. However, frequent data interaction brings a high network load, which becomes the main obstacle to training intelligent offloading models, especially computation offloading models based on deep learning.
Traditional methods have also been applied to computing task offloading and network resource allocation. For example, reference [16] solved the task offloading problem based on a differential evolution algorithm, so as to realize the efficient execution of tasks, but it requires higher network bandwidth. Reference [17] designed a stochastic mixed-integer nonlinear programming method for intensive task offloading and resource allocation in MEC, which can realize the rational use of resources but cannot take into account both energy efficiency and service delay. Reference [18] used orthogonal and non-orthogonal multiple access methods to formulate a resource allocation scheme considering energy consumption and efficiency in MEC, but the overall delay needs to be further reduced. Reference [19] proposed a multi-objective resource allocation method for MEC, which uses a Pareto archived evolution strategy to optimize time cost and load balancing; it also combined multi-criteria decision-making with the technique for order preference by similarity to ideal solution (TOPSIS) to obtain the optimal resource allocation, but a 5G convergence scheme is not considered.
Aiming at the problem that the large amount of data transmitted in a 5G network leads to channel congestion, which affects the real-time performance and energy consumption of communication, a 5G converged network resource allocation strategy based on reinforcement learning in an edge cloud computing environment is proposed. Because basic reinforcement learning learns poorly on massive data, we propose a DQN-based offloading strategy to solve the resource allocation of 5G converged networks, which reduces time delay while also reducing system energy consumption. Finally, experimental results based on the TensorFlow learning framework show that the proposed strategy fully considers the time and energy consumption of local execution and of offloading to the MEC, and that solving the offloading scheme by reinforcement learning can greatly reduce delay and energy consumption. Its energy consumption is about 2500 J, and the time delay does not exceed 7 s. DQN has a self-learning ability and continuously learns during the training process to improve the accuracy of decision-making. Therefore, it can effectively reduce the load and the bandwidth utilization rate.
System Scenario.
The system scenario is shown in Figure 1, consisting of N users, M base stations, and multiple MEC servers. Each user is associated with the nearest base station through a wireless link and sends task requests to it. At the same time, each base station is equipped with an MEC server with multiple CPU cores. Therefore, an MEC server can process the computing tasks of different users in parallel. It is assumed that each user's computing tasks are processed by a single MEC server, without considering the case in which computing tasks are forwarded between MEC servers.
The system running time is divided into a number of time slots, and T = {0, 1, 2, ...} denotes the set of time slots of network operation, where the length of each time slot t is defined as τ. It is assumed that most of a user's computing tasks can be processed and completed within one time slot. Because of the large amount of data, some computing tasks are divided into subtasks for processing [20]. Considering the randomness of task arrival, a two-level queue model is designed to describe the state of computing tasks, namely the user task queue model and the MEC server task queue model.
Task Generation Model
In the MEC model, it is assumed that the time intervals at which mobile users generate tasks obey a Poisson distribution, and user n generates k_n mutually independent tasks, defined as K_n = {1, 2, ..., k_n}. The attributes of task i are defined as

task_i = (id_u, id_i, sub_i, d_i, c_i, mem_i, cpu_i)

where id_u represents the identity (id) of user n who generated task i, id_i represents the id of the task, and sub_i represents the time when the user submits the task. d_i (in bits) represents the amount of task data, c_i (in CPU cycles/bit) represents the number of CPU cycles required to compute one bit of task data, and l_i = d_i · c_i. mem_i and cpu_i, respectively, represent the memory and CPU resources required by the computing task. Users are mobile and may be located near different base stations at different points in time. Thus, tasks generated by the same user may be offloaded to servers in different base stations for processing.
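As a concrete reading of the task model above, the Python sketch below encodes the attribute tuple as a dataclass; the field names mirror the paper's notation, while the class name, the arrival rate, and the example values are illustrative assumptions.

from dataclasses import dataclass
import random

@dataclass
class Task:
    id_u: int      # id of the user who generated the task
    id_i: int      # id of the task itself
    sub_i: float   # submission time
    d_i: float     # amount of task data (bits)
    c_i: float     # CPU cycles required per bit
    mem_i: float   # memory required by the task
    cpu_i: float   # CPU resources required by the task

    @property
    def l_i(self) -> float:
        # Total CPU cycles: l_i = d_i * c_i
        return self.d_i * self.c_i

# Poisson task arrivals correspond to exponential inter-arrival times.
gap = random.expovariate(2.0)  # assumed rate: 2 tasks per slot
task = Task(id_u=1, id_i=7, sub_i=gap, d_i=1e6, c_i=1e3, mem_i=128.0, cpu_i=1.0)
print(task.l_i)  # 1e9 CPU cycles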
Local Calculation Model.
Mobile users themselves have certain computing capabilities. If a user has sufficient computing resources, tasks can be processed locally. The computing power of local device n is represented by its CPU frequency, defined as f_{n,l}. The processing time of a task in the local calculation model only considers the calculation time. Therefore, the local processing time of task i generated by user n is defined as

t_{i,l} = l_i / f_{n,l}

The power and energy consumption of task i processed locally by user n are, respectively, defined as

p_{i,n} = c · f_{n,l}^3,  E_{i,n} = c · f_{n,l}^2 · l_i

where c is the effective switch capacitance.
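A minimal Python sketch of the local-computation formulas above follows; the frequency, cycles-per-bit, and effective switched-capacitance values are illustrative assumptions, not the paper's experimental parameters.

def local_cost(d_i, c_i, f_nl, cap):
    l_i = d_i * c_i              # total CPU cycles for task i
    t_local = l_i / f_nl         # local processing time t_{i,l} = l_i / f_{n,l}
    p_local = cap * f_nl ** 3    # dynamic CPU power p = c * f^3
    e_local = p_local * t_local  # energy E = c * f^2 * l_i
    return t_local, p_local, e_local

# Example: a 1 Mbit task at 1000 cycles/bit on a 1 GHz device.
t, p, e = local_cost(d_i=1e6, c_i=1e3, f_nl=1e9, cap=1e-27)
print(f"time = {t:.2f} s, power = {p:.2f} W, energy = {e:.2f} J")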
Edge Computing Model.
Due to the insufficient computing resources of local devices, a large number of tasks generated by users cannot be processed in the local computing model, and some tasks need to be offloaded to the edge computing model for processing [21]. When a task is executed on the MEC, the transmission time and calculation time need to be considered; because the amount of data returned by a task is very small, the time to return results is not included in the transmission time. Before calculating the transmission time, the transmission rate from user device n to base station m is first defined as

v_{n,m} = B · log2(1 + p_n · h_{n,m} / δ_0)

where B is the communication bandwidth, p_n is the transmission power of user n, δ_0 is the noise power spectral density at base station m, and h_{n,m} is the channel gain between user n and base station m. The time for task i generated by user device n to be offloaded to server j of base station m for processing is defined as

t_{i,off} = d_i / v_{n,m} + l_i / f_j^m

where f_j^m is the CPU frequency of server j on base station m. Similarly to the time calculation, the energy consumption of task i generated by user n and offloaded to server j of base station m for processing is defined as

E_{i,off} = p_n · d_i / v_{n,m} + e_j^m · d_i

where e_j^m is the energy consumption required to compute one bit of data.
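The offloading side of the model can be sketched the same way; the uplink rate follows the Shannon-style formula above, and every numeric parameter below (bandwidth, transmit power, channel gain, noise density, server frequency, per-bit compute energy) is an assumed value for illustration.

import math

def uplink_rate(B, p_n, h_nm, delta0):
    # v_{n,m} = B * log2(1 + p_n * h_{n,m} / delta_0)
    return B * math.log2(1.0 + p_n * h_nm / delta0)

def offload_cost(d_i, c_i, B, p_n, h_nm, delta0, f_mj, e_mj):
    v = uplink_rate(B, p_n, h_nm, delta0)
    t_tx = d_i / v                    # uplink transmission time (result return ignored)
    t_exec = (d_i * c_i) / f_mj       # computation time on MEC server j
    e_off = p_n * t_tx + e_mj * d_i   # transmit energy + per-bit compute energy
    return t_tx + t_exec, e_off

t, e = offload_cost(d_i=1e6, c_i=1e3, B=10e6, p_n=0.5,
                    h_nm=1e-5, delta0=1e-9, f_mj=10e9, e_mj=1e-7)
print(f"offload time = {t:.3f} s, offload energy = {e:.3f} J")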
Optimization Goal.
The optimization goal is to reduce the average response time and total energy consumption of tasks in the MEC environment, improve the quality of user service, and save system energy cost [22]. The execution of tasks on computing nodes is constrained by the network hardware environment [23]. Suppose that the maximum number of tasks that can be executed in parallel on a computing node is Γ. If the number of tasks is less than Γ, new tasks can be received; otherwise, new tasks must wait until executing tasks release their resources. In addition, a new task can be processed only when the network resources required by the executing tasks and the new task together are less than the total resources. The objective optimization problem is expressed as

min ( (1/|K|) · Σ_i T_i^∞ + Σ_i E_i )
s.t. q ≤ Γ, Σ_i mem_i ≤ C_1, Σ_i cpu_i ≤ C_2

where q is the number of simultaneous tasks, C_1 and C_2 are the memory and CPU capacities respectively, T_i^∞ is the completion time of task i, and E_i = E_{i,n} for local computing or E_{i,off} for offloading computing.
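One way to read these constraints is as an admission test run by each computing node; the hypothetical helper below (the name can_admit and the dictionary layout are not from the paper) checks the parallelism bound Γ and the memory and CPU capacities C_1 and C_2.

def can_admit(running, new_task, gamma, c1_mem, c2_cpu):
    # At most gamma tasks may execute in parallel on the node.
    if len(running) >= gamma:
        return False
    # Aggregate demand of running tasks plus the new task must fit capacity.
    mem = sum(t["mem"] for t in running) + new_task["mem"]
    cpu = sum(t["cpu"] for t in running) + new_task["cpu"]
    return mem <= c1_mem and cpu <= c2_cpu

print(can_admit([{"mem": 2, "cpu": 1}], {"mem": 1, "cpu": 1},
                gamma=4, c1_mem=4, c2_cpu=4))  # True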
Solutions Based on Deep Reinforcement Learning
The task offloading and resource allocation process is modeled as a Markov decision process defined by the tuple (S, A, ψ, R):
(1) S is the system state collection. For an incompletely observable system, the set used by the edge server to describe the system state only includes the basic information of the edge servers: S = {S_1, S_2, ..., S_J}, where each S_j is a 5-tuple. (2) a_n^t ∈ A, where A is a finite set of actions, namely the computation offloading actions. The set includes the users who decide to offload at time t, and the action of user n at time t is recorded as a_n^t. When a_n^t = 0, user n executes locally; when a_n^t = 1, user n offloads the task to the MEC.
(3) ψ is the state transition matrix, corresponding to the mapping S × A × S → [0, 1]; that is, ψ gives the probability of transitioning to the next state after action A is executed in state S.
(4) R is the reward function. When a user needs to offload, the offloading action receives a positive reward. When a decision overloads the system, a negative reward, or penalty, is given.
Reinforcement learning obtains rewards through the reward function r_t at time t. In a partially observable system environment, the remote server can only obtain information about tasks that have already been offloaded to it [24,25]. Therefore, the amount of computation saved is regarded as the reward for an offloading action. In order to better manage the use of system resources, a punitive reward is also set. The punitive reward is set to the negative of the absolute value of the current system reward, which ensures that the punitive reward is always negative [26,27]. The punitive reward is expressed as

r_penalty = −|r_t|

A Markov process corresponds to a sequence of system state transitions; that is, a trajectory sequence Ξ = ⟨s_0, a_0, s_1, a_1, ...⟩ containing states and actions can be obtained. A strategy π corresponds to the mapping S × A → [0, 1]. Deep reinforcement learning maximizes the expected cumulative reward of Ξ during the training process to find the optimal strategy π.
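The reward shaping described above can be illustrated with a short sketch; how the saved cost is computed and when the system counts as overloaded are assumptions stated in the comments.

def reward(saved_cost, overloaded):
    # saved_cost: computation (time/energy) saved by offloading, e.g.
    # local cost minus offloading cost; the exact definition is assumed.
    if overloaded:
        return -abs(saved_cost)  # punitive reward: always negative
    return saved_cost

print(reward(saved_cost=1.2, overloaded=False))  # 1.2
print(reward(saved_cost=1.2, overloaded=True))   # -1.2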
DQN-Based Offload Strategy.
The training process of the DQN-based offloading strategy is shown in Figure 2.
According to the above figure, the pseudocode of the algorithm based on the DQN offloading strategy is shown in Algorithm 1.
Input: Target resampling strategy S, Reward function R: S × A × g → R
Begin
(1) Initialize the replay pool
(2) For episode = 0, 1, 2, ..., m do
      Initialize a state s_0 and a target g
(3)   For t = 0, 1, 2, ..., T−1 do
        Use the behavior strategy to select an action a_t
        Execute action a_t and observe the new state s_{t+1}
(4)   End for
(5)   For t = 0, 1, 2, ..., T−1 do
        Calculate the immediate reward r_t
        Store the experience (s_t, g, a_t, r_t, s_{t+1}) in the replay pool
        Resample a batch of targets G using the target resampling strategy S
(6)     For g′ ∈ G do
          Calculate the new immediate reward r′
          Store the new experience (s_t, g′, a_t, r_t′, s_{t+1}) in the replay pool
(7)     End for
(8)   End for
(9)   For t = 0, 1, 2, ..., N do
        Sample a minibatch from the replay pool
        Calculate the loss function and update the network parameters
(10)  End for
(11) End for
End
Algorithm 1: Pseudocode of the offloading strategy based on DQN.
Based on the DQN algorithm, two neural network structures are used: the current Q-value network and the target Q-value network. The two have the same neural network structure, but the parameters of their respective structures are different. θ denotes the parameters of the current Q-value network, and θ′ denotes the parameters of the target Q-value network. The DQN algorithm fits the action value function Q(s_t, a_t; θ) through the Q-value network with parameter θ, which is calculated as follows:

Q(s_t, a_t; θ) = E[r_t + χ · max_{a_{t+1}} Q(s_{t+1}, a_{t+1}; θ′)]

where χ ∈ [0, 1] is the reward discount factor. The optimal action is then selected based on the value of each action generated by the Q-value network:

a_t = argmax_a Q(s_t, a; θ)    (10)

To avoid converging to a local optimum when selecting an action, the ε-greedy strategy is used: an action is selected at random with a small probability ε, and the optimal action is selected according to (10) with probability 1−ε, so as to obtain the reward value r_t and the next state s_{t+1}. The quadruple (s_t, a_t, r_t, s_{t+1}) is then placed into the experience replay library, and a batch of (s_t, a_t, r_t, s_{t+1}) samples is fed into the neural network for training. When action a_t is executed, the Q-value corresponding to a_t is updated according to the Bellman formula:

y_t = r_t + χ · max_{a_{t+1}} Q(s_{t+1}, a_{t+1}; θ′)

The loss function is then minimized to update the parameters of the current Q-value network. The loss function represents the squared-error loss between the predictions of the current Q-value network and the target Q-value network; the smaller its value, the better the neural network is optimized. It is generally expressed as

L(θ) = E[(y_t − Q(s_t, a_t; θ))²]

Then the target Q-value network is updated with a delay.
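The update equations above can be exercised end-to-end with a toy example; the sketch below substitutes a tiny linear Q-function in NumPy for the paper's TensorFlow networks, so the state dimension, action set, and hyperparameters are illustrative assumptions, while the ε-greedy selection, the Bellman target from the delayed target network, and the squared-error update follow the formulas given.

import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS = 5, 2        # assumed sizes; action 0 = local, 1 = offload
chi, lr, eps = 0.9, 0.01, 0.1      # discount factor, learning rate, epsilon

theta = rng.normal(0.0, 0.1, (STATE_DIM, N_ACTIONS))  # current Q-network
theta_target = theta.copy()                           # target Q-network

def q_values(params, s):
    return s @ params  # Q(s, a; params) for every action a

def select_action(s):
    # epsilon-greedy: random action with probability eps, else argmax Q.
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(theta, s)))

def train_step(s, a, r, s_next):
    # Bellman target y_t uses the target network; minimize (y_t - Q)^2.
    y = r + chi * np.max(q_values(theta_target, s_next))
    td_error = y - q_values(theta, s)[a]
    theta[:, a] += lr * td_error * s  # gradient step on the squared loss

s = rng.normal(size=STATE_DIM)
a = select_action(s)
train_step(s, a, r=1.0, s_next=rng.normal(size=STATE_DIM))
theta_target = theta.copy()  # delayed update of the target network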
Experiment and Analysis
The platform used in the experiment is Python 3.6, and TensorFlow GPU 1.14 is used for deep learning; the experimental parameters are shown in Table 1.
In addition, the proposed strategy is compared with references [13], [18], and [19] to demonstrate its advantages. Reference [13] proposed a multi-agent reinforcement learning algorithm for computation offloading in IoT edge computing networks; reference [18] formulated a resource allocation strategy based on orthogonal and non-orthogonal multiple access schemes; and reference [19] uses a Pareto archived evolution strategy to achieve multi-objective resource allocation.
Analysis of Energy Consumption Results.
The relationship between the number of users and energy consumption for the four strategies is shown in Figure 3.
It can be seen from Figure 3 that the energy consumption of each strategy basically shows an upward trend. However, the rise of the proposed strategy slows; when the number of users is 110, its final energy consumption is about 2500 J. This is because too many users bring the edge computing nodes to full load, so tasks are offloaded to higher-performance cloud data centers, keeping the energy consumption of the proposed strategy at a low level. Besides, it comprehensively considers local and offloading energy consumption, using DQN to obtain the optimal offloading plan, which can effectively reduce energy consumption. In reference [13], although a multi-agent reinforcement learning algorithm is used to obtain the optimal offloading plan, the cloud data center is not considered, so energy consumption increases rapidly. The other two strategies have difficulty handling the increased number of users, and their energy consumption is higher, exceeding 3500 J.
Analysis of Time Delay Results.
Similarly, the relationship between users and time delay under the four strategies is shown in Figure 4.
It can be seen from Figure 4 that reference [18] preferentially chooses to execute tasks locally to meet the requirements of delay-sensitive tasks, and turns to higher-level devices for offloading only when computing resources are insufficient, so its delay is almost the lowest, no more than 5 s. The strategy in reference [19], in contrast, tends to preferentially offload tasks to edge nodes; as the number of users grows, the computing resources of edge nodes fall into short supply and tasks must queue for them, so the delay increases suddenly. As the number of users further increases, tasks are offloaded more reasonably, which alleviates the time delay to a certain extent. However, because of the transmission link, although there is no need to queue, a lot of time is lost in the transmission process; even as tasks continue to increase, the time delay stabilizes in a higher range, about 17 s. The delays of reference [13] and of the proposed strategy, by contrast, are relatively stable. The proposed strategy fully considers the time and energy consumption of local execution and of offloading to the MEC, and solving the offloading scheme by reinforcement learning can greatly reduce delay.
Analysis of Load Balancing Rate Results
Figure 5 shows the relationship between users and load balancing ratios under the four strategies. It can be seen from Figure 5 that the overall load balancing ratio of the reference [18] strategy is relatively high. This is because it focuses on local execution, and task offloading starts from the device with the lowest performance, so the devices performing tasks are almost always fully loaded. Although some pressure was shared between 30 and 70 users by offloading to edge nodes, the resources of the edge nodes were quickly occupied. The strategy in reference [19] tends to offload to the MEC server, so its load balancing rate is low; this can maintain a high utilization rate for a relatively large cluster of edge nodes with moderate performance. Reference [13] used a multi-agent reinforcement learning algorithm for task offloading, and its load balancing rate is also low. However, its algorithm performance still needs improvement compared with DQN, so the load balancing rate of the proposed strategy is the lowest, about 0.23. Reasonable utilization of users, the MEC, and the cloud center can greatly reduce the load balancing rate.
Impact of Different Similarity Measurement Methods on Algorithm Execution Efficiency.
According to the pipeline model, the bandwidth resource bottleneck is the first difficulty faced in the offloading process. Ensuring the effective use of bandwidth resources, rather than blindly offloading too many tasks, is the key to the rational use of system resources. Figure 6 shows the network and server usage of the four strategies as the number of users increases.
It can be seen from Figure 6 that, compared with the other strategies, the bandwidth utilization rate and computing resource utilization rate of the proposed strategy are relatively low. The bandwidth utilization rate is always between 0.1 and 0.3, and the computing resource utilization rate is roughly between 0.2 and 0.45. Since the proposed strategy always occupies lower bandwidth in the decision-making process, the DQN strategy is used to reasonably offload computing tasks, thereby avoiding bottlenecks in network transmission. At the same time, because fewer bandwidth resources are occupied, higher revenue can be obtained for servers. Reference [13] performed computation offloading based on a multi-agent reinforcement learning algorithm; although it can complete task offloading well, it places higher requirements on the computing power of the MEC server and therefore occupies more computing resources. Reference [18] and reference [19] lack high-performance processing algorithms and cannot balance task offloading. Thus, their bandwidth utilization rates and computing resource utilization rates fluctuate greatly and remain at high values.
Conclusion
With the rapid development of IoT and 5G technology, a series of new applications with computationally intensive and delay-sensitive features, such as virtual reality, augmented reality, and face recognition, continue to emerge. In order to solve the problem of insufficient local computing power, the proposed strategy offloads some tasks to the edge of the network and builds a mobile edge system model with multiple MEC servers and multiple users. This model improves the task processing capability of the system by solving the stated minimization goal. Moreover, the DQN strategy is used to obtain an offloading plan that minimizes the average response time of system tasks and total energy consumption, so as to allocate computing resources reasonably. The proposed strategy has value and significance for both theoretical research and practical application. However, owing to resource constraints on mobile devices, servers, and base stations, experiments could only be carried out in a simulated environment as close to the actual situation as possible. In future research, we will consider conducting physical experiments in a real environment to provide solutions to practical problems.
Data Availability
The data used to support the findings of this study are included within the study.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this study. | 2022-05-17T15:04:48.180Z | 2022-05-14T00:00:00.000 | {
"year": 2022,
"sha1": "7f8b55c361a584332d8d6474cb3e558a327881b9",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/cin/2022/6174708.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "60a2b750b18d547db20d0f482bd1c68537dd8d33",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238636733 | pes2o/s2orc | v3-fos-license | Combination of a negative pressure suction device and endoscope can accurately locate the bleeding site of refractory epistaxis
Abstract Background Selective endoscopic coagulation of a nasal bleeding vessel is an effective means of treating epistaxis. Precisely locating the bleeding site(s) is critical. Objective To investigate the utility of combining a negative pressure suction device and endoscope in locating bleeding sites of refractory epistaxis. Methods A total of 116 patients with refractory epistaxis, who underwent systematic endoscopic exploration under local anesthesia in the absence of identifiable sites of bleeding, were randomly divided into two groups: a negative pressure group (NPG) and a control group (CG). In the NPG, a negative pressure suction device combined with an endoscope was used to re-explore the epistaxis; nasal bleeding was induced using this method to help the operator locate the site of epistaxis accurately, and the bleeding was then stopped by electrocoagulation with the suction electrode. The CG was treated with endoscopic re-exploration and selective tamponade. Results Compared with the CG, there were statistically significant differences in length of hospital stay, rebleeding, and postoperative pain and complications (all p < .05). Conclusion and significance Combining a negative pressure suction device and endoscope was a safe and effective technique for accurately locating bleeding sites in patients with refractory epistaxis.
Introduction
Nasal bleeding is an acute and frequently encountered emergency in otolaryngology departments and often requires immediate treatment. According to statistics, 60% of individuals will experience nosebleeds in their lifetime, of which 6% will require medical intervention [1][2][3][4]. After admission, patients and their families often experience different degrees of panic [5]. Before endoscopy was introduced to our department, the most common treatment for epistaxis was anterior-posterior nostril tamponade [3,6,7]. Although this yielded an adequate effect, it was also associated with significant pain and complications such as nasal pain, headache, dry sore throat, and nasal alar injury [7]. In addition, displacement of the packing material may lead to a risk of aspiration [7]. Selective endoscopic coagulation of the bleeding vessel is an effective means of treating epistaxis given the impressive evolution of nasal endoscopy [7][8][9]. Cautery haemostasis for posterior epistaxis under endoscopy is superior to posterior nostril tamponade and vascular embolization for reasons including pain, cost-effectiveness, risk, and overall control of bleeding [7,9]. Refractory epistaxis, however, is generally 'concealed' and known to originate from posterior bleeding [10]. In addition, its high recurrence rate often requires repeated nasal endoscopic surgeries, with possible consequent complications such as panic, asphyxia, and upper airway obstruction, which may be life-threatening [7,8,11].
Emerging treatment options for refractory epistaxis have led to major shifts in surgical approaches; more specifically, away from arterial ligation or radiological arterial embolization and toward electrocoagulation for bleeding that is unresponsive to conventional nasal packing [12]. Electrocoagulation haemostasis under nasal endoscopy has advantages including accurate, rapid haemostasis, good effect, and minimal trauma [7,8]. Whether the site of bleeding can be quickly detected is the key issue for the success of electrocoagulation haemostasis under nasal endoscopy [13]. However, the bleeding site in 6-24% of cases of refractory epistaxis cannot be found using nasal endoscopy [1,3,6,13]. It has been reported that a considerable number of patients undergo selective nasal tamponade because the bleeding site cannot be located at the time, resulting in significant pain and many complications, including bradycardia and poor efficacy [12]. This suggests that there are still many cases of epistaxis requiring multiple endoscopic procedures to stop the bleeding, which is related to variability in otolaryngologists' ability to assess the sites of nasal bleeding under endoscopy, owing to inconsistent experience and standardized training [14]. During endoscopic exploration of refractory epistaxis, the inferior turbinate and middle turbinate sometimes need to be reversed, causing new bleeding; repeated exploration and haemostasis thus cause more damage to the nasal mucosa, such as nasal adhesion, postoperative pain, and sinusitis. Moreover, it places mental pressure on the surgeon. Therefore, finding a method that can assist in locating the site of nasal bleeding is of high clinical significance [7]. We used a negative pressure suction device to create negative pressure in the nasal cavity, which is equivalent to raising the blood pressure at the bleeding point in the nasal cavity. This induces active bleeding, which helps the operator accurately locate the bleeding site during the bleeding interval.
Materials and methods
A total of 116 patients with refractory epistaxis, who were hospitalized between January 2019 and April 2021, were retrospectively enrolled. Due to the retrospective design of the study and the use of anonymized patient data, requirements for informed consent were waived. Inclusion criteria were as follows: (patients with) at least one nasal tamponade, with no bleeding spots found on systematic endoscopic examination by an otolaryngologist; able to tolerate local anesthesia and hold breath in conjunction with negative pressure devices; and no obvious contraindication to surgery. Individuals definitively diagnosed with coagulation dysfunction (blood system disease, liver and kidney dysfunction), those complicated by serious cardiovascular and cerebrovascular diseases, history of local trauma (including surgical trauma), refused local anesthesia, history of inflammation and tumors of the nasal cavity and sinuses, acute infectious diseases, history of radiotherapy for nasopharyngeal carcinoma and secretory otitis media, and hereditary haemorrhagic telangiectasia, were excluded. The Medical Ethics Committee of the authors' hospital approved this study.
Blood pressure, heart rate, electrocardiographic monitoring, and oxygen saturation were monitored in the local anesthesia operating room. With the patient supine, a sterile towel was laid in the conventional manner, the nasal filler was removed, tetracaine- and epinephrine-soaked cotton was used to anesthetize and decongest the nasal cavity, and systematic examinations were performed under nasal endoscopy. The systematic search of the entire nasal cavity to detect bleeding was always performed in the same order, from anterior to posterior and from upper to lower, with particular attention to the following sites: the junction of the nasal septum and nasal vestibule; the nasal roof to the upper end of the nasal septum in the olfactory fissure area; the junction of the middle nasal meatus and the basal lamella of the middle turbinate (horizontal and vertical parts); the upper margin of the inferior turbinate near the posterior fontanelle of the maxillary sinus; the front of the inferior meatus; the posterior fornix of the inferior meatus; and the upper margin of the posterior nostril. Possible bleeding points were explored and, if none were found, the patients were randomly assigned to the negative pressure group (NPG) or control group (CG). For the CG, local selective tamponade was performed on suspected bleeding points identified during intraoperative exploration. In the NPG, the negative pressure device was applied, and the negative pressure was adjusted to 40 kPa. The negative pressure bulb was placed into the anterior nostril of the affected side, and the patient was asked to press the contralateral nasal ala with one hand to block the nasal cavity on the non-affected side and to hold their breath. The nasal cavity on the affected side, connected to the negative pressure device, thus formed a closed space; when the device was switched on, negative pressure was created in the nasal cavity. The nasal cavity could then be re-explored under nasal endoscopy, with the bleeding site located accurately by following the blood flow in real time. Bleeding was stopped by electrocoagulation using a suction haemostatic electrode, as shown in Figure 1. If a nosebleed could not be induced, the patients underwent selective packing. The methodological protocol is illustrated in Figure 2.
Record of observation indexes
Epistaxis could be successfully induced in the NPG to locate the bleeding site. The length of hospital stay, number of rebleeding cases, postoperative pain, and postoperative complications (nasal adhesions, sinusitis, secretory otitis media, septal perforation) were recorded in the two groups. The severity of postoperative pain after nasal packing was evaluated using a visual analog scale (VAS), assessed with a ruler with two anchor points (range, 0-10: 0 = no pain, 10 = intolerable pain).
Evaluation of curative effect
After discharge, patients were advised to consume a balanced diet, pay attention to rest, avoid blowing their nose, control their blood pressure, and keep the nasal cavity moist. A patient was considered cured if there was no recurrent nasal bleeding within one month, or if bleeding recurred on the affected side but at a location clearly different from the previous site.
Statistical processing
The recorded data were statistically analyzed using SPSS version 21 (IBM Corporation, Armonk, NY, USA). The t-test and chi-squared test were used for statistical comparisons, and differences were considered statistically significant at p < .05.
Results
There was no statistically significant difference between the two groups in terms of sex, age, region of residence, and medical history. Demographic and clinical characteristics are summarized in Table 1. After the application of the negative pressure device in patients in the NPG, 56 sites of nasal bleeding were successfully found, accounting for 96.6% of all cases. Bleeding was not induced in 2 cases, as shown in Table 2.
After decongestion with epinephrine- and tetracaine-soaked cotton, it was difficult to visualize an obvious characteristic mucosal eminence, and some bleeding sites were difficult to find because they were in unusual locations. The analysis revealed that the length of hospital stay in the NPG was shorter than that in the CG, and postoperative pain and complications (nasal adhesions, sinusitis, secretory otitis media, septal perforation) were lower than in the CG. Nasal bleeding recurred in 1 patient in the NPG after surgery (p < .05) (Table 3).
Discussion
Recurrence of refractory epistaxis after nasal packing or surgery is a major problem for otolaryngologists because of the difficulty in identifying bleeding sites. Although ligation or arterial embolization of the external carotid artery and ethmoid artery can cure intractable epistaxis, these procedures can result in thromboembolic complications such as hemiplegia and increase health costs [8,15]. Accurately locating bleeding sites during the operation is presently a 'hot topic' of discussion [6]. It has been reported that good efficacy in the treatment of epistaxis has been achieved by raising target blood pressure during surgery under general anesthesia to locate the bleeding site of intractable epistaxis under nasal endoscopy [16]. However, we believe that this method increases surgical risk, especially in middle-aged and elderly patients, thereby increasing the incidence of cerebrovascular events in the operating theatre as well as the cost of surgery.
After combining a negative pressure suction device with an endoscope, given the severe bleeding induced by negative pressure in the nasal cavity, we used an electrode with negative pressure suction to perform electrocoagulation and haemostasis, which not only cleared the operative field but also shortened the duration of the procedure [9]. In this study, 116 patients with refractory epistaxis were enrolled and divided into two groups (i.e. NPG and CG). There were no statistically significant differences between the two groups in clinical features, including age, sex, region of residence, and medical history. The results suggest that the efficacy of locating bleeding sites induced by the negative pressure device was high and recurrence was low in the NPG. Moreover, there was a significant difference in the improvement in length of hospital stay, postoperative pain, and postoperative complications such as nasal adhesions and sinusitis. However, no statistical difference was found in secretory otitis media or septal perforation in the NPG compared with the CG. We speculate that the low incidence of these two complications and the small sample size affected the statistical results. In the NPG, the largest number of bleeding sites was found in the upper nasal septum in the olfactory fissure area, accounting for approximately 39.7% [6,17,18]. The nasal cavity in the olfactory fissure area is the narrowest, and patients are sensitive to pain there, especially those with a high deviated nasal septum. Consequently, it was difficult to observe bleeding points behind a deviated septum; however, a 30° nasal endoscope was introduced when necessary [7,9]. Bleeding sites in the inferior meatus accounted for approximately 20.7%, especially in the anterior vault of the inferior meatus [19]. Bleeding in the anterior vault was easy to miss because of its narrowness and anterior location. Furthermore, using a rigid nasal endoscope facilitated entry into the inferior meatus from its middle part to observe the inferior meatus, especially during the bleeding interval. Finally, we found that this method was beneficial not only for locating the site of nasal bleeding, but also for testing whether the identified site was accurate or had been missed, and it can be used as a supplementary examination after nasal endoscopic haemostasis.
In conclusion, the treatment of refractory epistaxis using a negative pressure suction device combined with an endoscope to accurately locate the bleeding site resulted in a better surgical effect and less intraoperative pain compared with traditional anterior-posterior nostril tamponade or selective endoscopic packing. Patients exhibited higher tolerance during the operation, which may be explained by the shortened duration of the procedure. Moreover, refractory epistaxis often requires repeated exploration, repeated fracture, and movement of the inferior turbinate, middle turbinate, and other structures, resulting in pain and discomfort. It can be used to quickly locate the bleeding point, shorten the duration of the operation, and reduce unnecessary injury. As such, the incidence of complications is lower and the patients recover quickly after the operation [20]. Ultimately, the length of hospital stay is shortened, which improves patient satisfaction, and can, to some extent, relieve mental pressure on the operator, which is worthy of clinical promotion [20].
Our study was limited by the small number of patients with refractory epistaxis. However, there are plans to collaborate with several other hospitals to conduct a multicentre clinical study with a larger sample size to further explore the safety of negative pressure devices in this procedure. It is anticipated that this technique will be widely used in the clinical treatment of refractory epistaxis in the future.
Disclosure statement
No potential conflict of interest was reported by the author(s). | 2021-10-13T06:16:49.361Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "590d0f40b1abc09024e5282ab564958cd967a806",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/00016489.2021.1965652",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "4de5ee56b07290271a8c192938fb1333bb2be2d5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269927092 | pes2o/s2orc | v3-fos-license | Ipsilateral transfer of motor skill from lower to upper limb in healthy adults: A randomized controlled trial
Background and purpose Whereas motor skills of the untrained upper limb (UL) can improve following practice with the other UL, it has yet to be determined whether a UL motor skill can improve following practice of that skill with the lower limb (LL). Methods Forty-five healthy subjects were randomly assigned to a 10-minute single-session intervention of (1) practicing 50 reaching movement (RM) sequences with the non-dominant left LL toward light switches (LL group); or (2) observing the identical 50 light-switch sequences (Switches Observation (SO) group); or (3) observing nature films (Nature Observation (NO) group). RM sequence performance with the left UL toward the light switches was tested before and immediately after the intervention and retested after 24 h. Results Reaching response time improved in the LL group more than in the SO and NO groups in the posttest (pBonferroni = 0.038 and pBonferroni < 0.001, respectively), and improved in the LL group more than in the NO group in the retest (pBonferroni = 0.004). Percentage of fails did not differ between groups across the timepoints. Conclusions It appears that the actual practice of the RM sequence skill with the LL, together with the cognitive element embedded in the observation of the RM sequences, contributes to ipsilateral transfer from LL to UL.
Introduction
The ability to acquire new motor skills is essential for interacting with the environment throughout the life span, including in rehabilitation following injury to the central nervous system.Skilled performance becomes more specific when more practice is afforded [1][2][3][4].However, under certain conditions, some practiced skills may be intermanually transferred (intermanual transfer) to the performance of different skills or to other effectors (e.g., the contralateral limb) [5].
Intermanual transfer is related to the constraints and phases of motor skills acquisition [4,6,7].Generally, there are two phases: an initial fast phase that relates to within-session gains, followed by a slow evolving between-session gains phase [6,[8][9][10][11].The between-session gains lead to an enduring and robust memory of the skill [6].Once the learned skill has become specific in long-term memory, transfer of the gains from the learned task to a novel task is less likely [4,6,12].
Evidence exists for the intermanual transfer of strength [13][14][15][16] and motor skill [17][18][19][20][21][22].A meta-analysis showed that the unilateral training of the UL or LL resulted in a 15%-29% increase in the strength of the homologous muscles in the contralateral limb of young and healthy adults, as well as adults with orthopedic or neurological impairments [14].In addition, unilateral training of the UL resulted in improved or maintained strength of the contralateral, immobilized UL [23,24].With regard to motor skill, the reaction time of the finger sequence was transferred from the trained effector to the contralateral untrained effector [21,22], and the speed component rather than the accuracy component of a star tracing task was intermanually transferred [22].
In contrast to the evidence concerning the intermanual transfer of the UL [13][14][15][16][17][18][19][20][21][22], very few studies have investigated ipsilateral transfer within the same UL (intramanual transfer) [18,25] and between limbs [26][27][28]. For example, transfer of a motor skill, in which the participants had to track the head of a snake (2D virtual "moving snake" task), has been found from the shoulder to the finger [18]. After shoulder training, the accuracy index (the mean spatial error) of the finger improved. In a study of ipsilateral transfer from the LL to the UL, an increase in the 1 repetition maximum of the ipsilateral biceps brachii was found following a ten-week leg press resistance training program of the LL [26]. The increase in UL strength was greater after training of the UL biceps muscle immediately followed by leg press, as compared to training of only the UL biceps [28]. In addition, leg press training of the dominant LL in children resulted in both ipsilateral and contralateral increases in elbow flexor strength and grip force [27]. It has been suggested that ipsilateral transfer between non-homologous effectors requires the intra-hemispheric transfer of information [18].
Whereas the abovementioned studies describe training that triggered the intermanual transfer of strength and motor skills [13][14][15][16][17][18][19][20][21][22][23][24] and the ipsilateral transfer of strength from the LL to the UL [26][27][28], to the best of our knowledge, no data exist regarding the ipsilateral transfer of motor skills from the LL to the UL or vice versa. This study is the first attempt to determine whether there is an ipsilateral transfer of a motor skill from the LL to the UL, which can potentially provide additional insights into transfer principles and possible clinical applications. Specifically, we investigated whether practicing reaching movement (RM) sequences with the LL toward light switches transfers to the UL in healthy adults. The process of sequence learning involves two separate components: first, acquiring the arrangement of elements in the sequence, and second, being able to execute the sequence, thereby merging the elements into a single skilled action. As cognition plays a role in motor learning, particularly when it comes to choosing actions at the correct time and in the correct sequence [29], we compared the RM sequence practice to merely observing the same sequences of the light switches. In this manner, we sought to compare the contribution of the cognitive aspect (which is related to the memory of the sequence) vs. the combined cognitive and motor aspects to the ipsilateral transfer of RM sequences. We hypothesized that practicing RM sequences with the LL would improve the performance of RM sequences with the ipsilateral UL compared to merely observing the same sequences of the light switches or observing nature films.
Study design
This was a single-blind, parallel, randomized, controlled study. Data were collected in a brain and motor behavior laboratory based at Ariel University, Israel. Subjects were randomly assigned with a 1:1:1 ratio, using a random number generator in WINPEPI, to one of three groups: (1) practice of RM sequences with the LL toward light switches (LL group); (2) observation of the sequence of light switches (Switches Observation (SO) group); and (3) observation of nature films (Nature Observation (NO) group). All participants were blinded to group allocation. Research assistants who administered the intervention and measured the outcomes received allocation information via coded email from the researcher SFT. Blinding of group allocation was maintained during the data analysis. The trial was retrospectively registered at the ClinicalTrials.gov registry on 30/07/2023 with trial registration number NCT05988775. All methods were performed in accordance with the relevant guidelines and regulations.
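The 1:1:1 allocation described above can be illustrated with a short script. The sketch below is hypothetical and uses Python's standard library rather than WINPEPI; the group labels and the fixed seed are illustrative choices, not details taken from the study.

```python
# Minimal sketch of a 1:1:1 randomization, analogous in spirit to the
# WINPEPI random-number allocation described above (not the actual tool).
import random

def randomize(n_per_group: int = 15, groups=("LL", "SO", "NO"), seed: int = 42):
    """Return a shuffled allocation list with equal group sizes."""
    rng = random.Random(seed)  # fixed seed only so the example is reproducible
    allocation = [g for g in groups for _ in range(n_per_group)]
    rng.shuffle(allocation)
    return allocation

print(randomize()[:6])  # first six assignments, e.g. ['NO', 'LL', 'SO', ...]
```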
Participants
The sample size for this study was determined based on a power analysis conducted using G*Power version 3.1.9.7. The power analysis yielded a total sample size of 45 individuals (15 individuals per group) for detecting a significant interaction with an assumed effect size of 0.25 and a power of 90%. Forty-five subjects (23 women; aged 25 ± 3 years) participated in the study between May 9, 2022 and July 26, 2022. Inclusion criteria were being aged between 20 and 35 years, right-hand dominance and self-reported good health. Exclusion criteria included musculoskeletal or neurological deficits interfering with task performance (proper UL and LL reaching performance). The study was approved by the Ethics Committee of Ariel University (approval number: AU-HEA-OE-20210610). Written informed consent was obtained from all participants involved in the study. The Consolidated Standards of Reporting Trials (CONSORT) recommendations (CONSORT Checklist) were followed in our study; a CONSORT flow diagram is shown in Fig 1.
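As a rough cross-check of the reported calculation, a power analysis can also be run in Python. The sketch below uses statsmodels' one-way ANOVA power solver with the stated inputs (Cohen's f = 0.25, α = 0.05, power = 0.90, three groups); it deliberately does not reproduce G*Power's repeated-measures interaction formula, so the resulting N mainly illustrates how much larger a purely between-subjects design would need to be than the reported N = 45.

```python
# Approximate power calculation with statsmodels (one-way ANOVA only;
# G*Power's within-between interaction formula differs).
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,  # Cohen's f assumed in the study
    alpha=0.05,
    power=0.90,
    k_groups=3,
)
# A plain between-subjects design needs roughly 200 subjects in total; the
# within-subject correlation of a repeated-measures design is what brings
# the requirement down to the reported 45.
print(round(n_total))
```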
Motor task
Subjects took part in two sessions. The initial session involved familiarization practice of the motor task, a pretest, a single-session intervention (based on group randomization), and a posttest. The second session comprised a retest conducted 24 hours after the training. The familiarization practice and tests were conducted with the UL, and the single-session intervention was carried out with the LL.
The recording device used in the tests (pretest, posttest, and retest) consisted of a custom-made testing apparatus set up on an adjustable-height rectangular table with a smooth laminated tabletop of 105 cm by 80 cm. Five switch-LED units of 5 cm by 8 cm by 5 cm were connected to the tabletop in a half circle with a radius of 38 cm, numbered from 1 to 5. Each unit comprised a large push-button switch and a red light-emitting diode (LED). A computer, interfaced with a data acquisition card and operated via LabVIEW software, controlled the system. The activation of a particular unit's LED served as a signal to reach towards that unit and press the push-button switch; reaching toward the switch of an activated unit deactivated it, and the response time between activation and deactivation of the LED was recorded. A detailed description of the task and the apparatus is provided in a previous study [30]. To evaluate UL performance, the subjects sat on a chair with sturdy back support, placed in front of the table, ensuring their hips and knees were flexed at a 90-degree angle. Participants initially positioned their left fist at the table's edge in front of their chest (aligned parallel to switch 3). This placement allowed them to extend and touch switch 3 with their third metacarpal (Fig 2A). The individual in this manuscript has given written informed consent (as outlined in the PLOS consent form) to publish these case details (in Fig 2).
Participants engaged in a familiarization practice involving 15 RMs in three sequences (1-4-3-5-4-2). This practice entailed reaching with the left UL towards the activated unit as quickly as possible and returning the UL to the starting position. This process continued until the next unit was activated, with an activation duration and delay of 1 s. The subjects were instructed to reach as quickly and accurately as possible with the left UL from the starting position to the light switch, press it, and then return to the starting position. Throughout this action, they were instructed to ensure that their fist remained in contact with the table. All groups were informed about the sequence 1-4-3-5-4-2 in the familiarization practice. In each pretest, posttest, and retest, the subjects also performed RMs with their left UL toward units that were activated in the identical sequence 1-4-3-5-4-2, maintaining an activation duration and delay of 1 s. The subjects executed two sets of five sequences (i.e., 60 RMs/trials in total); each set of five sequences constituted a block (i.e., two blocks). If the subject failed to reach toward the activated unit and press its switch within 1 s, the trial was deemed a ''fail" and was excluded from the averaged response time. After each block, the subjects rested for 30 s. The primary outcome measure was the response time averaged over all RMs during the sequences (ms); the secondary outcome measure was the percent of fails, calculated for each block as (number of fails/30 trials)*100. Improved motor performance was indicated by a shorter response time and fewer failures.
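To make the two outcome definitions above concrete, here is a small, hypothetical Python sketch; the 30-trial block size mirrors the description, but the trial data themselves are invented.

```python
# Sketch of the outcome measures: mean response time over successful trials
# (fails excluded) and percent of fails per 30-trial block.
def block_outcomes(trials):
    """`trials` is a list of (response_time_ms, failed) tuples for one block."""
    times = [t for t, failed in trials if not failed]
    mean_rt = sum(times) / len(times) if times else float("nan")
    pct_fails = sum(failed for _, failed in trials) / len(trials) * 100
    return mean_rt, pct_fails

demo = [(520.0, False)] * 28 + [(0.0, True)] * 2  # 2 fails in a 30-trial block
print(block_outcomes(demo))  # -> (520.0, 6.666...)
```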
Procedure of single session intervention
In each of the LL, SO and NO groups, a 10-minute single-session intervention was conducted. Subjects of the LL group sat on a custom-designed plinth with solid back support in front of the apparatus at the same height as the tabletop; hence, they could perform the RM sequence with the leg. At the starting position, the heel was placed on the edge of the table in front of switch 3, so that while the left heel touched switch 3, the knee reached 30° of flexion (Fig 2B). The initial testing position of the SO and NO groups during the intervention was sitting on a chair with solid back support, hips and knees flexed to 90°, in front of the apparatus used for the tests. The LL group was instructed to reach with the left LL from the starting position as quickly and accurately as possible to the light switch, press it, and return to the starting position, while the heel had to remain in contact with the table. The subjects performed RMs toward the units that were activated in the same order as the tested sequence 1-4-3-5-4-2, with an activation duration and delay of 1 s. The practice included 10 blocks, each consisting of 30 RMs, with a 30 s pause after each block. They were informed about the sequence. The SO group was instructed to observe the light switches while avoiding movement. These subjects observed the units being activated in the practiced sequence 1-4-3-5-4-2, also with an activation duration and delay of 1 s and a 30 s pause after each block. They were informed about the sequence. The NO group was instructed to observe a video clip while avoiding movement. The video clip consisted of a 10 min nature movie presented in cycles of one minute of observation followed by a 30 s pause, equivalent to the timing of the blocks in the LL and SO groups.
Statistical analysis
Age and sex were compared between groups (LL, SO, NO) using one-way ANOVA and chi-squared tests, respectively. Response time was normally distributed, whereas percent of fails was not. Differences between groups in the pretest, regarding response time and percent of fails, were investigated using one-way ANOVA and Kruskal-Wallis tests with Bonferroni correction for multiple comparisons, respectively. The effects of practice and time on response time were investigated using a mixed-design ANOVA with time (pretest, posttest, retest) as the within-subject factor and group (LL, SO, NO) as the between-subject factor, with Bonferroni correction for multiple comparisons. Due to the non-normal distribution of the percent of fails, difference scores between each pair of tests (pretest-posttest, pretest-retest, posttest-retest) were compared between groups using Kruskal-Wallis tests with Bonferroni correction for multiple comparisons. All tests were performed using SPSS (version 26.0) with an initial significance level of p < 0.05.
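The same mixed-design analysis can be reproduced outside SPSS. The sketch below uses the Python package pingouin on a synthetic long-format dataset; the column names, simulated response times and balanced group assignment are illustrative assumptions, not the study's data.

```python
# Sketch of the mixed-design ANOVA (time within, group between) with
# Bonferroni-corrected pairwise tests, on invented data.
import numpy as np
import pandas as pd
import pingouin as pg  # pingouin >= 0.5.3 (older versions: pg.pairwise_ttests)

rng = np.random.default_rng(0)
rows = [
    {"subject": s, "group": ["LL", "SO", "NO"][s % 3], "time": t,
     "rt": rng.normal(550, 40)}
    for s in range(45)
    for t in ("pretest", "posttest", "retest")
]
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="rt", within="time",
                     between="group", subject="subject")
post = pg.pairwise_tests(data=df, dv="rt", within="time", between="group",
                         subject="subject", padjust="bonf")
print(aov[["Source", "F", "p-unc"]])
print(post[["Contrast", "A", "B", "p-corr"]].head())
```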
Results
Forty-eight participants underwent the pre-enrollment screening evaluation. Of those, two did not meet the inclusion criteria and one had technical problems with the device. Age (LL group: 24.9 ± 1.8 years; SO group: 24.9 ± 3.2 years; NO group: 25.0 ± 2.7 years) and sex (LL group: eight women; SO group: seven women; NO group: eight women) did not differ between groups (p > 0.951 for all). Individual data are displayed in S1 Table.
Motor sequence learning task
Mean values of response time (ms) and percent of fails by group and time are shown in Table 1. Response time and percent of fails did not show significant differences between groups in the pretest (p = 0.840 and p = 0.903, respectively).
Effects on percent of fails (%): difference scores did not differ between the groups (p ≥ 0.065 for all).
Discussion
To the best of our knowledge, this is the first study to evaluate whether there is an ipsilateral transfer of a motor skill from the LL to the UL. We found that, in the posttest, the response time of RM sequences of the UL was significantly faster (shorter) in the group that practiced the RM sequence with the LL (LL group) as compared to the group that observed a sequence of light switches (SO group) and the group that observed nature films (NO group), whereas it did not differ between these groups in the pretest. In the retest, the response time of RM sequences of the UL was significantly faster in the LL group compared to the NO group. In addition, in each group, response time improved significantly in the posttest and retest compared to the pretest. The percent of fails did not differ between groups at the different time points.
Our finding that, in the posttest, the response time of RM sequences of the UL was significantly faster in the LL group as compared to both the SO and NO groups is in line with our hypothesis that LL practice would improve UL performance compared to merely observing the same sequences of the light switches or observing nature films. This finding regarding the ipsilateral transfer of performance from LL to UL supports the findings of the few previous studies that investigated ipsilateral transfer of strength from the LL to the UL in healthy adults and youth [26,27,28], and the ipsilateral transfer of a motor skill (2D virtual "moving snake" task and star-line drawing task [29]) between proximal and distal effectors within the UL [18,29]. The finding related to the ipsilateral transfer of a motor skill from the LL to the UL complements previous findings regarding the ipsilateral transfer of strength, given the behavioral and neural evidence for a dissociation between strength and motor skill [31][32][33]. From the behavioral point of view, for example, a finger flexor control abnormality that was not attributable to weakness has been demonstrated in poststroke patients [32]. They showed more enslaving of passive fingers at any submaximal voluntary force; that is, even after normalizing for their weakness, they still had markedly less control. From a neural perspective, experimental evidence shows that the reticulospinal tract may be particularly important for generating higher-force muscle contractions [33].
Our data are also in agreement with the generalized motor program theory. This theory considers motor learning as the generation of an abstract memory structure (i.e., a motor program), which enables a performer to adapt a learned skill to altering environmental requirements [5]. This central motor representation is hypothesized to be independent of the effector used, as reflected in inter- and intramanual transfer. With regard to intermanual transfer, a key role is probably played by the corpus callosum, the largest white matter tract connecting the two cerebral hemispheres (but see also [34]). However, an ipsilateral transfer between limbs on the same side or within the same limb may require intrahemispheric transmission of information. Alternatively, ipsilateral transfer can be explained by a shared representation in the rolandic motor association (RMA) region, a motor association area which was recently found in the depths of the central sulcus [35,36]. The RMA was found to be electrophysiologically active during tongue, hand or foot movements [36]. The authors suggested that because the RMA is not plainly related to any single movement function, it is probably an association area that helps coordinate different effectors of movement.
The task of RM sequence performance includes motor (reaching performance) and cognitive (sequence of light switches) aspects. Indeed, any real-world motor task necessarily entails both cognitive and movement components [29]. The LL and SO groups were explicitly instructed about the sequence order at the beginning of the task in order to focus on examining improvements in the motor performance of the sequence, rather than on the learning of the sequence order itself. In the single-session intervention, the LL group was instructed to reach with the LL from the starting position as quickly and accurately as possible to the light switch, press it, and return to the starting position, whereas the SO group was instructed to observe the light switches while avoiding movement. The cognitive aspect was also related to the repeated exposure to the light switches of the sequence during the single-session intervention, which included 50 sequences (in both the LL and SO groups), as the activation (illumination) of a specific unit's LED was a cue to reach toward that unit and press the push-button switch. This cognitive aspect of the task could also have contributed to the improved response time of the RM sequences of the UL. By comparing the LL and SO groups, we sought to disentangle the cognitive aspect (SO group) from the inherent combination of cognitive and motor aspects in the current task (LL group). The findings that, in the posttest, the response time of the UL RM sequences was better in the LL group than in the SO group, and that it was not better in the SO group than in the NO group (while pretest values were similar in all groups), suggest that practicing only the cognitive aspect of the task (being exposed to the light switches of the sequence) was not sufficient to trigger ipsilateral transfer from the LL to the UL. Therefore, it seems that combined practice of the motor and cognitive aspects was required to trigger ipsilateral transfer from the LL to the UL. Indeed, there is evidence that intermanual transfer can be facilitated by a cognitive strategy [17,[37][38][39][40].
Explicit (cognitive) processes were found to be primarily responsible for intermanual transfer of a visuomotor adaptation task when participants adapted to a large visuomotor distortion of which they were aware [17]. Elements of the task environment, such as the type of visual feedback available, can also alter the balance between cognitive strategies and motor adaptation and affect intermanual transfer [39]. In one study, participants isometrically exerted force on a handle to adjust the height of a visual bar on the screen to a target level. Visual feedback was continuously provided to one group, while only the endpoint of the force trajectory was presented to another group. It was suggested that feedback restricted to the endpoint relied heavily on a cognitive strategy to solve the task, because reaction times increased in that condition. Intermanual transfer was facilitated in the endpoint feedback condition, suggesting that effector-independent learning was facilitated by a cognitive strategy [39]. Despite the differences in experimental design and tasks in the abovementioned studies [17,[37][38][39][40], the cognitive aspect inherent in the task improved intermanual transfer. It should be noted that our study design did not aim to elucidate the respective contributions of the cognitive and motor aspects to ipsilateral transfer.
On the other hand, there is also contradictory evidence that intermanual transfer does not depend on cognitive awareness of the visuomotor perturbation [40]. Even informing the participants about the rotation prior to the adaptation session (presumably leading to full awareness) did not lead to increased intermanual transfer compared to adaptation without explanation [41]. In another experiment, in which the degree of awareness of the visuomotor rotation was manipulated by introducing a 22.5° perturbation either in an abrupt single step or gradually in ~1° increments every 10 trials, intermanual transfer was similar in both the abrupt and gradual groups, suggesting that awareness of the perturbation has little effect on intermanual transfer [42]. It is possible that these studies on visuomotor adaptation failed to demonstrate the effect of cognitive awareness on transfer because of the small perturbation sizes (32° [41] and 22.5° [42]), which probably did not lead to awareness. Even perturbations as large as 40° engaged very little awareness [43]. Awareness was indeed found to depend on perturbation size [43], and the extent of the participants' awareness of the learned perturbation was directly related to the amount of intermanual transfer [38]. Werner et al. [43] examined interlimb transfer in four conditions in which the rotation size was 30° or 75° and the rotation was introduced either gradually or abruptly. The authors measured indexes of awareness and unawareness separately, and the results indicated that both awareness and transfer were larger in the abrupt 75° condition. It should be noted that the extent of the transfer was found to differ depending on additional factors, such as which hand is trained first [44] and the location of the targets in the workspace [45].
The response time of RM sequences improved in all groups in the posttest and retest compared to the pretest but did not improve further from the posttest to the retest; i.e., there was an initial within-session gain but no off-line consolidation. This finding in the NO group emphasizes that the number of RM sequence repetitions practiced by the UL during the pretest and posttest was not enough for consolidation of UL response time in the retest. The finding that the response time of RM sequences of the UL was significantly faster in the LL group compared to the SO group in the posttest but not in the retest implies that the ipsilateral transfer of performance from the LL to the UL in the posttest did not fully consolidate to the retest [46]. It remains to be determined whether practicing a larger number of LL repetitions would produce ipsilateral transfer to the UL in the retest as well.
Limitations of the study
First, the experimenter was not blinded to group allocation. It should be noted, however, that the scoring of the motor task was automatically computed by the LabVIEW software. Second, conducting separate measurements for reaction time and movement time could have enhanced the focus on the ipsilateral transfer of the motor performance itself, which is primarily reflected in the movement time.
Conclusions
Our results provide evidence for the ipsilateral transfer of a sequential motor skill from the LL to the UL in healthy adults. These findings pave the way for further studies that can combine behavioral measures with neural measures (using, for example, transcranial magnetic stimulation or electroencephalography) to elucidate the neural mechanism underlying ipsilateral transfer between the LL and UL. Ipsilateral transfer of motor skills may have practical implications for skill development in sports and rehabilitation settings.
Fig 1.
Fig 1. Trial flowchart. LL group = lower limb group that practiced the RM sequence with the LL toward light switches; SO group = switches observation group that observed the sequence of light switches; NO group = nature observation group that observed nature films. https://doi.org/10.1371/journal.pone.0303459.g001
Fig 2.
Fig 2. General setup. (a) Performance with the upper limb. (b) Performance with the lower limb. To evaluate upper limb performance, the subjects performed reaching movement sequences with the left upper limb toward the units. During the single-session intervention, the subjects performed reaching movement sequences with the left leg towards the units. https://doi.org/10.1371/journal.pone.0303459.g002
Fig 3.
Fig 3. Response time (ms) of reaching movements (RMs) during all sequences in each group at the different time points. Asterisks denote a significant difference (pBonferroni < 0.05). LL group = lower limb group that practiced the RM sequence with the LL toward light switches; SO group = switches observation group that observed the sequence of light switches; NO group = nature observation group that observed nature films. https://doi.org/10.1371/journal.pone.0303459.g003
Table 1. Means, standard deviations and confidence intervals of response time and percent of fails for groups at each time point.
LL group = lower limb group which practiced reaching movements sequence with the LL towards light switches; SO group = switches observation group which observed the sequence of light switches; NO group = nature observation group which observed nature films. https://doi.org/10.1371/journal.pone.0303459.t001 | 2024-05-22T05:09:38.933Z | 2024-05-20T00:00:00.000 | {
"year": 2024,
"sha1": "f86685d997aa9a570bd557e27a4a5f4eec37eca8",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f86685d997aa9a570bd557e27a4a5f4eec37eca8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6695786 | pes2o/s2orc | v3-fos-license | What is the care pathway of patients who undergo thyroid surgery in France and its potential pitfalls? A national cohort
Context: The rate of thyroid cancer is increasing in France, as are concerns about overdiagnosis and overtreatment.
Objectives: To examine the care pathway of patients who undergo thyroid surgery in France and detect potential pitfalls.
Design: A large observational study based on medical reimbursements, 2009–2011.
Setting: Data from the Sniiram (National Health Insurance Information System).
Patients: Patients with thyroid surgery in 2010, classified into 4 groups: thyroid cancer, benign nodule, goitre or multiple nodules, other (hyperthyroidism, head–neck cancer).
Main outcome measures: Medical investigations prior to, during and after thyroidectomy.
Results: A total of 35 367 patients underwent surgery (mean age 51 years, 80% women): 17% had a reported diagnosis of thyroid cancer, 20% benign nodule, 38% goitre or multiple nodules and 25% another diagnosis. The ratio of thyroidectomies with cancer over thyroidectomies with benign nodule was 0.8 and varied across regions. In the year preceding surgery, 82% of patients had an investigation by thyroid ultrasonography, 21% thyroid scintigraphy, 34% fine-needle aspiration cytology, 40% serum calcitonin assay and 54% serum calcium assay. In the following year, all patients with total thyroidectomy and 44% of patients with partial thyroidectomy and a diagnosis of benign nodule were taking thyroid hormone therapy. 100 patients had been reoperated for a compressive haematoma and 63 died during the first month, half of whom had been operated on for cancer. Mean rates of recurrent laryngeal nerve injury and hypocalcaemia (requiring blood tests plus treatments within 4–12 months) were estimated at 1.5% and 3.4%, respectively, and were higher in the cancer group (2.3% and 5.7%).
Conclusions: This almost nationwide study demonstrates the suboptimal management of patients prior to thyroidectomy in France. It suggests overdiagnosis and potential harms to patients, and calls for a review of the relevance of thyroidectomy, particularly with regard to microcancers.
Strengths and limitations of this study
▪ The Sniiram database includes almost all of the insured population in France, where medical insurance is mandatory. It is one of the largest administrative databases and has led to many publications that monitor quality of care, describe healthcare pathways at a national level and guide public health policies. In this paper, data from 77% of the French population were extracted to study the healthcare pathway of a cohort of 35 000 people who underwent thyroidectomy in 2010 and to provide a national picture.
▪ This observational study relies on the quality of the surgical procedure coding performed in public or private hospitals, as these procedure codes are necessary in order to classify the four groups: cancer, benign nodule, multiple nodules or goitre, and other cases. Misclassifications may have occurred, and these are more likely in the case of microcancers considered as benign nodules in our study.
▪ While the Sniiram database records the follow-up and reimbursements of all patients, we still may have missed some non-surgical investigations performed prior to surgery. We used a reasonable 1-year interval to define this period, but procedures performed during a public hospital stay are not systematically coded within the hospital when they do not provide higher funding to the hospital. However, procedures such as thyroid ultrasonography, scintigraphy, fine-needle aspiration cytology, and serum calcitonin and calcium assays are rarely performed during a hospital stay.
▪ As the Sniiram does not provide outpatient diagnoses, we constructed several algorithms to define potential complications that did not require systematic hospitalisation, such as hypoparathyroidism (defined as more than three serum calcium assays and three deliveries of calcium supplements during the 4th to the 12th month after thyroidectomy) or recurrent laryngeal nerve injury (several definitions based on speech therapy sessions, ear-nose-throat specialist visits or functional testing during the first 12 postoperative months).
INTRODUCTION
The prevalence of all forms of thyroid disease is difficult to assess. Published clinical trials are often old and the performance of detection has substantially improved over time, 1 consequently modifying patient management. In countries with sufficient dietary iodine intake, such as France or the USA, the clinical prevalence of thyroid nodules is about 5% and is higher in women (5.3-6.4% vs 0.8-1.6% for men) and persons over the age of 50 years, in whom the prevalence is about 30-40%. 2 The prevalence of thyroid disease based on ultrasonography screening is much higher and is currently estimated to be 67%, 1 3 comparable to the rate of nodules discovered at autopsy. 1 3 4

The growing prevalence of thyroid cancer has been clearly established. Over the past three decades, the number of new cases diagnosed in France has increased fivefold in both sexes. 5 6 This increased prevalence almost exclusively concerns papillary cancers, with no impact on mortality, which has decreased over the same period. Over a period of 20 years, the proportion of microcancers (<10 mm) has increased from 4% to more than 50%. One-half of these microcancers are smaller than 3 mm and are discovered incidentally on thyroidectomy specimens. This increased prevalence of microcancers is directly related to progress in the detection of nodules by increasingly efficient ultrasound machines and progress in the histological diagnosis of cancer as a result of very thin histological sections and the use of immunohistochemistry. In fact, the majority of these microcancers appear to undergo growth arrest, while progression to symptomatic cancer is observed in only 1 out of every 15 nodules. The increased incidence of thyroid cancer is thus due to microcancers and can be considered to constitute a form of overdiagnosis. 7 The diagnosis of thyroid cancer, regardless of stage, results in an alteration of the patient's quality of life and social representation and can be responsible for sometimes unjustified modifications of therapeutic management or potentially morbid treatment follow-up, resulting in increased costs induced by incidentally discovered cancers.

Over the past 10 years, several medical authorities have published guidelines for the management of thyroid nodules and/or thyroid cancers, including the European Thyroid Association (ETA), the American Thyroid Association (ATA) and the French Society of Endocrinology (SFE). Despite several differences, all medical societies recommend thyroid-stimulating hormone (TSH) assay and thyroid ultrasonography in all patients with thyroid disease, combined with fine-needle aspiration cytology of nodules with features suggestive of malignancy. [8][9][10] According to the SFE, the majority of thyroid incidentalomas require simple surveillance. The indications for thyroid surgery are rare, limited to nodules demonstrated to be malignant on preoperative investigations, very large or retrosternal nodules, 10 symptomatic or unsightly goitre, or goitre accompanied by low TSH. Despite existing international and national recommendations, much concern is currently being raised about overdiagnosis and an excess of thyroidectomy, which may result in harms to the patients. 7

The objective of this observational study was to analyse the care pathway of patients prior to thyroidectomy in France during the year 2010 and to study the impact of surgery on postoperative morbidity and mortality in the Sniiram (French National Health Insurance Information System) database, a widely published and nationwide comprehensive administrative database of about 56 million people based on medical reimbursement data. 11
Over the past 10 years, several medical authorities have published guidelines for the management of thyroid nodules and/or thyroid cancers: the European Thyroid Association (ETA) and the American Despite several differences, all medical societies recommend thyroid-stimuling hormone (TSH) assay and thyroid ultrasonography in all patients with thyroid disease combined with fine-needle aspiration cytology of nodules with features suggestive of malignancy. [8][9][10] According to the SFE, the majority of thyroid incidentalomas require simple surveillance. The indications for thyroid surgery are rare, limited to nodules demonstrated to be malignant on preoperative investigations and very large or retrosternal nodules, 10 symptomatic or unsightly goitre or goitre accompanied by low TSH. Despite existing international and national recommendations, much concern is being currently raised about overdiagnosis and an excess of thyroidectomy, which may result in harms to the patients. 7 The objective of this observational study was to analyse the care pathway of patients prior to thyroidectomy in France during the year 2010 and to study the impact of surgery on postoperative morbidity and mortality in the Sniiram (French National Health Insurance Information System) database, a largely published and nationwide comprehensive administrative database of about 56 million people based on medical reimbursement data. 11
Information system and population
In France, the Sniiram is an anonymous, individual database concerning all the beneficiaries of the various national health insurance schemes. [11][12][13] Medical insurance is mandatory and is provided by the State for low-income people. Many published studies have been based on the Sniiram, which stands among the largest medico-administrative databases worldwide and is widely used to guide public health policies in France, as these data allow the systematic follow-up of all medical care received by the population. 12 13 It exhaustively records all reimbursed prescriptions and outpatient services and procedures, as well as their dates, over the previous 3 years plus the current year. Identification of medicinal products is based on the Anatomical Therapeutic Classification (ATC) code, that of laboratory examinations is based on the national laboratory test coding table, and that of procedures is based on the Classification Commune des Actes Médicaux (CCAM; common classification of medical procedures). The Sniiram does not contain any clinical data concerning the results related to prescriptions or examinations, but it nevertheless includes information on the possible presence of long-term diseases (LTDs), such as cancers, eligible for 100% reimbursement of healthcare expenditure following approval by a national health insurance physician. These LTDs are coded according to the International Classification of Diseases (ICD-10). A unique and anonymous identification number for each person also allows integration into the Sniiram database of the hospital discharge database (PMSI, Programme de médicalisation des systèmes d'information). The principal diagnoses and associated diagnoses recorded in the PMSI are coded according to the ICD-10, and the procedures performed, such as thyroidectomies, are coded according to the CCAM.
In 2010, the national health insurance general scheme (excluding local mutualist sections that provide medical insurance for, eg, students and teachers) covered about 77% of the 65 million inhabitants in France, including low-income people, and was the only scheme for which both vital status and LTDs were comprehensively recorded at that time. Data for the health insurance general scheme beneficiaries who underwent thyroid surgery in 2010 were extracted from the Sniiram database. The diagnoses recorded during the hospital stay, and the clinical examinations and complementary investigations performed 1 year before and 1 year after surgery, estimated from reimbursement data, were analysed. In order to establish an estimate for France as a whole, the numbers of general scheme beneficiaries undergoing thyroid surgery were extrapolated (by age group and gender) to the national estimates provided by Insee (Institut national de la statistique et des études économiques) for the total population of France in January 2011. For the purposes of regional comparisons, regional rates of the general scheme were standardised for the age and gender structure of the Insee total population of France in January 2011.
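The age-and-gender extrapolation described above amounts to re-weighting stratum-specific counts by the ratio of the national to the scheme population. The Python sketch below illustrates the arithmetic on two invented strata; neither the stratum definitions nor the numbers come from the study.

```python
# Illustrative direct extrapolation: scheme counts scaled to the national
# (Insee) population within each age-gender stratum. All figures invented.
def extrapolate(cases_by_stratum, scheme_pop, national_pop):
    """Sum stratum counts re-weighted to the national population."""
    return sum(cases_by_stratum[s] * national_pop[s] / scheme_pop[s]
               for s in cases_by_stratum)

cases    = {("F", "40-59"): 9000, ("M", "40-59"): 2200}
scheme   = {("F", "40-59"): 8.0e6, ("M", "40-59"): 7.5e6}
national = {("F", "40-59"): 10.4e6, ("M", "40-59"): 9.7e6}
print(round(extrapolate(cases, scheme, national)))  # national estimate
```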
Patients who had undergone thyroidectomy were classified into four exclusive groups according to the type of thyroid disease. The first group was composed of patients who had a diagnosis of thyroid cancer recorded in the databases. This group included patients with ICD-10 codes for malignant neoplasm of thyroid gland (C73, D09.3), neoplasm of uncertain behaviour of thyroid gland (D44.0), hypersecretion of calcitonin (E07.0) or multiple endocrine adenomatosis (D44.8) coded during the hospital stay for thyroidectomy or in the LTD coding, and patients who underwent lymph node dissection or radioiodine therapy without a diagnosis of hyperthyroidism.
The second group was composed of patients who had a recorded diagnosis of benign nodule according to ICD-10 codes for non-toxic single thyroid nodule (E04.1), benign nodule of the thyroid gland (D34) or benign tumour of other and unspecified endocrine glands (D35.7, D35.8, D35.9) during the hospital stay.
The third group was composed of patients who had a recorded diagnosis of multiple nodules or goitre during the hospital stay. Finally, the fourth group comprised patients who had another recorded diagnosis, especially head and neck cancer and hyperthyroidism. These patients were excluded from the subsequent analysis, as thyroidectomy was simply an associated procedure or was performed for hyperthyroidism.
Definitions and statistical analysis
The care pathway was analysed over rolling years, 12 months before and 12 months after the date of thyroidectomy. A thyroidectomy frequency ratio was calculated between group 1 (cancer) and group 2 (benign nodule), overall and by region. In order to study regional variability, data were standardised for the age and gender of the population of beneficiaries on 31 December 2010. We compared the lowest to the highest value among 25 French regions using χ2 tests. The 26th region (Guyana) was excluded due to a small number of cases (7 cases of cancer and 19 benign nodules) and the different care pathways occurring in this rural region. Drug treatments were identified by the presence of at least three reimbursements over the 12-month period before and then the 12-month period after hospitalisation. Thyroid ultrasonography, fine-needle aspiration cytology and scintigraphy were identified by the presence of specific codes, whether they were performed in a hospital outpatient department or in private practice. However, procedures performed during a public hospital stay were not systematically coded at that time, possibly resulting in missing data. Similarly, laboratory tests performed during a public hospital stay were not identified, as they are not reimbursed individually. Reimbursements for hospital outpatient and private practice endocrinology visits and visits to an ear-nose-throat (ENT) specialist were taken into account. There again, ambulatory visits to a public hospital specialist were not systematically coded at that time, possibly resulting in missing data. Postoperative sick leave allowances were also taken into account.
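The regional comparison of the cancer-to-benign-nodule mix can be checked with a standard chi-squared test. The sketch below uses scipy on a hypothetical 2x2 table whose counts are chosen only to mimic the extreme ratios reported in the Results (0.5 and 2.6); it is not the study's data.

```python
# Chi-squared comparison of the lowest- vs highest-ratio regions
# (rows: region; columns: thyroidectomies with cancer / with benign nodule).
from scipy.stats import chi2_contingency

table = [[120, 240],   # hypothetical region with a cancer:benign ratio of 0.5
         [260, 100]]   # hypothetical region with a ratio of 2.6
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")
```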
Complications were identified during the thyroidectomy hospital stay and over the following year from hospitalisation and/or ambulatory reimbursement data. Severe complications were defined by the development of compressive haematoma during the thyroidectomy hospital stay, death (in-hospital or during the first month) or the presence of a CCAM procedure supposedly related to thyroidectomy (tracheobronchial stent, tracheotomy, arytenoidectomy, etc). Readmissions for thyroid problems or for phosphorus-calcium imbalance were also identified. Various indicators that could also constitute markers of late ENT complications were constructed: at least two visits to an ENT specialist, at least one visit to a speech therapist, laryngeal function tests looking for recurrent laryngeal nerve injury, laryngoscopy, etc, over the 12-month period. Hypoparathyroidism was suspected by the presence of at least three serum calcium assays and at least three deliveries of calcium supplements over the period ranging from 4 to 12 months after surgery (to avoid selecting very transient hypoparathyroidism), or the presence of hospitalisation with a diagnosis of hypoparathyroidism over the 12-month period.
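The hypoparathyroidism indicator defined above is essentially a counting rule over dated reimbursement events. The following Python sketch shows one possible implementation; the function and field names are hypothetical, and the 4-to-12-month window is approximated in days.

```python
# Sketch of the claims-based flag: >= 3 serum calcium assays AND >= 3 calcium
# supplement deliveries between month 4 and month 12 after thyroidectomy.
from datetime import date, timedelta

def suspected_hypoparathyroidism(surgery, calcium_assays, supplement_deliveries):
    lo = surgery + timedelta(days=4 * 30)   # approx. start of month 4
    hi = surgery + timedelta(days=365)      # end of month 12
    count = lambda events: sum(lo <= d <= hi for d in events)
    return count(calcium_assays) >= 3 and count(supplement_deliveries) >= 3

s = date(2010, 3, 1)
assays = [s + timedelta(days=d) for d in (150, 200, 260, 300)]
supps  = [s + timedelta(days=d) for d in (160, 210, 280)]
print(suspected_hypoparathyroidism(s, assays, supps))  # True
```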
Finally, the number of patients who had undergone thyroidectomy and those with LTDs for thyroid cancer from 2010 to 2013 were analysed in order to estimate temporal trends.
Statistical analyses were performed with SAS software (SAS Enterprise Guide, V.4.3, SAS Institute, Cary, North Carolina, USA). Analyses of the Sniiram database have been approved by the French personal data protection agency (Commission Nationale Informatique et Libertés). Since the Sniiram database is anonymous, no other ethical approval was required for this study.
RESULTS
Among 50 million people insured under the health insurance general scheme (77% of the French population), 35 367 underwent thyroid surgery in 2010, that is, by extrapolation based on the age and gender structure of the French population, about 45 800 people in the overall French population. Patient characteristics and characteristics of the surgical procedures performed are reported in table 1.
Patients with a diagnosis of multiple nodules or goitre represented the largest subgroup (38% of patients), followed by those with a benign nodule (20%) and those with thyroid cancer, regardless of the stage (17%). One-quarter of patients had another type of diagnosis, mainly head and neck cancer or hyperthyroidism. Each of these subgroups comprised 80% women, with a mean age of 51 years. Total thyroidectomy (or completion thyroidectomy) was performed in about 89% of patients with a postoperative diagnosis of thyroid cancer and 86% of cases of multiple nodules or goitre, while partial thyroidectomy was performed in 71% of patients with benign nodules. Patients were more frequently operated on in private or public hospitals with high thyroid surgery rates, especially when they had a diagnosis of thyroid cancer or multiple nodules or goitre.
The rate of thyroidectomy with a diagnosis reported as thyroid cancer, nodule or goitre in 2010 (excluding thyroidectomies for head and neck cancer or hyperthyroidism) was 5.3 per 10 000 inhabitants, and the standardised rates varied across regions between 4.0 and 8.1 per 10 000 inhabitants (p=0.003), as shown in figure 1.
In patients 20 years and older, the ratio of the number of thyroidectomies with a diagnosis of cancer over the number of thyroidectomies with a diagnosis of benign nodule was 0.8. This ratio varied between regions from 0.5 in Basse-Normandie, Bretagne, Limousin and Languedoc-Roussillon to 2.6 in Nord-Pas-de-Calais, as shown in figure 2. The percentage of thyroidectomies with a diagnosis of cancer over the total number of thyroidectomies with a diagnosis of cancer or benign nodule varied significantly from 28% to 69% (p=0.001).
During the year preceding thyroid surgery, healthcare varied according to the group. Eighty per cent of patients of group 1 (n=5979) who finally had a diagnosis of thyroid cancer had evidence of investigation by thyroid ultrasonography and 44% by fine-needle aspiration cytology (table 2) prior to surgery. In group 2 (n=7270), corresponding to patients who had a recorded diagnosis of benign nodule, the fine-needle aspiration cytology rate was 34%.
Among people with thyroidectomy and a diagnosis of cancer or benign nodule, the overall fine-needle aspiration cytology rate was 39%, and the standardised rates varied across regions between 11% (Franche-Comté) and 53% (Ile-de-France; p=0.001, figure 3).
In the three groups of patients, TSH assays had been performed in about 90% of patients, T4 assay in more than 63%, T3 assay in more than 35% and thyroid scintigraphy in more than 18%, prior to surgery. Serum calcitonin assay had been performed in 44% of patients and serum calcium assay in 58% of patients who finally had a diagnosis of thyroid cancer. These proportions were 39% and 50%, respectively, for patients with a diagnosis of benign nodule. Less than one-half of patients, regardless of their thyroid disease, were referred to an endocrinologist. Finally, neither thyroid ultrasonography nor fine-needle aspiration cytology was performed in 10% of patients, and no T3, T4 or TSH assay was performed in about 9% of patients in groups 1 and 2.
The fine-needle aspiration cytology rate varied according to the region and was probably related to the availability of doctors able to perform this technique and cytopathologists. Among the patients who had undergone surgery and had a diagnosis of thyroid cancer or benign nodule, the fine-needle aspiration cytology rate was 53% in patients from Ile-de-France and Rhône-Alpes, but only 10% in those from Franche-Comté (and 0% in Guyana for only 28 patients undergoing surgery). The regional rates of fine-needle aspiration cytology were significantly correlated with the regional rates of thyroidectomy (Spearman correlation coefficient test: r=0.48, p=0.034).
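The reported association between regional fine-needle aspiration rates and thyroidectomy rates (Spearman r=0.48, p=0.034) is a rank correlation that is straightforward to reproduce. The sketch below uses scipy on invented regional vectors; only the method, not the data, reflects the study.

```python
# Spearman rank correlation between regional fine-needle aspiration cytology
# rates and thyroidectomy rates (hypothetical stand-in values).
from scipy.stats import spearmanr

fna_rate = [11, 20, 25, 30, 35, 40, 45, 53]                    # % with cytology
thyroidectomy_rate = [4.2, 4.0, 5.1, 5.5, 5.0, 6.4, 7.0, 8.1]  # per 10 000
rho, p = spearmanr(fna_rate, thyroidectomy_rate)
print(f"rho={rho:.2f}, p={p:.3f}")
```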
During the 12 months following surgery, TSH assay was performed in almost all patients (table 3). T3 assay was performed in 51% of patients with thyroid cancer, while the rate of total thyroidectomy in this group was 76%. T3 assay was also performed in 26% of patients with a diagnosis of benign nodule, but the rate of partial thyroidectomy in this group was 71%. Thyroid hormone replacement therapy was administered to all patients who had undergone total thyroidectomy and in 44% of patients who had undergone partial thyroidectomy and had a diagnosis of benign nodule. The endocrinologist referral rate remained low: 56% of patients with thyroid cancer and 34% of patients with benign nodule consulted an endocrinologist. The mean duration of sick leave for employed patients was 89 days for patients with thyroid cancer and 38 days for patients with benign nodules. Sick leave lasted more than 3 weeks in 60-81% of cases, depending on the group.
Severe complications of thyroid surgery were rare (table 4). About 20 patients, 14 in the cancer group and <10 in the multiple nodules and goitre group (none in the benign nodule group), died in hospital; another 11 patients died during the first 30 days after surgery, that is, an overall short-term mortality of about 30 patients. The cause of death is not indicated in these administrative databases. One hundred patients, 25 in the cancer group and 75 all together in the benign nodule group and the goitre group, experienced postoperative compressive haematoma requiring reoperation. This compressive haematoma rate (0.4%) did not appear to be related to the underlying thyroid disease or to the type of surgical procedure performed, such as radical thyroidectomy or lymph node dissection. The late complication rate was estimated by the number of specialist visits or procedures, or the readmission rate during the year following surgery. Patients with more than two ENT or speech therapy visits and patients in whom a laryngeal procedure was performed were considered to have experienced a laryngeal complication. The late complication rate varied according to the group from 17% to 23%, and the recurrent laryngeal nerve injury rate varied from 2.3% to 1.2%. Patients in whom more than three serum calcium assays were performed and to whom calcium supplements were dispensed more than three times during the 4th to the 12th month following surgery were considered to suffer from persistent hypoparathyroidism. This hypoparathyroidism rate ranged from 5.7% for the thyroid cancer group to 1% for the nodule group. Among people with a diagnosis of benign nodule, a marker of hypoparathyroidism was recorded in 10 persons (0.2%) who underwent partial thyroidectomy and 63 (3%) of those with total or subtotal thyroidectomy. The readmission rate in the thyroid cancer group was higher for hypocalcaemia than for hypercalcaemia: 1.2% and 0.2%, respectively.
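For rare events such as the reoperated compressive haematomas, an exact binomial confidence interval conveys the uncertainty around the 0.4% figure. The sketch below uses statsmodels; the denominator (~26 500 patients in the cancer, nodule and goitre groups) is an approximation inferred from the cohort size, not a number reported above.

```python
# Clopper-Pearson (exact) 95% CI for the compressive haematoma rate:
# 100 events among an approximate denominator of 26 500 patients.
from statsmodels.stats.proportion import proportion_confint

count, nobs = 100, 26_500
low, high = proportion_confint(count, nobs, method="beta")  # exact interval
print(f"rate={count / nobs:.2%}, 95% CI [{low:.2%}, {high:.2%}]")
```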
Between 2010 and 2012, the number of patients who had undergone thyroidectomy increased by 400 patients each year, from about 35 400 in 2010 to 36 200 in 2012, that is, a mean annual growth rate of +1.1%. This trend was reversed between 2012 and 2013, as the number of patients decreased by 900 patients to 35 300, that is, a growth rate of −2.6%. The number of patients with LTD 100% health insurance cover for thyroid cancer increased from 63 311 in 2011, to 65 401 in 2012 and 67 461 in 2013, that is, a mean annual growth rate of +3.2%. In parallel, between 2010 and 2013 among all general scheme beneficiaries, the number of thyroid ultrasonography examinations performed increased from 1.12 to 1.19 million (+2.2%/year) and the number of thyroid fine-needle aspiration cytology procedures increased from 89 000 to 98 000 (+3.4%/year), while the number of thyroid scintigraphies decreased from 66 000 to 58 500 (−3.8%/year). The number of TSH assays (alone or combined with other parameters) increased from 12.2 to 14.8 million (+7%/year), while the number of free T4 assays (alone or combined with other parameters) increased from 3.0 to 3.6 million (+7%/year), and the number of free T3 assays (alone or combined with other parameters) also increased from 1.0 to 1.3 million (+10%/year).
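The mean annual growth rates quoted above are compound rates. The one-line Python sketch below reproduces the arithmetic for the TSH assay figures (12.2 to 14.8 million over 3 years); the same function applies to the other series.

```python
# Compound (mean) annual growth rate, as used for the trends above.
def annual_growth(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

print(f"{annual_growth(12.2, 14.8, 3):.1%}")  # ~6.7%/year, quoted as +7%/year
```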
DISCUSSION
This observational and almost nationwide study, based on more than 35 000 patients, first demonstrates the suboptimal management of patients prior to thyroidectomy in France. The thyroidectomy rate with a diagnosis of benign nodule appears to be excessively high compared with the thyroidectomy rate with a diagnosis of thyroid cancer. Furthermore, these rates vary considerably from one region to another, documenting variations in clinical practices across the country. Fine-needle aspiration cytology before surgery for a suspicious thyroid nodule is performed in less than one-half of cases, although this procedure could spare a certain number of patients from surgery, which, as shown by this study as well as other studies, is not devoid of complications. Second, some examinations are performed too frequently, such as preoperative thyroid scintigraphy and T4 assay, and preoperative and postoperative T3 assay. Therefore, these data suggest that suboptimal management prior to thyroidectomy leads to overdiagnosis and potential harms to patients 7 as well as a lack of efficiency for the medical insurance system.
The volume of data collected from a database covering 77% of the French population allows us to analyse the healthcare pathway and evaluate its health impact. [11][12][13] Although guidelines 8-10 do not specify a maximum interval between preoperative assessment and thyroid surgery, the 1-year interval adopted for this study appears to be reasonable. Some non-surgical procedures considered necessary to the preoperative and postoperative care pathways may not have been identified from the Sniiram database when they were performed during a public hospital stay, as they are not systematically coded within the hospital when they do not provide higher funding to the hospital. However, these non-surgical procedures to investigate thyroid disease are rarely performed during a hospital stay. More importantly, the quality of this study relies on the quality of the surgical procedure coding performed in public or private hospitals, as these procedure codes are necessary in order to classify the four groups of thyroidectomies: cancer, benign nodule, multiple nodules or goitre, and other cases. Other data were therefore investigated, such as requests for LTD coverage for thyroid cancer and reimbursement for radioiodine therapy or lymph node dissection, to isolate cancer cases. Nevertheless, it is still possible that the group described as surgery with a diagnosis of benign nodule included a few cases of thyroid cancer, which are more likely to be microcancers. Our data set also does not provide information on cancer size. However, the incidence of thyroid cancers, and especially the incidence of micropapillary carcinomas, has been studied based on a French registry covering over 3000 cases of thyroid carcinomas and a population of over 4.6 million inhabitants. 6 During the period 1998-2000, half of the thyroid carcinomas diagnosed were smaller than 10 mm and a third were 5 mm or less. Between 1983-1985 and 1998-2000, the number of tumours smaller than 10 mm increased ninefold.
Despite a number of differences, all medical societies recommend TSH assay and thyroid ultrasonography as the basic work-up in patients with any form of thyroid disease, together with fine-needle aspiration cytology guided by the ultrasound features of the nodule. [8][9][10] Our analyses show that the thyroidectomy rate started to decline in France between 2012 and 2013 (−900 thyroidectomies), although the number of patients with 100% fee coverage for thyroid cancer as an LTD continued to increase. Therefore, there appears to be both an effect of, and a latency in, the diffusion of knowledge concerning the management of thyroid nodules.
TSH assay must be performed as the first-line assessment of thyroid disease, as its sensitivity allows the detection of all cases of thyroid dysfunction. 10 T4 assay should only be requested as a second-line test, together with T3 when TSH is low, and together with anti-thyroid peroxidase (TPO) antibody assay when TSH is high. 10 However, in 2010, T4 assay was performed in almost two-thirds of patients and T3 assay in more than one-third of patients prior to thyroidectomy for thyroid cancer or benign nodule. Similarly, thyroid scintigraphy is no longer indicated in euthyroid patients, 10 but it was performed prior to thyroidectomy in almost one in five patients who had thyroid cancer or a benign nodule.
On the other hand, thyroid ultrasonography is an essential part of the diagnostic work-up. 8-10 14 It was frequently performed in our study, but only in 80-84% of cases rather than 100% of cases, as other diagnostic examinations may have been performed: neck vessel ultrasonography, neck CT scan or MRI, for example, which also suggests the possibility of incidentalomas, that is, incidentally discovered nodules. The Tirads score, proposed in 2009, 15 can be used to evaluate the ultrasound risk of malignancy by taking six features into account. Based on a series of 4550 operated nodules, the sensitivity of the Tirads score to detect malignant lesions varied from 87% to 95%, with a negative predictive value of 99%, and an excellent interobserver reproducibility. 3 15 Therefore, fine-needle aspiration cytology is recommended for all nodules associated with a high-risk context according to the Tirads score, all suspicious nodules or nodules larger than 2 cm, before deciding on the indication for surgery. 10 However, fine-needle aspiration cytology was performed in less than one-half of patients operated with a diagnosis of thyroid cancer (44%) or benign nodule (34%).
The percentage of patients receiving postoperative levothyroxine replacement therapy appears to be consistent with the surgical procedure performed, as this rate was 99% among patients who had undergone total thyroidectomy. Levothyroxine therapy rates after partial thyroidectomy were 73% in the thyroid cancer group and 44% in the nodule group. The expected hypothyroidism rate after partial thyroidectomy has been estimated by others to be 11%, 16 but this figure must be adjusted to the volume of thyroid parenchyma left, which is not available in our study. T4 assay was performed at least once in more than two-thirds of patients. At least one T3 assay was performed in more than one-fourth of patients after surgery. Although a proportion of patients require adjustment of thyroid hormone replacement therapy, T3 assay appears to be inappropriate and generates an excess cost.
This excess cost must be added to sick leave allowances received by employed patients. Guidelines for practitioners were introduced in France in 2010 to recommend sick leave of 10-15 days after thyroidectomy. In our study, more than 85% of patients received sick leave allowances for more than 14 days and more than 60% received sick leave allowances for more than 21 days. However, the specific reasons for extension of sick leave (ie, difficulties adjusting replacement therapy, complications of surgery or other causes) are not recorded in the Sniiram database.
The postoperative or short-term mortality rates and the rate of compressive haematoma requiring reoperation, 0.1% and 0.4%, respectively, are situated in the lower end of the ranges reported in the literature. The postoperative bleeding rate reported in the literature ranges from 0% to 6.5%. 17 18 In 2014, Weiss et al 19 reported a compressive haematoma rate of 1.25% in a series of 150 012 patients. Compressive haematoma is associated with a 2.9-fold increased risk of mortality, as mortality rates were 1.34% in the presence of haematoma versus 0.32% for the overall group. Some authors consider that thyroid cancer is associated with an increased risk of haematoma, 20 but such an association was not observed in our cohort. Compressive haematoma can be life-threatening and requires emergency decompression. It can also occur beyond the sixth postoperative hour. The Association Francophone de Chirurgie Endocrinienne (AFCE) has recently recommended that total thyroidectomy should not be performed as an ambulatory procedure. 21 Hypoparathyroidism and recurrent laryngeal nerve injuries were the most common complications of thyroidectomy. In the present study, the estimated hypoparathyroidism rate between 4 months and 1 year after surgery was 6% in the thyroid cancer group and 1% in the benign nodule group, on the basis of more than three serum calcium assays and more than three deliveries of calcium supplements. However, it increased up to 3% in the benign nodule group when total thyroidectomy was performed. The recurrent laryngeal nerve injury rate is more difficult to estimate: 9% and 4% of patients in the two groups attended speech therapy sessions, and 23% and 17% attended at least two ENT visits or speech therapy sessions or underwent functional testing for recurrent laryngeal nerve injuries during the first 12 postoperative months. The complication rate depends on the time since the operation, the mode of detection and the definition of postoperative complications. For example, Duclos et al, 22 using a serum calcium cut-off of 2 mmol/L, reported postoperative hypoparathyroidism and permanent hypoparathyroidism rates of 25.9% and 2.7%, respectively. The unilateral or bilateral recurrent laryngeal nerve injury rate varies according to the time since the operation, 2.3% after 1 year versus 9.8% immediately postoperatively, and the mode of detection, <2% without specific examination versus more than 6% when indirect laryngoscopy is performed. 23 Our estimations are based on an indirect approach, which can overestimate or underestimate the true complication rate.
CONCLUSION
With more than 35 400 general scheme beneficiaries (or about 45 800 nationwide) who underwent surgery in 2010, and 35 300 in 2013, thyroidectomy is one of the surgical procedures most commonly performed in France. The thyroidectomy rate with a diagnosis of benign nodule appears to be excessively high compared with the thyroidectomy rate with a diagnosis of thyroid cancer. This assessment is likely to be shared by many European countries. 7 Partial compliance with guidelines prior to thyroidectomy, especially the low rate of fine-needle aspiration cytology, indicates the need for large-scale diffusion of current guidelines and clinical practice evaluation by all professionals involved in the care pathway.
In 2015, the French national health insurance initiated several actions to reduce potential harms from overdiagnosis and overtreatment. Specific booklets have been developed for general practitioners and patients. 24 Dedicated visits to general practitioners, specialists and surgeons are being carried out. Public or private hospitals with a low ratio of thyroidectomies with a diagnosis of cancer to thyroidectomies with a diagnosis of benign nodule are monitored.
Active surveillance of microcarcinomas with rigorous patient selection is also being discussed among international experts. 25 It involves a psychological cost to the patient and a financial cost to society due to the annual follow-up (medical visit, thyroid and lymph node ultrasound scan) and should be discussed with each patient. Some French experts recommend performing fine-needle aspiration of a micronodule in case of suspected lymph node or extrathyroid involvement, a documented increase in nodule size, a micronodule located on the isthmus or in the upper third of the thyroid, or a patient aged <40 years. An international review of the relevance of thyroidectomy and assessment of the long-term risk of microcancers is necessary in view of changing international clinical practices. 7 25
"year": 2017,
"sha1": "b89d9f46b0b09be99eaaa8c3e4808ccbd6b68e48",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/7/4/e013589.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b89d9f46b0b09be99eaaa8c3e4808ccbd6b68e48",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Oral and Cutaneous Manifestations of COVID-19 in Pediatric Patients
COVID-19 emerged in December 2019 in Wuhan City, China, and on March 11, 2020 it was classified by the World Health Organization as a pandemic. The infection ranges from asymptomatic to severe respiratory disease. Although more prevalent in adults, it is also observed in children, in whom extrarespiratory symptoms such as oral and cutaneous manifestations may occur. This literature review aims to report the oral and cutaneous manifestations of COVID-19 in pediatric patients. The bibliographic search was carried out in the PubMed, SciELO and Bireme databases on August 1, 2020, using the MeSH terms "COVID-19", "Child", "Oral Manifestations", "Skin Manifestations", "Ageusia" and "Dysgeusia" and the corresponding DeCS descriptors, complemented by a manual search, without language restriction. The stages of search, screening, selection, evaluation of studies and data extraction were performed by four independent reviewers. Nine studies that met the eligibility criteria were identified. The most cited oral manifestation was taste dysfunction in adolescents, and the most cited cutaneous manifestation was erythematous eruption on the extremities and trunk. Health professionals should be aware of these manifestations; however, as this is a recent topic in the literature, more rigorous studies with greater strength of evidence still need to be performed. Indexing terms: Coronavirus Infections. Child. Oral manifestations.
INTRODUCTION
In December 2019, in the city of Wuhan, China, the disease that would become a pandemic began to spread. It is caused by SARS-CoV-2, a pleomorphic RNA virus with a crown-like appearance under the microscope. Named COVID-19 by the World Health Organization (WHO), the disease spread rapidly around the world [1], reaching 187 countries between December 2019 and July 2020 [2]. In Brazil, by August 3, 2020, there were 2,733,677 confirmed cases and 94,104 deaths from the disease [3].
The clinical picture ranges from asymptomatic infection to severe respiratory disease and may include extrarespiratory manifestations affecting the cardiovascular, gastrointestinal, renal, hepatobiliary, endocrine, dermatological and nervous systems, as well as the eyes and conjunctiva [4,5]. Initially, COVID-19 was more prevalent in adults, with non-specific symptoms such as fever, cough, myalgia, dyspnea and diarrhea [6,7]. However, it is also observed in children, with disease severity being evident in term and preterm infants [8]. Among the extrarespiratory manifestations described as important symptoms of COVID-19 are cutaneous and oral manifestations, including decreased taste (dysgeusia) or loss of taste (ageusia) [5]. These signs and symptoms represent a challenging differential diagnosis.
Thus, knowing what the literature reports and identifying what can help dentists understand the symptoms of the disease is relevant, so that practitioners can remain attentive to COVID-19 in their clinical practice. This study aims to carry out a review of the literature on oral and skin manifestations in pediatric patients diagnosed with COVID-19.
METHODS
The literature review was carried out based on the formulation of the following guiding question: Can oral and skin manifestations be observed in children and adolescents diagnosed with COVID-19?
The bibliographic search was conducted on August 1, 2020 in the databases of the National Library of Medicine, National Institutes of Health (PubMed), the Scientific Electronic Library Online (SciELO) and the Biblioteca Virtual em Saúde (Virtual Health Library, Bireme). The steps of search, screening, selection, study evaluation and data extraction were performed independently by four reviewers, using controlled keywords from the Medical Subject Headings (MeSH) database ("COVID-19"; "Child"; "Oral Manifestations"; "Skin Manifestations"; "Ageusia"; "Dysgeusia") and the corresponding descriptors from Bireme. A manual search of the references of the included articles was also performed.
Inclusion criteria: randomized clinical trials, observational studies and literature reviews published as of December 2019, without language restriction, that respected the PECO framework (P = patients under 18 years old; E = exposure to the SARS-CoV-2 coronavirus; C = not applicable; O = oral manifestations, dysgeusia, ageusia and skin manifestations). Exclusion criteria: letters to the editor, abstracts, articles to which full access was not possible, and absence of the researched outcomes.
Identification and selection of studies
After the searches were completed, 29 publications were identified in the databases, as described in table 1.
[Table 1 presents, for each database (PubMed, SciELO and Bireme), the search strategy used and the number of articles retrieved.]
Among the studies analyzed, nine met the eligibility criteria. For the synthesis of the articles, data extraction was performed considering the following category variables: author/year, study design, population (sample/age), objectives/methods, oral manifestations, skin manifestations, conclusions (table 2).
DISCUSSION
The SARS-CoV-2 virus has infected more and more people around the world. Knowledge of the signs and symptoms of COVID-19 is important for early diagnosis, since the virus can be transmitted even in the absence of significant symptoms [32]. In children, the disease tends to be milder, and extrarespiratory signs suggestive of an association with the virus can be observed [13,15,22,25,29-32]. For this association to be proven, factors such as drug reactions, blood disorders and manifestations of other viral diseases, for example dengue, rubella and measles, must be ruled out [29].
In this review, we found cutaneous manifestations and taste disorders that the authors associated with COVID-19. Most of the articles retrieved are case reports [15,22,25,30-32]; therefore, the CARE checklist (available at: care-statement.org/checklist) was applied to assess their methodological quality, revealing non-conformity with some of the criteria that characterize a case report. This highlights limitations of the studies and a possible increase in the risk of methodological bias. In many cases, for example, patients were not tested or tested negative for COVID-19, or different tests were used at different times after symptom onset, making it difficult or impossible to draw firmer conclusions about the manifestations. Even so, these studies point out the importance of the theme.
Considering the cutaneous manifestations of COVID-19, their clinical characteristics are markedly heterogeneous. The pathogenic mechanisms are still unknown, although hyperactivity of the immune response and microvascular injury are believed to be involved [33]. The main findings described in the studies were erythematous lesions and urticaria [13,15,29-31,33], located mainly on the extremities and trunk [13,15,30-33]. However, other, less frequent signs and symptoms must also be watched for, in order to promote early diagnosis and avoid complications during the course of the disease. The need for differential diagnosis from other systemic infections and their manifestations must also be kept in mind.
With regard to oral manifestations, gustatory dysfunction was found in two reports, both involving adolescents aged 15 to 17 years [22,25]. The mechanism is still poorly understood; there are indications that it is related to inflammation of the chemoreceptors and that phenotypic factors interfere, but studies on the subject remain scarce. It is worth mentioning that gustatory dysfunction may be the first symptom of COVID-19 [25]. In Brazil [34], this manifestation in association with influenza-like syndrome is a criterion for performing the COVID-19 test. This information is important for the observation and management of cases in which loss of taste (ageusia) or decreased taste (dysgeusia) is reported. Careful evaluation by the health professional of the extrarespiratory manifestations that may occur, especially in children and adolescents, is necessary for the early identification of infection with the new coronavirus.
Multisystem inflammatory syndrome has been observed in children, with symptoms that are similar to and overlap with Kawasaki disease (KD). A systematic review of the manifestations of COVID-19 in children identified that, of 17 patients (out of 114), 12 (70.6%) had symptoms of KD [35]. KD is a systemic vasculitis of unknown etiology, common in children under 5 years old but rare after the age of eight. Its cutaneous manifestation is a polymorphic rash, and it presents, in addition to other symptoms, oral manifestations such as erythema and edema of the tongue, lips, oral mucosa and lingual papillae, and cracked lips. KD has been identified simultaneously with, or shortly after, confirmation of COVID-19 [36-38].
Despite the paucity of consistent and readily comparable data, and in the face of such a diagnostic challenge, the existing publications can serve as an alert for health professionals until new studies can effectively demonstrate whether a cause-and-effect relationship exists.
The role of the oral health professional must be emphasized, especially that of pediatric dentists involved in the screening process, who can contribute by guiding patients whenever they identify any changes.
CONCLUSION
This review addressed the studies available in the literature on the oral and cutaneous manifestations of COVID-19 in pediatric patients. Erythematous rashes on the extremities and trunk were the most common cutaneous finding, and gustatory dysfunction reported by adolescents stood out among the oral manifestations. However, studies remain scarce, and several issues are still to be clarified. More rigorous studies must be carried out, as this is still a recent topic in the literature whose cause-and-effect relationship is not well established. In addition, the information presented may change as the disease course caused by the new coronavirus evolves and new studies are performed.
Collaborators: MF MORAES, YR NATALINO, AF HOLANDA and HF SOUZA SOBRINHO: bibliographical research, data extraction, writing and approval of the final version. LC SARMENTO: review of the manuscript and final approval. APM GOMES: participation in the writing of the methodology, review of the manuscript and final approval. LF SANGLARD: study design, data analysis, interpretation of results, writing, review of the manuscript and final approval.
"year": 2021,
"sha1": "abe1dfba8e65f346be1bdd58793d93544e931e01",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rgo/a/SyHvtQ5csRLHxShp8tDYhmy/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3157612ba72e8ef9d12572ae97ab4fd1027b63d5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Dynamic Semantics for Causal Counterfactuals
Under the standard approach to counterfactuals, to determine the meaning of a counterfactual sentence, we consider the “closest” possible world(s) where the antecedent is true, and evaluate the consequent. Building on the standard approach, some researchers have found that the set of worlds to be considered is dependent on context; it evolves with the discourse. Others have focused on how to define the “distance” between possible worlds, using ideas from causal modeling. This paper integrates the two ideas. We present a semantics for counterfactuals that uses a distance measure based on causal laws, that can also change over time. We show how our semantics can be implemented in the Haskell programming language.
Introduction and background
The problem of modeling counterfactual statements and situations has drawn much attention, in computer science, linguistics, and other disciplines. In addition to its intrinsic interest, counterfactual reasoning is important for artificial intelligence systems to be able to handle novel situations (Pearl and Mackenzie, 2018).
The classic approach to counterfactuals in linguistics and philosophy is based on a possible-worlds semantics (Lewis, 1973;Stalnaker, 1968;Kratzer, 1981). To evaluate a counterfactual, we examine a possible world where the antecedent is true, and evaluate the consequent. For example, let us consider the following classic example from Lewis (1973): (1) If kangaroos had no tails, they would topple over.
In the actual world, kangaroos have tails, but we can think of a possible world in which they do not, and consider whether they topple over in that world. However, not all possible worlds should be considered. We can consider a world in which kangaroos have no tails, but use crutches, and perhaps they would not topple over in that world. But in the actual world, kangaroos do not use crutches, so why should we consider those worlds in which they do? We therefore only consider the "closest" possible worlds to the actual world, according to some distance metric or ordering of worlds.
Formally, we have an accessibility relation R, such that R(w, w′) is true if and only if w′ is sufficiently similar to w. This defines for each world w a context, or modal horizon, consisting of those worlds w′ such that R(w, w′) (von Fintel, 2001). A counterfactual φ > ψ is true in a world if and only if ψ is true in all the worlds in the modal horizon where φ is true.
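In symbols (our notation, merely restating the definition just given):

    w ⊨ φ > ψ   iff   ∀w′ [ (R(w, w′) ∧ w′ ⊨ φ) → w′ ⊨ ψ ]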
Von Fintel (2001) provides evidence that this context changes over time, by considering sequences of counterfactuals. Briefly, if there are no worlds in the modal horizon where the antecedent φ is true, the modal horizon expands until it includes those φ-worlds most similar to the current world. However, after the counterfactual has been evaluated, the accessibility relation does not revert to its previous state. For example, consider the following sequence of counterfactuals (a Lewis-Sobel sequence): (2) If kangaroos had no tails, they would topple over.
If kangaroos had no tails but used crutches, they would not topple over.
In the closest possible worlds in which kangaroos have no tails, they do not use crutches, and do topple over. However, in the closest worlds in which kangaroos both have no tails and use crutches, they do not topple over. The above sequence makes sense. But the next sequence of counterfactuals, with the order of the sentences reversed (a reverse Sobel sequence), is semantically infelicitous: (3) If kangaroos had no tails but used crutches, they would not topple over.
#If kangaroos had no tails, they would topple over.
The first sentence expands the modal horizon to include worlds in which kangaroos have no tails and use crutches. Once we have introduced worlds in which kangaroos use crutches, we cannot subsequently forget about them when thinking of worlds where they have no tails. Therefore, when evaluating the second sentence, we must consider all worlds in the modal horizon where kangaroos have no tails, including both worlds in which they do and those in which they do not use crutches. In some of these worlds, they topple over, and in others, they do not.
In the classic possible-worlds approach to counterfactuals, the notion of distance or similarity between worlds is deliberately left underspecified. However, a computational implementation of counterfactuals must specify the distance metric to be used. Let us consider a possible world to be characterized by the "facts" true in that world (Kratzer, 1981). Given two worlds that differ from the actual world in the same number of facts, which one is closer? Pollock (1976) suggests that "subjunctive generalizations" are more important than other facts, while Kratzer (1981) suggests that certain facts should be "lumped" together. For example, if one looks in a mirror, one would expect to see their reflection, even if it is not currently visible (because they are not currently looking in the mirror). In other words, the facts "one looks in a mirror" and "one sees their reflection" should be lumped together: if the truth value of one fact changes, the truth of the other should change as well.
A related idea from Pearl (2000) is that the distances between worlds rely on the notion of cause and effect. Specifically, worlds that differ in their causal laws are more distant than worlds whose laws are the same. If we say that looking in a mirror causes one to see their reflection, then among worlds where one looks in the mirror, those in which they see their reflection are closer to the actual world than those where they do not.
Pearl formulates causal laws in terms of structural equations. An equation a = f (b) denotes that, in a particular world, the value of a is dependent on the value of b. This allows us to reason about what the value of a would have been, if the value of b had been different. The set of structural equations, together with an enumeration of the variables, defines a causal model. While Pearl's framework cannot model all possible counterfactual sentences, others have extended the causal modeling approach to different types of counterfactuals (Briggs, 2012).
Causal modeling approaches to counterfactuals make use of interventions: changes in the causal model (Pearl, 2000). Specifically, to evaluate a counterfactual sentence, change the underlying model to make the antecedent true, and allow the change to propagate through the model. Then evaluate the consequent with respect to the new model. Briggs (2012), making connections between causal modeling and possible-worlds approaches, identifies causal models with possible worlds. Applying an intervention then corresponds to selecting the closest possible world where the antecedent is true.
In this paper, we present a semantics for counterfactual sentences that integrates causal reasoning with a dynamic semantics, such as that of Groenendijk and Stokhof (1991). Causal reasoning allows us to give an exact specification of the vague notion of "distance" between worlds, while a dynamic semantics allows us to analyze how the meaning of counterfactuals changes with context. The key idea connecting these two approaches is that causal laws can be encoded in an accessibility relation, and therefore a change in context is equivalent to an intervention in the causal model. We can formalize this using ideas from Alternating-time Temporal Logic with Intentions (ATL+I), a logic for strategic reasoning (Jamroga et al., 2005). We also present a computational implementation of our semantics in the Haskell programming language, available at https://github.com/klai12/dscc.
Our implementation is based on concurrent game structures, introduced by Alur et al. (2002) as an extension of Kripke structures to open (multi-agent) systems. A Kripke structure contains a set of possible worlds, a set of propositions, and a labeling function from worlds to sets of propositions true in those worlds (Kripke, 1963). Concurrent game structures add a set of players, where each player has, for each possible world, a non-empty set of moves available at that world. The transitions available from some world are determined by the moves taken by each player at that world.
We can formally assign types to the above components as follows. We take worlds, propositions, players, and moves to be primitive types World, Prop, Player, and Move, respectively. It will be convenient to also define a type Vector for move vectors, i.e., which move is taken by each player, as [(Player, Move)]. A concurrent game structure then consists of the following six components: the set of players, the set of possible worlds, the set of propositions, the labeling function, the move function, and the transition function. We now introduce the notion of a strategy. We adopt the definition in (Jamroga et al., 2005), as a function that, for a given player, maps each world to a non-empty subset of the moves available to that player at that world. Strategies therefore have type World -> [Move]. We can then define a "strategy function" σ as a non-empty subset of the move function, with type Player -> Strategy (or equivalently Player -> World -> [Move]), that specifies a strategy for each player. In ATL+I, because the strategies employed by each player restrict the set of moves from which the player will choose, and the transitions allowed from a world depend on the moves made by each player, the strategy function determines which transitions are allowed. The set of allowed transitions, in turn, forms an accessibility relation that depends on the strategies used by each player.
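To make this concrete, the six components and the strategy types can be sketched in Haskell; this is a minimal reconstruction under the type assignments above, and the field and constructor names are ours rather than the repository's actual identifiers.

    -- Minimal sketch of a concurrent game structure, assuming the
    -- primitive types named in the text; names are illustrative only.
    newtype World  = World String  deriving (Eq, Show)
    newtype Prop   = Prop String   deriving (Eq, Show)
    newtype Player = Player String deriving (Eq, Show)
    newtype Move   = Move String   deriving (Eq, Show)

    type Vector = [(Player, Move)]

    data CGS = CGS
      { players :: [Player]                  -- the set of players
      , worlds  :: [World]                   -- the set of possible worlds
      , props   :: [Prop]                    -- the set of propositions
      , label   :: World -> [Prop]           -- labeling function L
      , moveFn  :: Player -> World -> [Move] -- move function D
      , delta   :: World -> Vector -> World  -- transition function
      }

    -- a strategy maps each world to a non-empty subset of available moves
    type Strategy = World -> [Move]

    -- a strategy function specifies a strategy for each player
    type StrategyFn = Player -> Strategy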
To return to the setting of counterfactuals, we recall that in a dynamic semantics, the accessibility relation (or modal horizon) changes over time. Furthermore, using a causal modeling approach, the change in the accessibility relation is determined by an intervention in a causal model. Our proposal is to identify variables in a causal model with players in a concurrent game structure. Then we can use the strategy for a player to encode the structural equation for that variable, such that a change in strategy corresponds to an intervention in the causal model.
Example: Kangaroos, tails, and crutches
As an illustrative example, we will again consider the case of the kangaroos. Let us assume that kangaroos will topple over if and only if they have no tails and they do not use crutches; otherwise they will stay upright. Let Q, R, and S be Boolean variables corresponding to whether kangaroos have tails, use crutches, and topple over, respectively. Then we can write the structural equation S = ¬Q ∧ ¬R to encode this causal law. Now we can represent our scenario as a concurrent game structure. First, the set of players in our model is A = {Q, R, S}. Each variable in the causal model is a player in the concurrent game structure. Note that despite the use of the term "player", the players in our model are not agents, or even entities, for that matter; there are no players corresponding to "kangaroos", "tails", or "crutches".
Next we consider the space of possible worlds. We will introduce a possible world for each possible combination of moves the players can make. We will discuss the meanings of the different moves each player can make below; for now, we will say that players Q and R have two moves each (which we will call 0 and 1), and S has three moves (which we will call 0, 1, and x). Therefore, there are 2 × 2 × 3 = 12 possible worlds in our concurrent game structure. We will also say that each player has the same set of available moves at each world; i.e., for all worlds w, the move function D is specified by D(Q, w) = D(R, w) = {0, 1}, and D(S, w) = {0, 1, x}. We will label the possible worlds according to the moves made by each player to arrive at that world; e.g., w_10x is the possible world that results when Q makes move 1, R makes move 0, and S makes move x. The combination {(Q, 1), (R, 0), (S, x)} is then a move vector, and therefore we know that for all worlds w, the transition function δ(w, {(Q, 1), (R, 0), (S, x)}) = w_10x. We can calculate the other values of the transition function in the same way.
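For instance, the twelve worlds can be enumerated directly from the move sets; the following sketch uses our own encoding of the moves, which need not match the implementation:

    -- Enumerating the 12 possible worlds by the move vector that
    -- produces them, with M0, M1, MX standing for moves 0, 1, and x.
    data Move = M0 | M1 | MX deriving (Eq, Show)

    qMoves, rMoves, sMoves :: [Move]
    qMoves = [M0, M1]       -- D(Q, w) = {0, 1}
    rMoves = [M0, M1]       -- D(R, w) = {0, 1}
    sMoves = [M0, M1, MX]   -- D(S, w) = {0, 1, x}

    allWorlds :: [(Move, Move, Move)]
    allWorlds = [ (q, r, s) | q <- qMoves, r <- rMoves, s <- sMoves ]
    -- length allWorlds == 12, matching the 2 × 2 × 3 count above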
We have specified the possible moves for each player at each world, but what do the moves mean? Although our players are not agents in the conventional sense, we can nevertheless think of them as being able to "set" their own values. For all players, then, the move 0 sets its value to 0 in the next world, while 1 sets its value to 1.
The above moves are sufficient for those variables that are exogenous, i.e., those whose values are not dependent on the values of any other variables. In our scenario, Q and R are exogenous variables. For an endogenous variable such as S, whose value is dependent on the values of Q and R, it is not possible to represent the causal law governing S using only some combination of moves 0 and 1. The reason is that the value of S in the next world is dependent on the values of Q and R in the next world, not the current world. For endogenous variables, therefore, we introduce a third move x, which sets the value of the endogenous variable according to its structural equation. For example, the move x for player S sets the value of S in the next world to be equal to ¬Q ∧ ¬R. In summary, exogenous variables have two moves 0 and 1, while endogenous variables have a third move x.
The initial set of propositions is P = {q, r, s}. Our propositions correspond to valuations of each of the variables; e.g., q is true in those worlds where the value of Q is 1, etc. Where necessary, the values of endogenous variables can be calculated using their structural equations. For example, the value of S in w_10x is ¬1 ∧ ¬0 = 0 ∧ 1 = 0. The labeling function is then straightforward to calculate: L(w_000) = ∅, L(w_10x) = {q}, etc.
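The calculation of an endogenous value such as S can be written out explicitly; a small sketch, with Booleans standing in for the values 0 and 1:

    -- The structural equation S = ¬Q ∧ ¬R, with Booleans for 0/1.
    structS :: Bool -> Bool -> Bool
    structS q r = not q && not r

    -- In w_10x, Q = 1 and R = 0, so structS True False == False;
    -- S is 0 there, which is why q but not s appears in L(w_10x).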
Finally, we must specify our initial conditions: the initial strategies of each player. For player S, the strategy is to enforce the causal law S = ¬Q ∧ ¬R at each world. Therefore the strategy for S is simply λw.x: at all worlds w, make move x.
As for players Q and R, because they are exogenous variables, they do not have structural equations in Pearl's causal models (Pearl, 2000). However, we do not want to say that they have no strategies. As previously mentioned, when evaluating a counterfactual sentence, we only want to consider those worlds that are closest to the actual world. But in ATL+I, having no strategy means placing no restrictions on which worlds are accessible from the actual world (Jamroga et al., 2005). Intuitively, given a world with some value of Q, worlds with the same value of Q can be considered closer to that world than worlds with the opposite value, all else being equal. Therefore, one possible strategy for Q is to keep its value the same: make move 1 at worlds where Q = 1, and move 0 at worlds where Q = 0. The strategy for R can be similarly specified.
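A sketch of these initial strategies follows; `holds` is a hypothetical helper that reads a variable's value off a world (here simplified to a valuation, which the paper does not do), not a function from the paper:

    -- Sketch of the initial strategies under simplified types.
    data Move = M0 | M1 | MX deriving (Eq, Show)
    type World = [(String, Bool)]   -- a world reduced to a valuation
    type Strategy = World -> [Move]

    -- hypothetical helper: is variable v equal to 1 at world w?
    holds :: String -> World -> Bool
    holds v w = lookup v w == Just True

    -- exogenous Q: keep its value the same (move 1 at q-worlds, else 0)
    keepSame :: String -> Strategy
    keepSame v w = if holds v w then [M1] else [M0]

    -- endogenous S: always enforce its structural equation, i.e. λw.x
    enforceLaw :: Strategy
    enforceLaw _ = [MX]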
The dynamics of causal counterfactuals
Now we can describe the evaluation of counterfactual sentences in our framework. We translate sentences into formulas of type Form. In addition to the formulas of propositional and basic modal logic, we also include the formula scheme Str a strategy phi; these correspond to ATL+I sentences (str_a σ_a)φ.
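One plausible shape for Form is the following sketch; the constructor names are ours and may differ from those in the implementation:

    -- Sketch of the formula type: propositional logic, basic
    -- modalities, and the str-formula scheme.
    type Prop     = String
    type Player   = String
    type World    = String
    data Move     = M0 | M1 | MX
    type Strategy = World -> [Move]

    data Form
      = Atom Prop                  -- atomic proposition, e.g. q
      | Neg  Form                  -- negation
      | Conj Form Form             -- conjunction
      | Impl Form Form             -- material conditional
      | Box  Form                  -- necessity (the strict conditional)
      | Dia  Form                  -- possibility
      | Str  Player Strategy Form  -- (str_a σ_a) φ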
In ATL+I, it is the evaluation of str-formulas in which changes of strategy occur; in our framework, counterfactual sentences are translated into str-formulas for evaluation.
Formulas must of course be evaluated relative to some model. In addition, in a dynamic semantics, we must also keep track of the context. To do this, we make use of Haskell's state monad. We define the type Model of our states as a record type, that includes the current strategy function, as well as four components of our concurrent game structure: the sets of players and worlds, and the labeling and transition functions. Because of how we constructed our concurrent game structures above, the set of propositions and the move function can be inferred from the other components.
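In outline, the state type and the checker's signature might look as follows; this sketch assumes the Form type above and an mtl-style State monad, and the record fields (mirroring the four stored components plus the strategy function) are illustrative:

    -- Sketch of the evaluation state carried through the State monad.
    import Control.Monad.State (State, get)

    type Prop     = String
    type Player   = String
    type World    = String
    data Move     = M0 | M1 | MX
    type Vector   = [(Player, Move)]
    type Strategy = World -> [Move]
    data Form     = Atom Prop   -- remaining constructors as sketched above

    data Model = Model
      { mPlayers  :: [Player]
      , mWorlds   :: [World]
      , mLabel    :: World -> [Prop]           -- labeling function
      , mDelta    :: World -> Vector -> World  -- transition function
      , mStrategy :: Player -> Strategy        -- current strategy function
      }

    -- the checker returns the worlds where a formula holds, in context;
    -- the atomic case consults the labeling function
    check :: Form -> State Model [World]
    check (Atom p) = do
      m <- get
      pure [ w | w <- mWorlds m, p `elem` mLabel m w ]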
For a given formula (and context), our model checker returns the set of possible worlds where the formula is true. As such, our main function, check, has type Form -> State Model [World]. The model checker is based heavily on that in (Jamroga et al., 2005) for ATL+I, which itself is derived from the model checker for ATL in (Alur et al., 2002). Propositions are checked using the labeling function, and formulas of propositional logic follow via the usual set-theoretic operations. The checking of modal formulas makes use of a pre-image function, which, given a set of possible worlds, returns the set of worlds that can access any of those worlds. Then, for example, to check a formula ♦φ, we first find the set of worlds where φ is true, and then calculate the set of worlds such that the φ-worlds are accessible.
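The pre-image step itself is straightforward once the accessibility relation is represented explicitly; a minimal sketch, assuming the relation is given as a list of world pairs:

    -- Pre-image over an accessibility relation given as world pairs.
    import Data.List (nub)

    type World = String

    -- worlds that can access at least one world in the target set
    preImage :: [(World, World)] -> [World] -> [World]
    preImage acc targets = nub [ w | (w, w') <- acc, w' `elem` targets ]

    -- checking ♦φ: the pre-image of the set of φ-worlds
    checkDia :: [(World, World)] -> [World] -> [World]
    checkDia = preImage

    -- checking □φ: worlds all of whose successors are φ-worlds
    checkBox :: [(World, World)] -> [World] -> [World] -> [World]
    checkBox acc allW phi =
      [ w | w <- allW, and [ w' `elem` phi | (v, w') <- acc, v == w ] ]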
Finally, to check str-formulas, we introduce a revise function. This is the mechanism by which causal interventions are modeled. Formally, let σ be the current strategy for player a, and σ′ be a's new strategy. Then we can say that revise(a, σ′) = σ ∪ σ′, the pointwise union of the old and new strategies.
We should note that our revise function differs from that of Jamroga et al. (2005). Whereas changes of strategy in ATL+I involve replacement of the player's previous strategy, our revise function simply adds the moves from σ′ to a's previous strategy. We recall that in von Fintel's dynamic account of counterfactuals, the accessibility relation (modal horizon) expands but does not contract. In other words, all worlds accessible from a given world before an update to the model remain accessible afterwards.
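A sketch of this expanding revision, under the same simplified types as above:

    -- Expanding revision: new moves are unioned pointwise with the old
    -- strategy, so the modal horizon can expand but never contract.
    import Data.List (union)

    type World = String
    data Move = M0 | M1 | MX deriving (Eq, Show)
    type Strategy = World -> [Move]

    revise :: Strategy -> Strategy -> Strategy
    revise old new w = old w `union` new w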
Returning to the kangaroos, we can now see the difference in the evaluation of the Lewis-Sobel sequence in (2) and the reverse Sobel sequence in (3). We will use the propositions q, r, and s as before, to represent kangaroos having tails, using crutches, and toppling over, respectively. In evaluating the sentence "If kangaroos had no tails, they would topple over" under the causal modeling approach, we apply an intervention in the model to set Q = 0. This corresponds to a strategy for Q to go to a world where ¬q is true; i.e., λw.0. Then, following von Fintel (2001), we check whether in all accessible worlds where ¬q is true, s is also true; this is the strict conditional □(¬q → s). Therefore, the formula we want to evaluate is (str_Q (λw.0)) □(¬q → s).
Similarly, when we evaluate the sentence "If kangaroos had no tails but used crutches, they would not topple over", we want to expand our modal horizon to include worlds where ¬q and r are both true. This involves changes in strategy by both Q and R; Q to set Q = 0, R to set R = 1. The formula to be evaluated must therefore include both an (str_Q (λw.0)) term and an (str_R (λw.1)) term. Then, since we want to check the truth of ¬s in those accessible worlds where both ¬q and r are true, our formula is (str_Q (λw.0))(str_R (λw.1)) □((¬q ∧ r) → ¬s).
Suppose that starting from our initial conditions, the sentence "If kangaroos had no tails, they would topple over" is uttered. We first update the strategy function for Q, to add the move 0 to Q's initial strategy. This has no effect in worlds where Q = 0, as the default strategy for Q is to keep its value the same. However, in worlds where Q = 1, Q now has two moves consistent with its new strategy, 0 and 1. Now, using the updated accessibility relation, we evaluate the formula □(¬q → s). Every world now has an accessible ¬q-world. We note that according to the structural equation S = ¬Q ∧ ¬R, s will be true in those worlds where ¬q and ¬r hold. Since S's strategy is to enforce the structural equation, we know that it will hold in all accessible worlds. In addition, R's strategy continues to dictate that from every world, any accessible world will have the same valuation of R. We conclude that the sentence is true in those worlds where R = 0; these include the actual world w_10x.
Then suppose the sentence "If kangaroos had no tails but used crutches, they would not topple over" is uttered. Again, we update the strategy function for Q to add move 0. But since 0 was previously added when evaluating the first sentence, this revision has no effect. Next, we add the move 1 to all worlds in R's strategy, similarly as before. Now we evaluate the strict conditional □((¬q ∧ r) → ¬s). Since S's strategy still has not changed, the causal law S = ¬Q ∧ ¬R continues to hold in every accessible world. Therefore, for every world in our model, in every accessible world where (¬q ∧ r) is true, ¬s is true, and so is the sentence.
What if the order of the two sentences were reversed? First, starting again from the initial conditions, the sentence "If kangaroos had no tails but used crutches, they would not topple over" is uttered. Because move 0 had not been added yet, it is this sentence that adds 0 to Q's strategy. All other effects of uttering this sentence are the same as before, as is the set of possible worlds where it is true. However, we can see a difference in the evaluation of the second sentence "If kangaroos had no tails, they would topple over". Updating the strategy function for Q has no effect, since the move 0 has already been added to Q's strategy by the first sentence. Furthermore, it is no longer the case that R's strategy keeps the valuation of R constant; as a result of the first sentence, move 1 is now available to R at every world. In other words, among the ¬q-worlds accessible from any given world, one of them will also be an r-world. Since S = ¬Q ∧ ¬R holds in every world, we know that from any world, one of the accessible ¬q-worlds will not be an s-world. We conclude that the sentence does not hold in any world.
Translating counterfactual sentences into str-formulas
One challenge in synthesizing a causal modeling approach to counterfactuals with a possible-worlds semantics is the difference in how counterfactual sentences are evaluated in the two approaches. Under the classic possible-worlds framework, we check whether in the closest possible worlds where the antecedent is true (making any changes to the accessibility relation, if necessary, to ensure that at least one such possible world exists), the consequent is true. In a causal theory of counterfactuals, the antecedent of the counterfactual determines the intervention to be applied to the causal model. Then, the consequent is evaluated relative to the new model.
In this paper, we identified the necessary change in the accessibility relation with the intervention in the causal model, which we implement as a change in strategy for some player. Such an approach raises two questions. The first question concerns which possible worlds count as worlds where the antecedent is true. In the kangaroo example, when we translated a counterfactual of the form φ > ψ into an str-formula, the strict conditional portion of the formula was simply □(φ → ψ). In other words, if the antecedent of the counterfactual is φ, then we check whether the accessible φ-worlds are also ψ-worlds. However, there is evidence that this approach may not work for all scenarios. Briggs (2012) discusses the scenario, originally found in Pearl (2000), of an execution of a prisoner. A full description of the scenario can be found in either of the above papers; we note here that there are two executioners, X and Y, and whether they fire is determined by whether the captain C signals for them to do so. In other words, the behavior of executioner X is governed by the structural equation X = C. If either executioner fires, the prisoner dies. In the actual world, the captain signals, both executioners fire, and the prisoner dies.
Briggs considers the sentence "If executioner X had fired, then (even) if the captain had not signalled, the prisoner would have died." Under a causal model, we intervene to change the structural equation X = C to X = 1. However, in the classic possible-worlds framework, no change in the accessibility relation is necessary. Executioner X fires in the actual world, and as a consequence of (weak) centering, the assumption that every world is at least as similar to itself as to any other world, every world is then accessible to itself. Under the classic approach, we check the truth of the consequent in the closest possible world where the antecedent is true; i.e., the actual world, where the consequent is false. But as Briggs notes, applying the intervention to the causal model changes the truth of the consequent.
When specifying a set of possible worlds corresponding to a causal model, we must distinguish between worlds where different causal laws hold. For example, in the kangaroo scenario, we distinguish worlds w_10x (where kangaroos do not topple over because they have tails) and w_100 (where they do not topple over because it is a law of nature that they never topple over), even though the same propositions are true in both worlds: L(w_100) = L(w_10x) = {q}. Likewise, the antecedent of the counterfactual in the execution case is the proposition that executioner X fires; let us call it x. We note that x does not determine what structural equation holds in a particular world; in some x-worlds, the relevant causal law is X = 1, while in others, it is X = C. When we say "if executioner X had fired" in a causal model, the relevant possible worlds are those in which the structural equation is X = 1. The corresponding proposition is not x, but a different proposition (call it x_1), which is true in exactly those worlds where the causal law X = 1 holds.
Counterfactuals with complex antecedents
Second, we note that antecedents of counterfactuals are propositions (of type Prop), while strategies have type World -> [Move]. Is there a way to systematically translate propositions into strategies? We have already seen that for atomic propositions such as r, we intervene to make sure that there is an accessible world where r is true, by adding the move 1 to the strategy of player R at every world: λw.1. Similarly, for negations of atomic propositions, such as ¬q, we add move 0 to Q's strategy: λw.0.
We have also seen an example of a conjunction, (¬q ∧ r). To ensure that there is an accessible world where the conjunction holds, we simply have both players change their strategies in sequence: (str_Q (λw.0))(str_R (λw.1))…. We note that the order in which each player changes their strategy does not matter. The moves each player is allowed to make are affected only by their own strategy, not those of any other players, and the strict conditional portion of the counterfactual formula is only evaluated after all strategy changes.
For other complex antecedents, Briggs (2012) borrows the idea of a state space from Fine (2012). States are defined by a valuation of some variable(s); e.g., Q = 0 ∧ R = 1. For propositional antecedents (including negations, conjunctions, disjunctions, and material conditionals), Briggs specifies states that make the antecedent true. For example, a disjunctive antecedent (φ ∨ ψ) is made true by three states or interventions: one that sets φ = 1, one that sets ψ = 1, and one that sets both φ = 1 ∧ ψ = 1.
One challenge that arises in adapting this approach to ours is that evaluating the disjunction involves checking the results of three different interventions applied to the original model. However, in our dynamic semantics, once an intervention is made, the moves added to the player's strategy remain available to future evaluations; there is no "going back" to try a different intervention. In addition, while the states associated with the disjunction (φ ∨ ψ) are the same as those associated with the negated conjunction ¬(¬φ ∧ ¬ψ), Ciardelli et al. (2018) provide evidence that those antecedents in fact have different meanings.
Furthermore, it is not clear what impact, if any, a disjunctive antecedent should have on the accessibility relation at all. Ciardelli et al. (2018) discuss the example of two switches for a light. They are connected in such a way that the light is on if the switches are both up or both down, and off otherwise. In the actual world, the switches are both up and the light is on. While Ciardelli et al. do not consider sequences of counterfactuals, it is easy enough to construct a reverse Sobel sequence as with the kangaroos: (4) If switch A and switch B were both down, the light would be on.
#If switch A was down, the light would be off. Now let us replace the conjunction with a disjunction. In their experiment, Ciardelli et al. found that the sentence "If switch A or switch B was down, the light would be off." was judged by most participants to be true (in contrast with the sentence "If switch A and switch B were not both up, the light would be off.", with a negated conjunctive antecedent). If we use this sentence instead in our sequence, the infelicity seems to go away: (5) If switch A or switch B was down, the light would be off.
If switch A was down, the light would be off.
In fact, according to the rule of simplification of disjunctive antecedents, the second sentence is a logical consequence of the first. Nevertheless, this indicates that perhaps the modal horizon did not expand to include worlds where switch B was down in this case, at least not permanently; if it had, then we would have to consider them when evaluating the second sentence. Alternatively, von Fintel (2001) suggests that logical arguments, unlike normal discourse, carry with them an assumption of constant context. Certainly more research is needed in this area.
Conclusion
In this paper, we present a semantics for counterfactuals that combines ideas from dynamic semantics and causal modeling approaches. Our implementation is based on concurrent game structures, where variables are interpreted as players and interventions as changes in players' strategies. Using the classic example of kangaroos with no tails, we show how our approach is able to capture judgments about sequences of counterfactuals.
"year": 2019,
"sha1": "ed3a1b06f426f47e9a25ace4c01d5e277b334dbe",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/W19-0601.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "ed3a1b06f426f47e9a25ace4c01d5e277b334dbe",
"s2fieldsofstudy": [
"Computer Science",
"Philosophy"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Development and External Validation of a Novel Immune Checkpoint–Related Gene Signature for Prediction of Overall Survival in Hepatocellular Carcinoma
Objective: The purpose of this study was to develop and validate a novel immune checkpoint–related gene signature for prediction of overall survival (OS) in hepatocellular carcinoma (HCC). Methods: mRNA expression profiles and clinical follow-up information were obtained from the International Cancer Genome Consortium database. An external dataset from The Cancer Genome Atlas (TCGA) Liver Hepatocellular Carcinoma database was used to validate the results. Univariate and multivariate Cox regression analyses were performed based on the differentially expressed genes, and a four-mRNA signature was generated to predict patient survival. The reliability and validity of the signature were then validated in the TCGA cohort, and an integrated bioinformatics approach was applied to evaluate its diagnostic and prognostic value. Results: A four-gene signature (epidermal growth factor [EGF], mutated in colorectal cancer [MCC], mitogen-activated protein kinase kinase 2 [MAP2K2], and NRAS proto-oncogene, GTPase [NRAS]) was built to classify patients into two risk groups by risk score, with different OS in the two cohorts (all P < 0.0001). Multivariate regression analysis demonstrated that the signature was an independent predictor in HCC. Furthermore, the signature presented excellent diagnostic power in differentiating HCC from adjacent tissues. Immune cell infiltration analysis revealed that the signature was associated with a number of immune cell subtypes. Conclusion: We identified a four–immune checkpoint–related gene signature as a robust biomarker with great potential for clinical application in risk stratification and OS prediction in HCC patients, and it could be a potential indicator for immunotherapy in HCC. The diagnostic signature was validated to accurately distinguish HCC from adjacent tissues.
INTRODUCTION
Hepatocellular carcinoma (HCC), which is characterized by a low survival rate, an aggressive nature, and high metastatic potential, is the most common subtype of hepatic malignancy worldwide, accounting for ∼90% of primary liver cancers (Bray et al., 2018). There were ∼841,000 newly confirmed cases of HCC and 782,000 deaths in 2018. Although great advances in radiotherapy, chemotherapy, liver transplantation, and other potentially curative treatments have revolutionized the management of HCC, the long-term prognosis remains poor, because most HCC patients are at a late stage at the time of diagnosis and have lost the opportunity for surgical removal of their lesion (Bruix et al., 2014). Most patients with advanced-stage HCC ultimately do not benefit from traditional medications (Stravitz et al., 2008). Therefore, high-risk HCC patients with a potentially poor prognosis must be monitored, and timely and effective treatment should be given to prolong survival and improve quality of life (Llovet et al., 2015). Traditional methods utilizing clinical tumor-node-metastasis (TNM) staging, vascular invasion, and other clinicopathologic parameters contribute to predicting HCC prognosis (Bruix et al., 2016). Despite the availability of multiple treatment options, the diagnosis is still often made at an advanced stage, limiting the application of most therapeutic choices, which are currently based on the Barcelona Clinic Liver Cancer classification system (Aho et al., 2014). However, considering the great complexity and heterogeneity of HCC, the predictive ability of such models is still far from satisfying.
In recent years, the emergence of immune checkpoint inhibitors has revolutionized the therapeutic landscape for cancer patients. Sorafenib was well established as the standard of care for HCC for nearly a decade, until lenvatinib finally became a first-line treatment in clinical practice in 2018; regorafenib, ramucirumab, and cabozantinib have been recommended as second-line drugs approved by the US Food and Drug Administration (Longo et al., 2019; Dong et al., 2020). Recently, the seminal IMbrave150 study, a global, multicenter, open-label, phase 3 randomized trial, has led to the approval of immunotherapy plus antiangiogenic therapy (atezolizumab combined with bevacizumab) as first-line systemic treatment for unresectable HCC in many countries (Finn et al., 2020). In the past 3 years, new data from trials of immune checkpoint inhibitors have provided multiple new options for advanced HCC (Dong et al., 2020). It is well known that tumor cells can evade immune surveillance and promote cancer growth and progression by activating various immune checkpoint pathways. Programmed death 1 (PD-1)/programmed death ligand 1 (PD-L1) and cytotoxic T-lymphocyte antigen 4 (CTLA-4) inhibitors have been used in multiple malignancies, while crucial molecules able to disturb other coinhibitory signaling pathways are under investigation (Longo et al., 2019). In the era of immunotherapy, immune checkpoint inhibitors have also been used for HCC patients; however, not all patients benefit from immunotherapy (Miamen et al., 2012; Mahn et al., 2020). There is an urgent need for effective biomarkers in patients with HCC to improve survival prediction and early diagnosis and to identify patients likely to benefit from immunotherapy. Therefore, based on immune checkpoint-related genes, we used two cohorts to develop and validate a robust prognostic signature for HCC, explored its diagnostic value, and sought to contribute to the determination of effective immunotherapy for HCC.
Data Collection and Immune Checkpoint-Related Gene Acquisition
Level 3 mRNA expression data and corresponding clinical follow-up information for 240 patients with primary HCC (231 with complete follow-up information) and 202 adjacent tissues were downloaded from the International Cancer Genome Consortium (ICGC) database (https://dcc.icgc.org/, LIRI-JP). RNA-sequencing data from 374 HCC patients and 50 adjacent tissues with corresponding clinical follow-up information (370 with complete follow-up information) were downloaded from The Cancer Genome Atlas (TCGA) database and were used for external validation of the signature. Probe IDs were converted into the corresponding gene symbols based on their annotation files. When several probes matched an identical gene symbol, their values were averaged for further analysis. Genes of the PD-1/PD-L1 and CTLA-4 signaling pathways were extracted from the KEGG [Kyoto Encyclopedia of Genes and Genomes (https://www.kegg.jp/)] and Reactome (https://www.reactome.org/) pathway databases. A total of 282 unique candidate genes were retrieved from the KEGG (n = 225) and Reactome (n = 97) databases (Supplementary Table 1). The genes in the intersection of this immune checkpoint-related gene list and the ICGC and TCGA datasets were used for subsequent analysis.
Prognostic Genes Identification and Gene Signature Construction
The differentially expressed genes (DEGs) between HCC tissues and adjacent tissues were screened using the "limma" R package, with an absolute value of the log2 fold change (logFC) >1 and a false discovery rate (FDR) <0.05, in the ICGC cohort. Next, the relationship of the DEGs with overall survival (OS) in HCC was assessed with univariate Cox regression analysis. We further narrowed the gene range among those with P < 0.05 in the univariate analysis by performing LASSO-penalized Cox regression analysis with 10-fold cross-validation using the glmnet package in R. Multivariate analysis was finally used to identify the optimal model according to the smallest Akaike information criterion value, which is a measure of goodness of fit (Aho et al., 2014). Afterward, the immune checkpoint gene-based prognostic risk score was designed by linearly combining the expression levels of the selected genes weighted by their regression coefficients (β): risk score = β(gene 1) × expression of gene 1 + β(gene 2) × expression of gene 2 + ... + β(gene n) × expression of gene n. A risk score was thus obtained for each patient, and all patients were classified into high-risk and low-risk groups using the median risk score as the cutoff value. Kaplan-Meier analysis was performed to compare the differences in survival between the high-risk and low-risk groups. Time-dependent receiver operating characteristic (ROC) curve analysis, with the area under the curve (AUC) for 1-, 3-, and 5-year OS, was carried out to evaluate the predictive performance of the gene signature.
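Written compactly (our notation, merely restating the formula above), with x_i(p) denoting the expression of gene i in patient p and β_i its multivariate Cox coefficient:

    Risk score(p) = β_1·x_1(p) + β_2·x_2(p) + … + β_n·x_n(p) = Σ_{i=1}^{n} β_i·x_i(p)

A patient p is assigned to the high-risk group when Risk score(p) exceeds the cohort median, and to the low-risk group otherwise.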
Independence of the Prognostic Gene Signature
Univariate Cox regression analysis was performed to assess the significance of the novel signature and clinicopathologic parameters for the OS of patients with HCC. Multivariate Cox regression analysis was further conducted to identify independent prognostic variables. Survival analysis was carried out to validate the risk stratification ability of the novel signature when patients were classified into different clinical subgroups.
External Validation of Gene Expression Pattern and Prognostic Signature
The TCGA cohort was used for validation of the identified DEGs. The risk score of each patient in the TCGA cohort was calculated based on the risk formula mentioned above, and patients were classified into high- or low-risk groups according to the median risk score as the cutoff point. The same analyses were conducted to validate the reliability and validity of the novel signature, including Kaplan-Meier analysis, ROC curve analysis, and multivariate Cox proportional hazards analysis.
Constructing and Validating a Predictive Nomogram
A composite nomogram was established based on all independent prognostic parameters identified by the multivariate Cox proportional hazards analysis to predict the probability of 1-, 3-, and 5-year OS, using the "rms" package in R. The nomogram was validated in terms of discrimination and calibration: the calibration plot and the concordance index (C-index) were used to assess the performance of the prediction model, using a bootstrap method in both cohorts. Decision curve analysis was carried out to explore the clinical usefulness of the model in comparison with the AJCC staging system, the optimal model being the one with the highest calculated net benefit.
Construction and Validation of the Diagnostic Performance of the Immune Checkpoint-Related Gene Signature
To explore the diagnostic potential of the novel gene signature in distinguishing HCC from adjacent tissues, ROC analysis of each identified gene was performed between 240 HCC samples and 202 adjacent tissues in the ICGC cohort and further validated in 374 HCC and 50 adjacent samples in TCGA. The support vector machine (SVM) is a supervised classification model with widely acknowledged generalizability (Cherkassky, 1997). Therefore, we established a diagnostic classifier based on the identified immune checkpoint-related genes using an SVM to distinguish HCC from adjacent tissues. Furthermore, the performance of the classifier in distinguishing early-stage HCC patients (stage I) from adjacent tissues was measured via the AUCs in both cohorts.
Gene Set Enrichment Analysis
To explore the altered biological processes underlying the newly established prognostic signature, GSEA was carried out to investigate whether the identified gene sets presented statistically significant differences between the high- and low-risk groups (Thomas et al., 2011). Gene sets with P < 0.05 and an FDR < 0.25 were considered significantly enriched and were used to identify biological processes.
Immune Cell Subtypes and Its Correlation With Identified Immune Checkpoint-Related Genes
To investigate the relative abundance of tumor-infiltrating immune cells from gene expression profiles in HCC, the analytical tool CIBERSORT (https://cibersortx.stanford.edu/) was used to calculate immune cell infiltration. The algorithm estimated the putative abundance of immune cells using a reference set of 22 immune cell subtypes (LM22) with 1,000 permutations (Newman et al., 2015). We used the mRNA expression matrix as the input file to evaluate the immune fractions of each sample through the CIBERSORT algorithm (Zhao et al., 2020). Cases with a CIBERSORT output of P < 0.05, indicating that the inferred proportions of immune cell populations produced by CIBERSORT are accurate (Ali et al., 2016), were retained for subsequent analysis. The CIBERSORT output values were defined as the immune cell infiltration fraction per sample; for each case, the fractions of the 22 immune cell types summed to 1. The associations of the feature genes with the levels of infiltrating immune cells were investigated by Spearman rank correlation analysis in R and visualized with the "ggplot2" package.
Analysis of Immunotherapy Efficacy in the Validation Cohort
Tumor mutation burden (TMB) reflects the total number of mutations in cancer cells and can be used for evaluating the therapeutic effect of immunotherapy (Liu et al., 2019). The mutation data of HCC patients were downloaded in MAF format from the TCGA data portal. TMB analysis was performed with the R package "maftools" (Mayakonda et al., 2018). The association between the risk score and the expression levels of immune checkpoint genes (CTLA4, PD1, and PD-L1) was investigated.
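A hedged sketch of the TMB computation with maftools: per-sample mutation totals are read from the MAF summary and divided by an assumed 38 Mb exome capture size (a common convention that the paper does not state). The file name and risk-group lookup table are hypothetical.

library(maftools)

maf <- read.maf(maf = "TCGA-LIHC.maf.gz")   # hypothetical file name
ss  <- getSampleSummary(maf)                # per-sample variant counts
tmb_per_sample <- data.frame(sample = ss$Tumor_Sample_Barcode,
                             TMB    = ss$total / 38)  # mutations per Mb

# Compare TMB between risk groups; risk_groups: data frame with
# columns "sample" and "risk_group" (assumed available)
wilcox.test(TMB ~ risk_group,
            data = merge(tmb_per_sample, risk_groups, by = "sample"))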
Statistical Analysis
The expression changes of the identified genes between HCC and normal samples were compared using Student's t-test. A heatmap was generated using the "pheatmap" package of the R software. Survival curves were generated using the "survival" package. The ROC curves were generated with the R package "survivalROC." The "rms" package was used for nomogram construction, and the "Hmisc" package was used for calculation of the C-index. Multivariate Cox proportional hazards regression analyses with 95% confidence intervals (CIs) were used to identify potential prognostic factors. The visualization of the 22 types of infiltrating immune cells was performed using the R package "corrplot." P < 0.05 was considered significant. All statistical analyses were performed using R (version 3.6.3; https://www.r-project.org/).
Patient Demographics and Clinical Characteristics
The clinicopathological characteristics of the TCGA and ICGC cohorts are listed in Table 1. Samples with clinicopathological and follow-up information were included for survival analysis in this study, consisting of 232 HCC samples in the ICGC cohort and 370 HCC samples in the TCGA cohort. The patient selection scheme and workflow chart are shown in Figure 1.
Feature Gene Identification and Prognostic Gene Signature Construction
A total of 140 overlapping immune checkpoint-related genes between the two cohorts were identified for subsequent analysis. Next, 14 up-regulated genes and 3 down-regulated genes were identified (Figure 2A). Afterward, univariate Cox analysis identified seven genes associated with survival (Figure 2B), and five genes were retained after LASSO Cox regression (Figure 2C). Finally, multivariate Cox regression analysis was carried out to build a risk signature. As a result, epidermal growth factor (EGF), MAP2K2, MCC (mutated in colorectal cancer), and NRAS (neuroblastoma RAS viral oncogene homolog) were identified as significantly prognosis-related genes (Figure 2D). The risk score of the signature for each sample was calculated with the following equation: risk score = 0.384204567 × expression of EGF + 0.012818859 × expression of MAP2K2 + 0.063749656 × expression of NRAS − 0.267497698 × expression of MCC. Among them, EGF, MAP2K2, and NRAS had coefficients >0 and were considered high-risk factors associated with short survival; MCC had a coefficient <0 and was considered a protective factor associated with long survival. The risk score was computed for each individual in the ICGC and TCGA cohorts, and patients were classified into low- and high-risk groups.
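The published formula translates directly into code. The coefficients below are copied from the equation above; the expression matrix layout (samples in rows, the four genes in named columns) is an assumption.

# Direct transcription of the published risk formula; expr is a named
# vector (or one matrix row) of normalized expression values
risk_score <- function(expr) {
  0.384204567 * expr[["EGF"]] +
  0.012818859 * expr[["MAP2K2"]] +
  0.063749656 * expr[["NRAS"]] -
  0.267497698 * expr[["MCC"]]
}

# Classify a cohort against its median score, as described in the Methods
scores <- apply(expr_matrix, 1, risk_score)  # expr_matrix: samples x genes
group  <- ifelse(scores > median(scores), "high-risk", "low-risk")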
The Performance of the Gene Signature
HCC patients in the high-risk group showed significantly worse OS than patients in the low-risk group in the ICGC cohort [hazard ratio (HR) = 5.83, 95% CI = 2.68-12.66, P < 0.0001; Supplementary Figure 1A], which was further validated in the TCGA cohort (HR = 1.83, 95% CI = 1.28-2.61, P = 0.0009; Supplementary Figure 1B). Subsequently, to explore the stability and reliability of the signature, survival analysis in different subgroups was performed. As shown in Supplementary Figures 1C-L, the Kaplan-Meier curves demonstrated that the signature was a stable prognostic biomarker for patients with HCC stratified by age (<60 or ≥60 years), sex (male or female), stage (stage I-II or stage III-IV), cancer family history (yes or no), and absence of prior malignancy. In addition, the AUC values of the prognostic signature for the 1-, 3-, and 5-year survival rates in the ICGC cohort were 0.75, 0.78, and 0.79, respectively (Figure 3A). The signature gene expression in the two cohorts, the risk score distribution, and the survival status of each patient are shown in Figure 3B. The prognostic signature could separate HCC patients into low- and high-risk groups; with increasing risk score, patients in the ICGC cohort had worse OS and higher expression of the prognostic genes. Moreover, the AUC values for OS in the TCGA cohort at 1, 3, and 5 years were 0.63, 0.60, and 0.56, respectively (Figure 3C). An increased risk score was associated with a higher patient death rate (Figure 3D). These results confirmed that the novel signature accurately predicted the prognosis of HCC patients.
Independent Prognostic Value of the Immune Checkpoint-Related Gene Signature
A multivariate Cox regression analysis was first performed among the available clinicopathological variables to determine whether the risk score was an independent prognostic factor for OS in the ICGC cohort. The risk score of the signature was significantly associated with the OS of patients with HCC after correction for other confounding variables (P < 0.0001, Table 2). Furthermore, after correction for other confounding factors in the TCGA cohort, the risk score remained independent of other risk variables for OS in the multivariate Cox regression analysis (P = 0.043).
The Novel Gene Signature for Diagnostic Prediction of HCC
First, the expression levels of the four feature genes were verified in the TCGA cohort. Consistent with the results in the ICGC cohort, MAP2K2, NRAS, and EGF were found to be significantly up-expressed, whereas MCC was significantly down-expressed in HCC (all P < 0.0001, Figures 4A-D). For the early diagnosis of HCC, sensitive and specific diagnostic biomarkers are needed to accurately distinguish HCC from adjacent tissues. Next, ROC analysis was carried out to investigate the diagnostic performance of the four genes for HCC between 240 HCC and 202 adjacent samples in the ICGC cohort. As revealed in Figure 5A, the AUCs for EGF, MAP2K2, MCC, and NRAS were 0.668 (97.5% CI, 0.622-0.711), 0.887 (97.5% CI, 0.854-0.915), 0.845 (97.5% CI, 0.808-0.877), and 0.861 (97.5% CI, 0.825-0.892), respectively. The diagnostic power of each single gene in the signature was further confirmed in the TCGA cohort (Figure 5B). The SVM classifier combining the four genes was then used to distinguish HCC from adjacent tissues in the ICGC cohort (Figure 5C), and its powerful diagnostic capacity was further validated in the TCGA cohort (Figure 5D). In the TCGA cohort, the model illustrated excellent discriminatory performance in the diagnosis of HCC, with an AUC of 0.988 (95% CI, 0.972-0.996), sensitivity of 90.64% (95% CI, 87.2%-93.4%), and specificity of 100% (95% CI, 92.9%-100%) in distinguishing HCC from adjacent tissues. In particular, the diagnostic model showed excellent performance for early-stage HCC (TNM stage I), as revealed in Figure 5E.
The diagnostic model identified early-stage HCC patients with a sensitivity of 94.44%, specificity of 93.56%, and AUC of 0.955 in the ICGC cohort. We then validated the diagnostic model using the TCGA cohort, where it identified early-stage HCC patients with a sensitivity of 93.02%, specificity of 96.0%, and AUC of 0.985 (Figure 5F).
Gene Set Enrichment Analyses
GSEA was carried out to reveal the biological processes altered between the high-risk and low-risk groups. As revealed in Supplementary Figure 3A, the cell cycle, DNA replication, extracellular matrix (ECM)-receptor interaction, bladder cancer, gastric cancer, non-small cell lung cancer, and mismatch repair pathways were significantly enriched in the high-risk group. Chemical carcinogenesis, cytokine-cytokine interaction, the JAK-STAT signaling pathway, and the PPAR signaling pathway were significantly enriched in the low-risk group (Supplementary Figure 3B).
Immune Cell Infiltration and the Association With Four Immune Checkpoint-Related Genes
We first explored the composition of immune cells in HCC patients (Figure 6A). The proportions of regulatory T cells (Tregs), M0 macrophages, and M1 macrophages in the high-risk group were significantly higher than in the low-risk group (all P < 0.05, Figure 6B). However, the proportions of naive cells (P < 0.001) and memory cells (P = 0.009) in the high-risk group were significantly lower than in the low-risk group. As revealed in Figure 7A, NRAS was positively correlated with resting dendritic cells, Tregs, and activated CD4 memory T cells (all P < 0.05) and negatively correlated with follicular T helper cells, naive B cells, and gamma delta T cells (all P < 0.05). MAP2K2 was positively correlated with plasma cells, Tregs, M0 macrophages, follicular helper T cells, and activated CD4 memory T cells (all P < 0.05) and negatively correlated with resting CD4 memory T cells, naive B cells, and M1 macrophages (Figure 7B; all P < 0.05). MCC was positively correlated with activated mast cells and resting CD4 memory T cells (all P < 0.05) and negatively correlated with CD8 T cells, Tregs, and resting mast cells (Figure 7C; all P < 0.05). EGF was negatively correlated with monocytes (Figure 7D; P = 0.0395).
Potential of the Risk Score as an Indicator of Response to Immunotherapy
The association between the risk score and the expression levels of three immune checkpoint genes was explored. As shown in Supplementary Figure 4, the risk score was significantly positively correlated with CTLA4 (coefficient = 0.114, P = 0.029) and showed a positive trend for PD-L1 (coefficient = 0.092, P = 0.078). However, it was not significantly correlated with PD1 (P = 0.575). TMB, along with copy-number alteration, can be used to categorize cancers by their distinct sensitivity to immune checkpoint inhibitor therapy across pan-cancer cohorts (Liu et al., 2019). We further demonstrated that patients in the high-risk group presented a significantly higher TMB than patients in the low-risk group (P = 0.0329), suggesting that the high-risk group was more likely to mount an immune response and respond to immunotherapy.
DISCUSSION
HCC is one of the most prevalent deadly malignancies worldwide, showing a poor prognosis due to high molecular and cellular heterogeneity and high rates of recurrence and metastasis (Desai et al., 2017; Finn et al., 2018). Although great and rapid progress has been made in surgical and medical therapy, the prognosis of HCC remains unsatisfactory. The lack of efficient early-stage detection biomarkers contributes to the progression of HCC, and survival times differ greatly even among patients with the same TNM stage of disease. Therefore, early diagnostic markers and novel, accurate prognostic models are urgently required to diagnose HCC and predict patient survival.
In the past few decades, critical breakthroughs have been made in the field of immune surveillance, including the involvement of the PD-1/PD-L1 and CTLA-4 signaling pathways in the development and progression of cancers, which play a vital role in the regulation of immune responses. CTLA-4 is the first checkpoint protein whose blockade was proven to be effective in cancer immunotherapy. It can migrate to the surface of T cells and compete with CD28 for binding to CD80 and CD86, thus inhibiting the proliferation and activation of T cells (Jiang et al., 2019). Moreover, other immune checkpoint molecules, such as PD-1, PD-L1, CD28, galectin-9 (Gal-9), and T cell immunoglobulin and mucin domain 3 (TIM-3), can properly regulate the immune system to avoid autoimmune responses caused by excessively activated immune cells (Zou et al., 2016). Nevertheless, when immune checkpoint genes are overexpressed or activated, immune function is inhibited. As a result, cancer cells that excessively activate immune checkpoint genes can escape surveillance and clearance by local immune cells, thereby accelerating tumor growth.
However, immune checkpoint-related gene biomarkers and prognostic models that could be utilized to predict the survival of HCC patients are still lacking. The present study aimed to identify an effective prognostic signature to stratify HCC patients and predict their survival. In this study, a total of 140 shared immune checkpoint-related genes were identified from two datasets. Five prognosis-related DEGs were screened out by using univariate Cox regression and LASSO algorithms in the ICGC cohort and were then subjected to multivariate Cox regression analysis. Finally, a novel four-gene signature was generated, and its efficiency was validated in the TCGA cohort; it successfully categorized patients into low- and high-risk groups with distinct OS, where the high-risk subset exhibited a significantly poorer prognosis than the low-risk group. The efficacy of the novel signature was confirmed in the development cohort, the validation cohort, and the subgroups from the ICGC, indicating a robust prognostic value of the signature. The AUC values of the prognostic signature for OS prediction showed good predictive performance in both cohorts. This four-gene signature was also demonstrated to be an independent prognostic factor for HCC survival in the two cohorts. A nomogram combining sex, prior malignancy, tumor stage, and risk score was proposed, which proved to be a better predictor than nomograms constructed with a single prognostic variable. The nomogram constructed with the combined model might be the optimal model for predicting OS in patients with HCC, which would contribute to the clinical management of HCC. Next, the SVM classifier incorporating the four genes displayed excellent discriminatory ability in distinguishing HCC from adjacent tissues, with an AUC of 0.954 (95% CI, 0.930-0.971) in the ICGC cohort and an AUC of 0.988 (95% CI, 0.972-0.996) in the TCGA cohort. Furthermore, the diagnostic model showed excellent performance for early-stage HCC, with an AUC of 0.955 (95% CI, 0.921-0.978) in the ICGC cohort, validated in the TCGA cohort with an AUC of 0.985 (95% CI, 0.959-0.997). We utilized a comprehensive approach to develop a robust prognostic signature for HCC and successfully validated it in an external cohort. Moreover, the high-risk group was more likely to mount an immune response and respond to immunotherapy. Therefore, this immune checkpoint-related gene prognostic signature is accurate, robust, and interpretable. Tumor-infiltrating immune cells have high prognostic value for tumor progression and patient survival in many solid-organ malignancies (Marabelle et al., 2014). We found that the four genes were correlated with multiple immune cell types; for example, NRAS was correlated with resting dendritic cells, Tregs, activated CD4 memory T cells, follicular helper T cells, naive B cells, and gamma delta T cells.
We identified four prognostic genes (EGF, MCC, MAP2K2, and NRAS). The EGF protein acts by binding with high affinity to its cell surface receptor, the epidermal growth factor receptor (EGFR). Dysregulation of EGF has been correlated with the development and progression of multiple malignancies. Previous studies have validated that epithelial-mesenchymal transition (EMT) in cancer cells is a vital step in malignancy progression, including cancer growth, invasion, and metastasis, and contributes to a high malignancy stage (Lindsey and Langhans, 2014). EGF is one of the growth factors known to play a role in EMT in HCC (Lim et al., 2020). EMT and its associated early metastasis-related processes are activated by multiple growth factors such as EGF, transforming growth factor β, and platelet-derived growth factor (Gurzu et al., 2019). In addition, previous animal studies have demonstrated that targeted overexpression of EGF induces the formation of highly malignant HCC in mice, and its receptor EGFR is also up-expressed in HCC tissues (Liu et al., 2018). Interferon γ (IFN-γ), EGFR, and mitogen-activated protein kinase (MAPK) signaling pathways are associated with PD-L1 gene expression in HCC: EGF stimulation enhanced PD-L1 mRNA and protein expression levels in a representative HCC cell line panel, which were further increased by combined EGF and IFN-γ stimulation (Xing et al., 2020). MCC, which is located on chromosome 5q21 and encodes a protein of 829 amino acids, is a candidate colorectal cancer suppressor gene reported to negatively modulate cell growth, differentiation, and the cell cycle and to suppress Wnt/β-catenin signal transduction (Fukuyama et al., 2008; Wang et al., 2016). MCC also functions as an oncogene in B cells and may serve as a diagnostic marker and therapeutic target in B cell malignancies (Edwards et al., 2014). As a member of the MAPKK/MAP2K family, mitogen-activated protein kinase kinase 2 (MAP2K2) has been demonstrated to be correlated with tumorigenesis and is involved in the well-known RAS-RAF-MAP2K/MEK-MAPK/ERK pathway (Codogno and Meijer, 2005; Shinojima et al., 2007). A new mutation in the MAP2K2 gene was reported that most likely conferred resistance to dabrafenib and trametinib treatment and to anti-PD1 therapies (nivolumab plus pembrolizumab), whereas a frameshift mutation in B2M was the strongest candidate alteration for progression on checkpoint inhibitor therapy in melanoma (Richmond et al., 2019). A previous transcriptome profiling study demonstrated that neuroblastoma RAS viral oncogene homolog (NRAS) was dysregulated in fibrolamellar HCC; however, the possible clinical implications or the function of NRAS were not explored (Sorenson et al., 2017). Additionally, NRAS overexpression was associated with poor outcome and proliferation in vivo, and NRAS knockdown enhanced sorafenib efficacy in resistant cells; NRAS may thus be a prognostic predictor in HCC (Dietrich et al., 2019).
It has been reported that NRAS mutations and PD-L1 expression are most common in primary vaginal melanomas and can probably be used as therapeutic targets (Wang et al., 2020). Meanwhile, the results of the GSEA illustrated that the pathways enriched across the four-gene signature risk groups were notably associated with tumorigenesis and immune-related biological processes.
To our knowledge, this is the first study to establish a prognostic signature based on immune checkpoint-related genes in HCC. However, our study had some limitations. Some important variables were unavailable, such as history of cirrhosis, history of hepatitis B virus infection, α-fetoprotein value, alcohol consumption, etiology of liver disease, and the main mode of treatment, which may have had a certain effect on the results. In addition, further validation of the effectiveness of the signature in other independent prospective studies and functional experiments on the identified genes is needed. Moreover, more prospective clinical trials with larger sample sizes are required for further evaluation of the potential diagnostic power. Thus, there is still a long way to go before these findings can be applied to clinical practice.
CONCLUSION
A novel immune checkpoint-related gene signature was developed, and it presented great potential clinical value in predicting the OS of patients with HCC. The signature could act as a robust biomarker for the early diagnosis and prognosis of HCC.
DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. The datasets involved in our study are available in the ICGC database (https://dcc.icgc.org/) and the TCGA database (https://portal.gdc.cancer.gov/).
AUTHOR CONTRIBUTIONS
SC and YD are the principal investigators. EZ conducted statistical analysis and data management. EZ edited and SC wrote and revised the manuscript. All authors read and approved the final manuscript.
ACKNOWLEDGMENTS
We sincerely acknowledge the publicly available International Cancer Genome Consortium (ICGC) database and The Cancer Genome Atlas (TCGA) database. | 2021-01-21T14:21:03.293Z | 2021-01-21T00:00:00.000 | {
"year": 2020,
"sha1": "49e93f6764a41ddf67127f1e95a153061a60be75",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmolb.2020.620765/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "49e93f6764a41ddf67127f1e95a153061a60be75",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246295306 | pes2o/s2orc | v3-fos-license | The Role of Maxillofacial Structure on Condylar Displacement in Maximum Intercuspation and Centric Relation
Purpose This study is aimed at evaluating the impact of the craniofacial structure and occlusal conditions on the position of the articular heads of the mandibular condyles in the maximum intercuspal position (MIP) and comparing the centric relation (CR) and MIP of the mandibular condyles prior to orthodontic treatment. Methods The studied group consisted of 33 women and 15 men (median age of 17.75 years). Contact points of opposing teeth in the MIP were assessed by hand-held casts. Condylar displacement (CD) in three spatial planes on both sides was measured on models mounted in an articulator using a mandibular position indicator (MPI). Patients were divided into groups according to craniofacial structures (vertical and horizontal growth directions). The Mann-Whitney, Kruskal-Wallis, post hoc Dwass-Steel-Critchlow-Fligner, and Pearson's χ2 independence tests as well as Spearman's nonparametric correlations were used in the statistical analyses. Results Within the limitations of this study, no statistically significant correlation of CD with certain cephalometric measurements from a lateral cephalometric radiograph (ANB, SN-ML, and SGo/NMe) was observed. Correlation, however, was found between condylar displacement in the transverse axis and the mandibular plane angle SN-ML (p = 0.033) and also between condylar displacement in the anteroposterior axis and a midline shift of the mandible (p = 0.041). The results revealed a relationship between Angle's classification of molar position on the right side and anteroposterior CD values (p = 0.006). Conclusions Cephalometric measurements cannot be used to predict CD at the level of the condyles. Analysis of occlusal conditions of models mounted in an articulator is desirable for patients with Angle's class I and lower jaw asymmetry.
Introduction
The centric relation (CR), which is defined as an optimal, orthopedically stable musculoskeletal position [1], provides maximum jaw stability and minimizes the force directed on each tooth during function [2]. If occlusal interferences are present, seating of the condyles in CR is prevented and they undergo a forced displacement so as to allow for a stable occlusion of the teeth, the maximum intercuspal position (MIP). Such condylar displacement (CD) from CR to MIP occurs in 83.3% of the untreated population [3,4]. Electromyographic studies suggest that the position of the articular heads of the mandibular condyles in the CR, when the teeth are in the MIP, allows for more harmonious and less intense work of the masticatory muscles [5] and provides a clinically repeatable reference position for developing a functional treatment plan for occlusion [6]. For this reason, the orthodontist should take this position into account in order to provide maximum posttreatment comfort for the patient. Orthodontic treatment should be planned with the intent of achieving maximal conformity of the condylar processes' positions in CR and MIP [2,3,7,8].
It has been proposed that an articulator be used when planning orthodontic treatment to better assess the relationship between occlusion and position of the condyles [7,9]. Mounting casts in an articulator can be helpful to better visualize conditions of occlusion in a stable musculoskeletal position and may offer the patient better quality treatment by providing a fuller diagnosis [10]. However, this approach is not always necessary. The majority of growing orthodontic patients usually end treatment before maturation of the temporomandibular joint (TMJ) has completed. An orthodontist's goal is to provide occlusal conditions within the patient's physiological tolerance or adaptability and should strive to achieve occlusal conditions resulting in a maximal stable musculoskeletal position. An articulator may be more helpful in adults due to completed TMJ growth and lower patient adaptability. This may be particularly helpful in patients with a hyperdivergent facial pattern. Ponces et al. [11] showed that the risk of making an incorrect diagnosis in this group of patients was about 30%. More studies with varying patient groups are needed. There is a lack of studies assessing in which patients mounting casts would be indicated, and many orthodontists are not willing to mount every single patient cast in an articulator because of the tediousness of this task. A meta-analysis of such studies would allow for a compromise in this discussion.
This cross-sectional study seeks to evaluate the impact of the craniofacial structure and occlusal conditions on the position of the mandibular condyles' articular heads in the MIP and compare the CR and MIP of the mandibular condyles prior to orthodontic treatment.
Materials and Methods
Patients. The studied group consisted of 48 patients (aged 11.50-50.30 yrs, median 17.75 yrs, 33 women and 15 men) with complete permanent dentition or interdental deficiencies due to premature loss of permanent teeth. Patients were recruited from the department of orthodontics of a university-based dental hospital. Gender was not a qualifying criterion for the study. Patients with facial trauma within the previous 5 years were excluded from the study. All patients were examined by the same experienced operator to avoid researcher-biased error.
Ethical Issues. The study was conducted according to the guidelines of the Helsinki Declaration of 1975, as revised in 2013, and approved by the Institutional Bioethics Committee of the Medical University of Białystok, protocol code number R-I-002/226/2017. Consent was given by all participants for a physical exam as well as for the usage/analysis of X-ray images and dental casts. Participants (or their legal guardians where applicable) signed appropriate consent forms before being enrolled in the study.
Methods. Interview and physical exam data were analyzed. Patients were divided into two groups using clinical indicators of temporomandibular disorders (TMDs) according to the Helkimo index [12]. One group included patients with no or minimal TMJ disorder (Helkimo Di0 and DiI) and the second group those with more pronounced disorders (Helkimo DiII and DiIII).
A pantomogram and a lateral telerentgenogram of the head were taken in the MIP with consideration of the Natural Head Position (Planmeca ProMax 3D Mid; Planmeca Oy, Helsinki, Finland), and high-quality orthodontic diagnostic casts were obtained from class IV synthetic dental stone (type IV high-strength dental stone) (Fujirock EP; GC EUROPE N.V., Leuven, Belgium). Dental impressions were made with an irreversible hydrocolloid mass (Hydrogum 5; Zhermack S.p.A., Badia Polesine (Rovigo), Italy) on metal trays with deepened walls (Algilock; Hager & Werken GmbH & Co. KG, Duisburg, Germany). The analysis of diagnostic casts in the MIP was carried out on an intraoral record made of soft pink modelling wax (Modelling wax; Zhermack Sp. z o.o., Warszawa, Poland). The patient was asked to bite down on the wax in the MIP. The acquired impression's accuracy was rechecked in the patient's mouth after cooling in ice water. The CR was determined by the "power centric" method according to Roth [13] after prior neuromuscular deprogramming (pulsatile biting of a wooden spatula for 5-10 minutes) and then registered with wax (Bite Registration Sheet Wax, Almore International, Inc., Portland, OR, USA) in two pieces. A four-layer front wax record was obtained after positioning the patient at a 45° angle to the ground and heating the wax in a water bath to 57°C (06-DK-2000-1; Przedsiębiorstwo Techniczno-Handlowe "CHEMLAND," Stargard, Poland). The patient's mandible was guided by the operator to avoid protrusion during the closing motion. The frontal impression was then cooled for 5 minutes in ice water and placed between the patient's dental arches together with the heated rear impression consisting of two or three layers. The patient's mandible was initially guided by the operator, and after reaching the appropriate grooves in the frontal impression, the patient was asked to bite down with increased force. Analysis of the casts in the CR was performed in a SAM 3 articulator. Registration of the maxilla's position using a face-bow (AxioQuick III, SAM Prazisionstechnik GmbH, München, Germany) allowed the upper cast to be mounted in the articulator using dental stone (Stodent III arti; Zhermack Sp. z o.o., Warszawa, Poland). CR registers were used to mount the mandibular cast. CR registration was repeated after 1-2 weeks in 10 randomly selected patients to assess the reproducibility of the CR records. For these patients, new casts of the mandible were fitted to the previously mounted casts of the maxilla. The results of both registrations underwent a comparative analysis.
Points of contacts of opposing teeth in CR and MIP were assessed. Contacts of opposing teeth in the MIP were assessed according to Angle's classification. The presence of a scissor-bite and cross-bite was verified. CD measurements on left and right sides were performed on casts using a mandibular position indicator (MPI) with a gauge (MPS, SAM Prazisionstechnik GmbH, München, Germany). Measurements were taken in three spatial planes assessing the positions of the condylar processes in the MIP in relation to the hinge axis of the articulator representing the CR. The difference was measured in the anteroposterior (x), vertical (z), and transverse (y) axes. The linear displacement of the position of the condylar processes in a given axis (Δx and Δz) was measured using graph paper and a magnifying glass with 0.1 mm measuring lines. Each measurement was performed twice, by the main researcher and by a second independent researcher, and then averaged. The device was recalibrated every 5 measurements using the MPI.
The condyle's position was assessed by criteria proposed by Utt et al. [14] and Hidaka et al. [15]. The ideal ranges of the position of the condylar process were accepted as x < 1, z < 1, and y < 0.5. Discrepancies of ≥2 mm in the anteroposterior or vertical axes or ≥0.5 mm in the transverse direction were considered clinically significant. Cephalometric images were analyzed according to Jarabak and Björk's method in the Dolphin program (v1.8; Dolphin Imaging and Management Solutions, Chatsworth, California). Facial skeleton structure was assessed (rotation direction, mandibular plane inclination angle, and skeletal classes), together with its impact on the CR-MIP difference. Patients were divided according to vertical cephalometric measurements into 3 groups depending on the SGo measurement (posterior face height) in relation to the NMe (anterior face height). Patients with a ratio ≤59% were included in the hyperdivergent face type group. Normodivergent face types included those with a SGo/NMe (sella gonion/nasion menton) ratio of 59-65%. Patients with a ratio ≥65% were qualified to the hypodivergent group. Patients were also divided into 3 groups depending on the mandibular plane's inclination angle assessed by the NS/ML (nasion sella line-mandibular line) measurement according to Björk. A value of 33 ± 6 degrees was accepted as normal. Patients above this value were included in the posterior rotation group, and patients below it were included in the anterior rotation group.
Patients were also grouped according to horizontal cephalometric measurements into skeletal classes according to the ANB angle. Skeletal class I comprised an ANB (point A-nasion-point B) of 3.0 ± 2.5 degrees, class II included patients above this value, and class III included patients below this value.
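For clarity, the displacement criteria and the grouping rules described above can be encoded as follows. This is an illustrative R sketch with our own function names, not part of the study protocol; absolute values and the handling of the exact 59%/65% boundaries are our assumptions, since the text leaves them slightly ambiguous.

# Classify a condyle position from its MPI displacements (mm)
classify_cd <- function(dx, dz, dy) {
  ideal       <- abs(dx) < 1 & abs(dz) < 1 & abs(dy) < 0.5
  significant <- abs(dx) >= 2 | abs(dz) >= 2 | abs(dy) >= 0.5
  ifelse(significant, "clinically significant",
         ifelse(ideal, "ideal", "intermediate"))
}

# Vertical face type from the SGo/NMe ratio (in percent)
face_type <- function(sgo_nme_pct) {
  cut(sgo_nme_pct, breaks = c(-Inf, 59, 65, Inf),
      labels = c("hyperdivergent", "normodivergent", "hypodivergent"))
}

# Skeletal class from the ANB angle (degrees); class I is 3.0 +/- 2.5
skeletal_class <- function(anb_deg) {
  cut(anb_deg, breaks = c(-Inf, 0.5, 5.5, Inf),
      labels = c("class III", "class I", "class II"))
}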
Statistical Analysis.
Quantitative variables were analyzed with nonparametric tests. Consistency of repeated measurements was assessed using intraclass correlation coefficients (ICCs) for absolute agreement. Comparisons between two subgroups were performed using Mann-Whitney tests, while Kruskal-Wallis tests were used to compare larger numbers of subgroups, supplemented with post hoc tests according to Dwass-Steel-Critchlow-Fligner [16]. Relationships between pairs of quantitative variables were determined using Spearman's nonparametric correlation coefficients. Relationships between qualitative or ordinal variables were assessed by Pearson's χ2 independence tests. Calculations were made using IBM SPSS Statistics version 20.0. Statistical hypotheses were verified at a 0.05 significance level.
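The analyses were run in IBM SPSS; for readers working in R, the standard equivalents of the listed tests are sketched below on a hypothetical data frame d whose column names are ours.

wilcox.test(dx ~ group, data = d)                     # Mann-Whitney U test
kruskal.test(dx ~ angle_class, data = d)              # Kruskal-Wallis test
# PMCMRplus::dscfAllPairsTest() offers a Dwass-Steel-Critchlow-Fligner post hoc
cor.test(d$dx_left, d$dx_right, method = "spearman")  # Spearman's rho
chisq.test(table(d$cd_ideal, d$helkimo_group))        # Pearson chi-squared

# Intraclass correlation for absolute agreement of the repeated CR
# registrations; psych::ICC reports ICC2 (two-way random, absolute agreement)
library(psych)
ICC(cbind(d$dx_first, d$dx_repeat))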
Results
The study included 48 patients. Characteristics of the studied group are presented in Tables 1-7. Table 8 outlines the CD results with standard deviations in all three spatial planes. The most commonly observed displacements were downwards in the vertical axis, then in the transverse axis, and, to a lesser extent, in the horizontal axis. Figure 1 presents the placement of the condylar process's position in the MIP in relation to CR on a 1 mm grid. The articular surfaces of the condylar processes were located in the inferoanterior range in 58.3% of patients (Δx > 0 and Δz > 0), which indicates displacement of the condylar process in the MIP downwards and forwards. The condylar articular heads of the mandible were in the ideal range (Δx < 1, Δz < 1, and Δy < 0.5) in 58.3% of patients.
Among the 96 examined positions of the condyles of the mandible, significant CD in the vertical dimension (Δz ≥ 2 mm) occurred in six instances (6.3%). Significant transverse CD (Δy ≥ 0.5 mm) was also registered in six instances (6.3%). However, only 1 patient presented with a horizontal CD of Δx ≤ −2 mm or Δx ≥ 2 mm.
3.1. Assessment of the Test Method's Repeatability. Evaluation of the repeatability of the registrations after 1-2 weeks in 10 patients showed statistically significant agreement only in measurements of Δx on the left side and Δz on the right side (Table 9). Other measurements were not statistically significant. Negative Δz values were recorded in 5 patients.
Due to the small number of negative Δz measurements, these (negative) measurements were repeated. A patient was excluded from the study if the repeated result was positive. Repeat negative values were averaged and included in the statistical analysis.
3.2. Results of Statistical Analysis. Significant positive correlations of Δx and Δz with the corresponding measurements of the opposite side were observed (Table 10). A negative correlation of the bilateral Δx shift with the left-sided Δz measurement was also shown, meaning that a backward condylar shift affects the left condyle, causing a downward shift of its position. A positive correlation was also noted between the Δy and Δz measurements on the right side, signifying that rightward condylar displacement affects the position of the right condyle, shifting it more downwards. Displacement of the condylar process in the anteroposterior axis was associated with a mandibular midline displacement (p = 0.041). Protrusion of the left condylar process resulted in a mandibular midline shift to the right (p = 0.029; Table 11).
The Pearson chi-square test for independence, which compared patients with an ideally positioned condylar process (Δx < 1 mm, Δz < 1 mm, and Δy < 0.5 mm) with patients with CDs exceeding ideal values, did not show a significant correlation between CD and the Helkimo index or the cephalometric measurements (ANB, SN-ML, and SGo/NMe). There was no statistically significant relationship between the range of condylar displacements in the 3 spatial planes and the cephalometric variables ANB and SGo/NMe (data not shown). However, a relationship between the displacement of the condylar processes in the transverse axis and the mandibular plane inclination angle (SN-ML) was observed (p = 0.033). Patients with posterior rotation had more of a rightward CD, while those with anterior rotation presented with leftward CD (p = 0.022; Table 12). A correlation between the classification of occlusion of the first molars according to Angle on the right side and the anteroposterior CD (Δx) was seen (p = 0.006; Table 13). A similar, but not statistically significant, trend was observed on the left side. The Dwass-Steel-Critchlow-Fligner test showed statistically significant differences between Angle's classes I and II (p = 0.01) and classes II and III (p = 0.02). The condyles in the MIP were distal to the position in CR in Angle's class II, as opposed to class I, where they were located anteriorly. There were no significant differences between classes I and III. No correlation between Angle's classes and CD in the z and y axes was seen. Furthermore, there was no relationship between CD size and the presence of scissor- or cross-bite.
Discussion
The MPI used with the SAM articulator in assessing the condyles' positions proved to be accurate and reliable [3,7,17,18]. Although most orthodontic patients could be assessed using ordinary hand-held casts, it is recommended to mount casts in an articulator since a malocclusion may mask the true maxilla-to-mandible ratio [13,19,20]. The occurrence of CD affects the orthodontic diagnostic process and changes malocclusion characteristics which are initially assessed in the MIP. In order to avoid errors in diagnosis and orthodontic treatment planning, lateral cephalometric images of the head should be converted from MIP to CR, especially when at least one axis displacement of ≥2 mm is present [19,21].
Crawford [3] has shown that CDs larger than 1 mm in the horizontal or vertical planes or 0.5 mm in the transverse plane may adversely affect the TMJ. According to researchers, TMD symptomatology increases when condylar position indicator (CPI) measurements oscillate between 1 and 2 mm. CDs over 2.0 mm, in turn, are critical factors that should be taken into account when estimating the risk of a TMD. The orthodontist is unlikely to achieve as accurate CR and MIP compliance as the restorative dentist, but these studies suggest that the smaller the difference in the CR-MIP, the less likely TMD symptoms will develop. TMD is a multifactorial pathology, and a direct correlation between occlusion and TMD symptoms is difficult to determine. Nonetheless, the lack of scientific evidence is not a confirmation that such a relationship does not exist [22]. Orthodontic treatment routinely alters a patient's occlusion, and abnormal tooth contact is one of the potential risk factors for TMD; thus, orthodontists should provide the patient with a therapeutic position that minimizes this risk [2].
This study saw a much smaller percentage of patients with significant CD in all three directions compared to other studies [14,15]. Significant displacement was observed in the transverse axis in 8 condyles (8.3%), in the vertical axis in 6 condyles (6.25%), and only in 1 condyle in the anteroposterior axis (1%). Five patients (10.4%) had significant displacement in the vertical or anteroposterior axis on at least one side. Seven (14.6%) patients had significant displacement in one of the three planes. Differences in the magnitude of shifts in the previous studies may have been due to anatomical differences of the TMJ's or dental arches, inclusion criteria of a given study, presence of a TMJ dysfunction, differences in the neuromuscular deprogramming methods, or differing CR registration techniques.
This research, like other studies [3,15,17], noted that the mandibular condyles of most patients in the MIP were located in the anteroinferior or posteroinferior range. In a group of untreated patients, 95.8% of the condyles in the MIP were located below the CR, which also coincides with other studies [3]. In this study, the majority of condyles (58.3%) were located in an anteroinferior position, meaning that they were displaced anteroinferiorly. However, in other studies published previously [3,11,15,17,21,23,24], most condyles were located in the posteroinferior range. An anterior displacement is associated with interceptive occlusal contacts. Ponces et al. [11] noted that in a group of hyper- and normodivergent patients, as in most studies [3,15,21,23,24], rearward dislocation of the condyles was observed more frequently, whereas the condyles were displaced more forward in a group of hypodivergent patients. The forward displacements observed here are probably the result of the deeper facial structures frequently encountered in this population. This may be associated with differences in muscular activity across facial patterns. Elevator muscles are stronger, placed more forward, and act more vertically in hypodivergent faces. This causes a greater release of force in the forward direction [3].
CD and Maxillofacial Morphology. This research confirms the results found in previous studies that there is no relationship between face morphology and CD size in anteroposterior and superoinferior measurements [14,15]. However, a relationship between the displacement of the condylar processes in the transverse axis and the mandibular plane inclination angle (SN-ML) was observed in this study. This may be a result of altered masticatory muscle work in patients with an improper vertical face shape. The data related to this topic are rare and contradictory; as such, more in-depth studies are needed in this direction. Girardot [23] noticed larger vertical and anteroposterior CDs in hyperdivergent face morphologies, while Burke et al. [25] found a reduction in the upper joint space in the same type of face. Ponces et al. [11] showed that a group of patients with a hyperdivergent face type was characterized by a much greater CD along the vertical axis; however, horizontal CD occurred in these patients much less often, being largest in the group of hypodivergent patients. The studied group in this study consisted mostly of normodivergent and hypodivergent patients, commonly seen in a white population, which may have been the reason for the differing results. Lim et al. [26] showed that patients with a large CR-MIP discrepancy were characterized by specific facial features: decreased SNB angle, N perpendicular to Pg, and height of the mandible's ramus, and increased ANB angle and inclination of the mandibular ramus in both CR and MIP. There were no significant differences in the measurements of the facial skeleton in the MIP in patients with small or large CR-MIP discrepancies, possibly due to the fact that only patients with TMJ disc displacement and CR-MIP discrepancy were included in that study. The small number of patients with a significant CD in this study may have been insufficient to confirm this observation. However, considering anteroposterior disorders, Shildkraut et al. [21] showed that the differences in the position of the condylar processes of the mandible in the vertical dimension Δz occurred equally in patients with skeletal classes I and II. The differing results of individual studies may result from varying research methodologies or nonuniform qualification of patients, such as the inclusion or elimination of patients with TMD symptoms. The neuromuscular system can respond to occlusal interferences in two ways: one by moving the condyle in the joint to achieve maximum occlusal contacts, while the second results in the appearance of an anterior open bite and contacts only on the lateral teeth. In the second situation the CD is reduced. Another factor affecting the heterogeneity of studies is the exclusion of negative values of vertical axis Δz displacements, commonly resulting from an error at the stage of obtaining the CR registration. Negative Δz values should not occur in patients without TMD. However, this may occur when patients with symptoms of TMJ degeneration are included in a study. The varying research results, the small number of such studies in Europe, and the lack thereof in Poland show the need to continue these types of projects on a larger scale, taking into account equal numbers of patients with different facial morphologies.
CD Asymmetry.
This study saw a mandibular midline shift in 54.2% of patients, although bilateral Δx measurements showed a significant relationship (nonparametric Spearman correlation 0.658). Measurements of Δz were also significantly correlated (nonparametric Spearman correlation 0.609). Some asymmetry was observed due to the fact that this correlation was not perfect. In this study, when the CD was downwards, it was greater on the left side, as in the study by Hidaka et al. [15]. The left condyle moved forward in the MIP (median shift of 0.25 mm), while the right condyle showed almost no tendency to move (median shift of 0.03 mm). This asymmetry resulted in a rightward midline shift of the mandible in the MIP. Statistical analysis confirmed this observation to be significant (p = 0.029). This study also noticed a negative correlation between anteroposterior and superoinferior measurements, admittedly only concerning the left condyle: displacement of the left condyle forward affects its upward position. Hidaka et al. [15] also noticed pronounced asymmetry of the CD, noting that displacement downwards was larger on the left while forward displacement was larger on the right. These features may cause displacement of the anterior portion of the mandible to the left; however, Hidaka et al. showed only a weak positive correlation in this direction. Studies by Pullinger et al. [27] have also shown a low occurrence of lateral shifts and a slight association with asymmetry of right and left condyle positions. Therefore, mandibular condyle displacement can be one of many components of mandibular asymmetry. The clinical implications resulting from these observations should prompt in-depth investigations, especially in patients with midline displacement of the mandibular arch. When a malocclusion is due to asymmetry of the condyles in their articular fossae, the CR should improve the occlusal condition. This may allow a skeletal component of the defect to be excluded from the diagnosis, and correction of the midline through tooth movements may not be necessary to the extent that would have been planned in the MIP. However, this study did not confirm a relationship between the presence of transverse malocclusions and CD size. This may be due to the small number of patients included in the study with this particular type of defect.
CD and Angle's Classification. Wood and Elliot [28] noticed that the mandibular body and teeth can dislocate distally, resulting in an increased horizontal overlap, reduced vertical overlap, and a change in molar relation from Angle's class I to II. This research confirms these observations at the level of the condylar processes. The median anterior shift of Δx = 0.3 mm in patients with Angle's class I explains the occurrence of Angle's class II on casts registered in CR: after reaching first contact in class II, an anterior shift may occur so that final contact is achieved in the MIP in class I. Following this line of reasoning, a median shift of Δx = 0.33 mm in Angle's class III could reduce the severity of the defect on CR-registered casts. However, this study did not find a significant correlation here, which may be explained by the small number of patients with class III. The median displacement of Δx = −0.3 mm in patients of Angle's class II after mounting casts in CR could also reduce the defect, which changes due to the predisposition to shift posteriorly in order to achieve an MIP. Significant variations in Δx between classes II and III (p = 0.02) confirm this theory. However, according to Lim et al. [26], patients with a high CR-MIP displacement at the level of the incisors in CR have a more retracted mandible and a more vertical growth pattern, which may exacerbate the severity of a class II defect. This tendency is mainly seen in patients with a vertical growth pattern, hence the discrepancies found in this study. No statistically significant relationship was found between the molar classification according to Angle on the left side and the value of Δx (data not shown). This is most likely due to the small study group. A similar tendency to that seen on the right side, however, suggests that with a larger group this result would probably be significant. However, Utt et al. [14] found no differences in the size of MPI measurements between patients from classes I and II; therefore, further research is required to confirm these observations. Nonetheless, particular attention should be paid to the need for CR registration in patients with Angle's class I, in whom the condyles in the MIP are often displaced anteriorly.
Potential Research Errors.
A repeated recording of the CR, performed to assess measurement error after 1-2 weeks, did not reveal statistically significant agreement for most measurements. This is probably because the "power centric" method was developed to record the condylar position on the day of registration. The method's repeatability has been previously documented [13,29,30]. However, in the event of occlusal interference, an incorrect mandibular closing pattern may persist in order to avoid excessive occlusal forces on the teeth. This situation may impede determination of the CR [13,19,31]. An ideal study protocol would require complete deprogramming of all patients by splint therapy. However, this is not practical in small-scale studies due to the long duration of splint therapy. Including such deprogramming in future studies could increase their value. Some sources suggest excluding negative Δz values [13]. This study saw low values of negative Δz measurements; thus, the measurements were repeated in these patients. If the repeated result was again negative, it was included in the study, and the remaining patients were rejected. Negative values may have resulted from the use of an averaged hinge axis in the articulator, insufficient patient deprogramming, excessive muscle tone/force, or TMJ degeneration. Slavicek attributes this finding to a compression phenomenon [19]. The risk of error was minimized by calibrating the apparatus every five patients and by having two independent researchers perform the measurements.
This study qualifies as an early attempt to assess the impact of the craniofacial structure and occlusal conditions on the position of the mandibular condyles. More studies with a larger patient base are needed. Studies to date have focused on hyperdivergent patients, whereas this study involved hypodivergent ones, which allowed for the observation of differences in the position of the condylar processes in Angle's classes I and II and of the correlation between lateral shifts of the mandible and asymmetry of the right and left condyle positions. A larger number of similar studies would allow a meta-analysis to be carried out, which would help orthodontists assess the need for cast articulation in orthodontic treatment in various patient groups.
Conclusion
Within the limitations of the present study, cephalometric measurements (ANB, SGo/NMe, and SN-ML) do not provide sufficient information to predict the frequency, size, and direction of CD at the level of the condylar processes. Cast analysis in an articulator makes it possible to diagnose the size and direction of the CD and is particularly desirable in patients with Angle class I, in whom an anterior CD may mask the occurrence of an Angle class II in CR. In addition, it would allow an assessment of whether the malocclusion is the result of an eccentric shift of the mandible, in which asymmetrical displacement of the condyles results in a mandibular midline shift.
Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. | 2022-01-21T17:07:20.337Z | 2022-01-19T00:00:00.000 | {
"year": 2022,
"sha1": "47eaee3e586d7f32bca5235ca9fbaf629ea1e5fd",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/bmri/2022/1439203.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "74e222a6377643e267f19bb6fba85192bf3a492e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119297728 | pes2o/s2orc | v3-fos-license | Mid-infrared (3-8 {\mu}m) $\text{Ge}_{1-y}\text{Sn}_y$ alloys (0.15<$y$<0.30): synthesis, structural, and optical properties
$\text{Ge}_{1-y}\text{Sn}_y$ alloys with compositions in the 0.15<$y$<0.30 range have been grown directly on Si substrates using a chemical vapor deposition approach that allows for growth temperatures as high as 290 $^{\circ}$C. The films show structural properties that are consistent with results from earlier materials with much lower Sn concentrations. These include the lattice parameter and the Ge-Ge Raman frequency, which are found to depend linearly on composition. The simplicity of the structures, directly grown on Si, makes it possible to carry out detailed optical studies. Sharp absorption edges are found, reaching 8 $\mu$m near $y$ =0.3. The compositional dependence of edge energies shows a cubic deviation from the standard quadratic alloy expression. The cubic term may dramatically impact the ability of the alloys to cover the long-wavelength (8-12 $\mu$m) mid-IR atmospheric window.
The recent development of Si-compatible Ge1-ySny alloys represents an intriguing opportunity for infrared technologies, since the alloys are expected to possess a direct band gap E0 between 0.8 eV and -0.4 eV, similar to the ubiquitous HgCdTe system. Furthermore, most theoretical predictions and extrapolations from experimental data indicate that E0 becomes zero for y = 0.25-0.30, so that the two important mid-IR atmospheric windows in the 3-5 μm and 8-12 μm ranges should be accessible using Ge-rich alloys. However, a full experimental verification of these predictions is not available, because systematic band gap studies become increasingly problematic as the Sn concentration exceeds 15%. This is due to the fact that the standard Ge-buffer technology loses its effectiveness for accommodating the lattice mismatch, causing a deterioration in materials quality that can even lead to epitaxial breakdown. 1 Attempts to circumvent the quality issues to achieve alloys with y >> 0.15 are based on lowering the growth temperature to about 150 ℃ in molecular beam epitaxy (MBE), 2,3 or using complex buffer layers with intermediate compositions. [4][5][6] However, very few reports have been published on band gaps from such samples, and the few results available are inconsistent. For example, Imbrenda finds E0 = 0.358 eV for y = 0.15 alloys grown by MBE on Ge substrates, 7 whereas Dou et al. obtain exactly the same value for y = 0.223 using chemical vapor deposition (CVD) on graded buffers deposited on Si substrates. 6 These contradictory results suggest that the elucidation of the mid-infrared optical properties of Ge1-ySny will require experiments on samples that meet the following requirements: first, a smooth and monotonic compositional dependence of their structural properties that is consistent with the properties of low-Sn alloys of proven quality; second, the samples should not contain intermediate buffer layers, particularly graded ones, because such complex structures make it very difficult to extract the optical properties of the layer of interest. In this letter, we report on the structural and optical characterization of a series of Ge1-ySny alloys that satisfy these two criteria.
Our Ge1-ySny alloys were synthesized by CVD using stoichiometric reactions of high-reactivity Ge3H8 and SnD4 custom reagents. 8 The layers are grown directly on Si, bypassing Ge buffers and/or complex graded layers. The composition range reaches far beyond the previous y = 0.17-0.18 threshold for samples grown directly on Si, 9 and includes the highest Sn levels synthesized to date using practical CVD methods. The growth is conducted between 245-290 °C, significantly above the temperatures (~150 °C) employed in MBE. This facilitates nearly full strain relaxation, as shown by X-ray diffraction (XRD). The relaxed lattice parameter and the Ge-Ge Raman frequency follow the same linear compositional dependence previously established in low-Sn films, 10,11 demonstrating similar structural properties and no Sn segregation. The simplicity of the structures, devoid of buffer layers, makes it possible to carry out detailed optical studies using spectroscopic ellipsometry (SE). In the visible range, we find sharp features corresponding to all optical transitions observed in Ge-like materials. In the IR, we find absorption edges extending all the way down to 8 μm. The compositional dependence of these features shows a nearly ideal quadratic behavior for the high-energy transitions, but clear deviations from this dependence for E0(y). Compositional analysis gave Sn/Ge ratios closely matching the corresponding ratios in the gaseous mixtures. In the Si-doped samples, the amount of Si was found to range from 2% to less than 1% at the highest Sn concentrations. At such low levels Si has a very minor impact on the material properties, but the ability to incorporate this element under the high-Sn growth conditions may turn out to be important to achieve full mid-IR coverage, as discussed below. The temperatures that maximize the growth rate while maintaining a mirror-like surface appearance and suppressing Sn segregation lie within the 245-290 °C range quoted above.
These data points appear in the shaded region of the inset. The data from our new samples with 0.14 < y < 0.30, shown by squares, fall almost perfectly on the same curve determined from the low-Sn-concentration data. Essentially the same bowing parameter was also found in Sn-rich samples by Carrasco et al. (Ref. 14), so that the quadratic dependence is valid over the entire 0 < y < 1 compositional range.
The band gap spectral region was investigated with infrared SE (IRSE). The measurements were performed on a J.A. Woollam IR-VASE system over an energy range extending from 0.03 to 0.7 eV, with a step size of 1 meV and three angles of incidence, typically 65°, 70°, and 75°. The sample was modeled as a substrate, a GeSn film, an oxide layer, and a roughness layer. The absorption edge was fit with a Tanguy-type excitonic expression generalized for the so-called Hulthén excitonic potential. 18 This potential includes a screening parameter g that is computed using a prescription from Bányai and Koch 19 starting from the Thomas-Fermi screening wave vector, which is calculated using standard expressions. The direct band gap as a function of composition is shown in Fig. 6. An earlier study 20 proposed an expression of the form E0(y) = (1 - y)E0(Ge) + yE0(Sn) - (b0 + b1·y)·y(1 - y), with b0 = 2.66 eV and b1 = -5.4 eV. These fit parameters were obtained by studying samples with y < 0.10, which are indicated as circles in Fig. 6(b). Convincing statistical evidence for this effectively cubic compositional dependence was obtained from a very large sample set, but the deviations from a purely quadratic function are very small for y < 0.10. Our extended data for y > 0.10, on the other hand, provide clear evidence for the characteristic S-shape associated with cubic terms. A fit that includes all available data points gives b0 = 2.88 ± 0.04 eV and b1 = -5.23 ± 0.025 eV, and is shown as a solid line. The S-like shape in the compositional dependence has been observed in III-V alloy systems 21,22 and was justified theoretically in Ref. 20.
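Where a fit of this kind is mentioned, a minimal sketch with scipy could look as follows. The end-member gaps E0(Ge) = 0.80 eV and E0(Sn) = -0.41 eV are literature values assumed here, and the (y, E0) data pairs are illustrative placeholders rather than the measured values of this work.

```python
# Sketch: fit E0(y) with a composition-dependent bowing b(y) = b0 + b1*y,
# as in the expression discussed above. Endpoint gaps for Ge (0.80 eV) and
# alpha-Sn (-0.41 eV) are assumed literature values; the (y, E0) points
# below are placeholders, not the measured values from this work.
import numpy as np
from scipy.optimize import curve_fit

E0_GE, E0_SN = 0.80, -0.41  # direct gaps of the end members (eV)

def e0_model(y, b0, b1):
    """Quadratic alloy interpolation with a linearly varying bowing term."""
    return (1 - y) * E0_GE + y * E0_SN - (b0 + b1 * y) * y * (1 - y)

# Hypothetical composition/band-gap pairs standing in for SE-derived gaps.
y_data = np.array([0.02, 0.06, 0.10, 0.15, 0.20, 0.25, 0.30])
e0_data = np.array([0.72, 0.58, 0.47, 0.35, 0.26, 0.20, 0.16])

popt, pcov = curve_fit(e0_model, y_data, e0_data, p0=(2.7, -5.0))
perr = np.sqrt(np.diag(pcov))
print(f"b0 = {popt[0]:.2f} +/- {perr[0]:.2f} eV, "
      f"b1 = {popt[1]:.2f} +/- {perr[1]:.2f} eV")
```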
Furthermore, Lan, Chang, and Liu (LC&L) also predict an S-like dependence from a pseudopotential band structure calculation within the virtual crystal approximation. 23 Their calculation is very well fit by a cubic expression, and we show the result (combined with room temperature band gaps) as a dash-dotted line in Fig. 6(b). We see that the deviations between the best fit to the data and the LC&L prediction are not very large over the y < 0.3 range covered by the experimental data, but the extrapolations to Sn-rich alloys are significantly different. Both predictions seem to disagree with recent measurements on samples with y > 0.94 (Ref. 14), but the discrepancy should be interpreted with caution because the E0 transition in α-Sn has a unique line shape that is not fully understood. 24 The difference between the LC&L prediction and the best fit in Fig. 6(b) has an important practical consequence: LC&L predict a vanishing band gap for y = 0.35, which implies that the 8-12 μm window could be easily covered with Ge-rich Ge1-ySny alloys. On the other hand, if our best fit is valid well beyond y > 0.3, the full 8-12 μm window would only be accessible to alloys with y = 0.7-0.8, which, if feasible, may require a completely different growth strategy. There is, however, a counterintuitive approach that may lead to smaller band gaps in a Ge-rich material: the incorporation of Si, which under suitable conditions may further reduce E0. Our finding that Si can be incorporated into the lattice using the Si4H10 precursor suggests that our growth strategy is a promising route to achieve full mid-IR coverage at modest Sn concentrations.
This work was supported by the AFOSR under grants FA9550-17-1-0314 and FA8650-18-C-1152. The use of the TEM facility at the Eyring Materials Center is gratefully acknowledged. | 2019-04-15T17:24:13.000Z | 2019-04-15T00:00:00.000 | {
"year": 2019,
"sha1": "92c501676dca19e9f86e02f36cb0f73340b2e57a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1904.07201",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c55932d35087989fe540bd7108507fdc90f1a732",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
247842039 | pes2o/s2orc | v3-fos-license | RNA editing increases the nucleotide diversity of SARS-CoV-2 in human host cells
SARS-CoV-2 is a positive-sense, single-stranded RNA virus responsible for the COVID-19 pandemic. It remains unclear whether and to what extent the virus in human host cells undergoes RNA editing, a major RNA modification mechanism. Here we perform a robust bioinformatic analysis of metatranscriptomic data from multiple bronchoalveolar lavage fluid samples of COVID-19 patients, revealing an appreciable number of A-to-I RNA editing candidate sites in SARS-CoV-2. We confirm the enrichment of A-to-I RNA editing signals at these candidate sites through evaluating four characteristics specific to RNA editing: the inferred RNA editing sites exhibit (i) stronger ADAR1 binding affinity predicted by a deep-learning model built from ADAR1 CLIP-seq data, (ii) decreased editing levels in ADAR1-inhibited human lung cells, (iii) local clustering patterns, and (iv) higher RNA secondary structure propensity. Our results have critical implications in understanding the evolution of SARS-CoV-2 as well as in COVID-19 research, such as phylogenetic analysis and vaccine development.
Introduction
The rapid spread of coronavirus disease 2019 (COVID-19) across the world represents an urgent healthcare emergency. By January 2022, the virus had infected >352 million people and caused >5.6 million deaths globally, and these numbers continue to increase. COVID-19 is caused by a novel coronavirus designated as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1,2]. In the past two years, extensive efforts have been made to characterize this highly contagious virus: the genomes from thousands of infected patients have been sequenced, and the transcriptome architecture has been determined. The genome of SARS-CoV-2 is a positive-sense, single-stranded RNA of ~30 kb and contains ten canonical RNA products in addition to a few unknown ORFs [1-3]. These results have provided a key foundation for elucidating the evolutionary pattern and pathogenicity of SARS-CoV-2 and for developing effective treatment strategies. However, our knowledge of nucleotide variation and plasticity of this viral genome is still limited, especially regarding RNA modifications induced in human host cells.
RNA editing is a widespread nucleotide modification mechanism through which specific nucleotides are modified by RNA editing enzymes at the RNA level without altering template genomic DNA [4]. Adenosine to inosine (A-to-I) is the most prevalent editing type in humans [5]. The A-to-I conversion is catalyzed by adenosine deaminases that act on RNA (ADARs), and the resulting inosines are recognized as G by the translational machinery [6,7]. The other known RNA editing type is cytidine to uridine (C-to-U), which is catalyzed by APOBEC1 [8]. Upon entering human cells, whether and to what extent SARS-CoV-2 is subjected to the activities of human RNA editing enzymes remains largely unexplored. This knowledge is of importance for at least two reasons. First, as the virus employs its negative-strand RNA as a replication template [9] (Fig 1A shows the example of A-to-I RNA editing), the nucleotide changes thus induced could become a direct source of genetic variations inherited from generation to generation. Second, in sharp contrast to the human genome, the vast majority of the SARS-CoV-2 genome is protein-coding, and thus, RNA editing events would have a much higher probability of causing amino acid changes, thereby modifying protein products. Although identifying RNA editing events from RNA-sequencing data has been well described in many species, including humans, such an analysis for an RNA virus is not trivial. This is because, without the DNA sequence for comparison, it is almost impossible to distinguish single nucleotide variants (SNVs) caused by spontaneous mutation processes from those due to RNA editing, solely based on alignment-based sequence analysis. In this study, our strategy was to first identify a high-confidence nucleotide variant candidate pool from metatranscriptomic sequencing reads of COVID-19 patient samples using a robust bioinformatics pipeline and then test whether real RNA editing signals were enriched in the candidate pool. To do so, we evaluated multiple RNA editing-specific characteristics of the candidate sites in comparison to other A/T sites in the SARS-CoV-2 genome, including (i) ADAR1 binding affinity predicted by a deep-learning model based on ADAR1 CLIP-seq data; (ii) cause-effect relationship between ADAR1 expression and the global RNA editing level based on a drug-treated human cell line perturbation experiment, (iii) local clustering patterns of candidate sites from a distance-based analysis, and (iv) RNA secondary structure propensity. The results from these analyses strongly suggest that an appreciable proportion of the RNA variants we identified result from the ADAR1-mediated A-to-I RNA editing process.
Results
To study the potential effects of A-to-I RNA editing in SARS-CoV-2, we first performed a systematic analysis of metatranscriptomic sequencing reads of bronchoalveolar lavage fluid samples of COVID-19 patients obtained from four independent studies (S1 Table). We developed a rigorous bioinformatics pipeline to detect SNVs, which includes (i) removing low-quality reads, (ii) identifying viral reads using Fastv [10], (iii) trimming end nucleotides due to their higher error rates, (iv) generating high-quality alignment, and (v) detecting SNVs with a significant variant allele frequency (VAF) above the background mismatch rate (Fig 1B). For the 19 samples investigated, one sample had no detectable SNVs; among the remaining samples, we observed a consistent pattern of A>G and T>C substitutions showing the highest abundance, followed by C>T and G>A substitutions, in 17 samples (Figs 1C, S1, and S2). The dominance of the two SNV types, A>G and T>C, corresponding to potential A-to-I RNA editing events (in the positive and negative strands, respectively), was consistent at different VAF cutoffs across the 17 samples (Fig 1D). It should be emphasized that several sources may contribute to these observed SNVs, including sequencing errors, single nucleotide polymorphisms (SNPs, the fixed nucleotide differences between the studied virus and the reference virus genome), de novo mutations, and acquired RNA editing events in human cells. To distinguish high-confidence A-to-I RNA editing events from other types of variations, we applied a series of filters to the A>G/T>C variants in the 17 samples. First, given that (i) the Illumina sequencing error rate is known to be ~0.1% [11] and (ii) the SARS-CoV-2 mutation rate is estimated to be similar to that of the mouse hepatitis virus (MHV), which is 2.5×10^-6 substitutions per site per cell infection [12], we filtered out those with VAF < 0.5% to remove the potential contamination of sequencing errors and de novo mutations as well as very weak RNA editing sites. Second, given that the prevalence of SARS-CoV-2 SNPs has been estimated to be 9.6 nucleotides between any two viral sequences [13], we also excluded a handful of SNVs with VAF > 70% since they are likely to be of such a source. Third, we focused on those recurrent editing sites present in at least 3 out of the 17 samples. In total, we identified 144 recurrent A>G/T>C SNVs with VAF of 0.5-70%, yielding a high-confidence set of A-to-I RNA editing candidates, which is far more than for any other SNV type based on the same procedure (Fig 1E and S2 Table). We further examined the flanking sequences of these candidate editing sites and observed a preference for G depletion and enrichment at the nucleotides 5' and 3' to the editing sites (-1 and +1 positions), respectively, which is consistent with the context signal previously reported in human transcripts (Fig 2A) [14-16]. In terms of functional impact, 55% of the editing sites would cause nonsynonymous substitutions (Fig 2B), most of which are in ORF1ab, followed by the spike protein (Fig 2C). Fig 2D shows their position distribution along the viral RNA genome.
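As a minimal illustration of the filtering logic just described (VAF window plus recurrence), a sketch along the following lines could be used; the DataFrame column names are assumptions for illustration, not the pipeline's actual interface.

```python
# Sketch of the candidate-selection logic described above: keep A>G/T>C
# variants with VAF between 0.5% and 70%, then require recurrence in at
# least 3 of the 17 samples. Column names are illustrative assumptions.
import pandas as pd

def select_editing_candidates(snvs: pd.DataFrame,
                              vaf_min=0.005, vaf_max=0.70,
                              min_recurrence=3) -> pd.DataFrame:
    """snvs: one row per (sample, position) with columns
    ['sample', 'pos', 'ref', 'alt', 'vaf']."""
    # Restrict to the two substitution types compatible with A-to-I editing
    # on the positive and negative strands, respectively.
    a2g = (snvs['ref'] == 'A') & (snvs['alt'] == 'G')
    t2c = (snvs['ref'] == 'T') & (snvs['alt'] == 'C')
    cand = snvs[(a2g | t2c) & snvs['vaf'].between(vaf_min, vaf_max)]
    # Keep positions recurring in >= min_recurrence samples.
    counts = cand.groupby('pos')['sample'].nunique()
    recurrent = counts[counts >= min_recurrence].index
    return cand[cand['pos'].isin(recurrent)]
```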
Although we followed the best common practice in the RNA-editing field to identify a set of high-confidence A>G/T>C mismatch positions that served as an A-to-I RNA editing candidate pool, as discussed above, it is impossible to exclude the potential contributions of other sources, e.g., de novo mutations. To address this challenge, we sought to test whether our candidate pool was enriched for genomic and functional features that are known to be specific to A-to-I RNA editing. ADAR1 is the major enzyme responsible for most A-to-I RNA editing signals observed in humans [17,18]. Because an RNA virus replicates in the cytoplasm and the human p150 isoform of ADAR1 is present in this compartment as well, it is supposedly the major factor responsible for viral A-to-I RNA editing activity [19]. Thus, we reasoned that if a large proportion of the inferred RNA editing sites in SARS-CoV-2 are authentic, these sites would be expected to have higher ADAR1 binding affinity than other A/T sites, which can be evaluated through a sequence-based binding affinity prediction model. Based on a recent ADAR1 CLIP-seq peak set [20], we built a hybrid neural network consisting of a dilated deep convolutional neural network, a deep recurrent neural network, and, finally, a fully connected layer, inspired by the deepRAM architecture [21], which is designed to effectively capture the RNA sequence context around ADAR1 binding peaks (including both local motifs and long-range interactions) (Fig 3A). We achieved extremely high performance with this model in cross-validation (training set, area under receiver operating characteristic curve [AUROC] = 0.998 and area under precision-recall curve [AUPRC] = 0.998; testing set, AUROC = 0.985 and AUPRC = 0.988, Fig 3B). We further validated the model performance using 10,000 independent human A-to-I RNA editing sites [22] and observed a sharp peak with a fairly low variance of ADAR1 binding scores centered on these known sites, further supporting the high accuracy of our model (Fig 3C). Notably, because our model was trained on human sequences, which inevitably caused the model to learn features specific to both ADAR1 binding and the human genomic context, predicted ADAR1 binding affinity scores cannot be compared across different species directly. Instead, it is more appropriate to compare different sites for their relative ADAR1 binding affinity within the same species because they share the same genomic context. Confirming our hypothesis, the RNA editing sites detected in SARS-CoV-2 showed a significant shift towards higher ADAR1 binding scores (Kolmogorov-Smirnov test, p = 0.035, Fig 3D). We also found that the enrichment ratio increased with the score cutoff value (Fig 3E). This result indicates that many of these RNA editing candidate sites indeed tend to bind to ADAR1.
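The distribution-shift comparison described above can be reproduced in outline with a two-sample Kolmogorov-Smirnov test; the score arrays below are random placeholders standing in for actual model outputs.

```python
# Sketch of the distribution-shift test: compare predicted ADAR1 binding
# scores at candidate sites against the remaining A/T sites with a
# Kolmogorov-Smirnov test. Score arrays here are placeholders.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
scores_candidates = rng.beta(6, 3, size=144)   # stand-in for model outputs
scores_background = rng.beta(5, 4, size=5000)

stat, pval = ks_2samp(scores_candidates, scores_background)
print(f"KS statistic = {stat:.3f}, p = {pval:.3g}")
```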
We further evaluated other RNA-editing-specific features of the candidate pool in multiple aspects. First, to test the causal relationship between the expression of ADAR1 and the global RNA editing level, we obtained the RNA-seq data generated from a human lung cell line model following SARS-CoV-2 infection (Fig 4A) [23]. In the infected human cells, the ADAR1 expression level was significantly inhibited by an immunosuppressive reagent, ruxolitinib (t-test, p = 5×10^-4, Fig 4B; the inhibitory effect was more striking for the p110 mRNA isoform, S3 Fig). Consistently, we observed much lower average RNA editing levels across the candidate sites (t-test, p = 0.035, Fig 4B). In addition to the sample-wise comparison, we analyzed the editing-level change per site and found that 63 out of the 84 (75%) editing sites with sufficient coverage showed a decreased editing level, significantly higher than the random expectation (one-sided binomial test, p = 2.5×10^-6, Fig 4C). This result demonstrated a direct effect of host ADAR1 on the dynamics of viral A-to-I RNA editing. Second, RNA editing sites are known to form local clusters. Indeed, the RNA editing candidate sites showed a much shorter distance to neighboring RNA editing candidate sites than randomly sampled, same-size A/T control sets (permutation test, p < 1×10^-3, Fig 4D). Third, A-to-I RNA editing is known to be specific to double-stranded RNA structures. Using a computational RNA structure prediction algorithm, CROSS [24], we assessed the secondary structure propensity of the SARS-CoV-2 sequence and found that the inferred RNA editing sites were enriched in regions with significantly higher RNA secondary structure propensity (Kolmogorov-Smirnov test, p = 0.014, Fig 4E). Indeed, the proportion of RNA editing sites was significantly higher than that of non-edited sites in secondary structure regions using different propensity-score cutoffs (Fig 4F). These multiple lines of evidence strongly suggest that a considerable proportion of our inferred RNA editing candidate sites result from ADAR1-mediated A-to-I RNA editing.
Finally, we examined the potential impact of A-to-I RNA editing events on two aspects of COVID-19 research. First, the phylogenetic analysis of SARS-CoV-2 plays a key role in studying the virus origin and evolutionary patterns. Although the vast majority of the RNA editing events have a low editing level, several cases can reach a very high level (e.g., ≥ 30%), thereby likely being identified as major alleles in the genome assembly. Thus, RNA editing signals may confound the phylogenetic inference. To demonstrate this point, we compared phylogenetic trees for seven samples from two studies (SRP142226 and SRP248092) after either excluding the 10 heavily edited sites (Fig 5A) or considering the edited alleles at these sites (Fig 5B) and found distinct tree topologies. Second, epitope-based vaccines have been under intensive investigation for COVID-19 prevention. We recently reported that RNA editing contributes to peptide diversity, and editing-derived epitopes can elicit immune responses in cancer cells [25,26]. Focusing on recurrent RNA editing events across samples, we assessed the effects of RNA-editing-induced amino acid changes on the binding affinity of T-cell epitopes to HLA and found a few cases where the edited peptide significantly increased the binding affinity relative to the wild-type peptide (Fig 5C and S3 Table).
Discussion
In this study, we provide global evidence that SARS-CoV-2 undergoes ADAR-mediated A-to-I RNA editing in human cells. Although it remains unclear as to what extent the detected RNA editing occurs in the virus genome vs. transcribed RNA products, besides spontaneous mutations, RNA editing may represent another source of genetic variants that can shape the plasticity and evolution of this virus. SARS-CoV-2 genome replication mainly takes place in the cytoplasm. Besides the ADAR1 p150 isoform, which is present in the cytoplasm, our results suggest that the ADAR1 p110 isoform plays a role in the RNA editing activity for SARS-CoV-2, which is consistent with a recent study showing that p110 acts as a restriction factor for influenza virus [19]. In sum, the RNA editing events identified across the virus genome are likely mediated by ADAR1, as supported by our assessments on ADAR1 binding affinity and cause-effect pattern of ADAR1 expression and RNA editing activities.
Our study has several limitations. First, although we observed consistent A-to-I RNA-editing signals above the background, the signal-to-noise ratio of our RNA editing calls is not high. This is mainly due to two reasons: (i) the A>G/T>C mismatches are more enriched among SNVs with an extremely low VAF (Fig 1D), but we only included those variants reaching a significant editing level to focus on the RNA editing events with a meaningful biological impact; (ii) because of the distinct sequence context of SARS-CoV-2, the ADAR1 binding model built from human sequence data may be underpowered to distinguish true RNA editing sites from the background noise. Additional efforts should be made to detect RNA editing events using a more accurate bioinformatic pipeline. Second, our study did not include direct experimental validation for the inferred RNA editing sites. For example, an assessment of the effects of ADAR1 inactivation on RNA editing patterns would provide more convincing evidence.
As an additional source of genetic variations, A-to-I RNA editing induced by human host cells would accelerate the overall evolution of SARS-CoV-2. Similar to spontaneous mutations, the fate of an RNA editing event depends on the fitness effect it causes: it can be subjected to purifying selection if it is deleterious or positive selection if it is advantageous. Given the very low editing levels, the fixation probability of the vast majority of A-to-I RNA editing events would probably be low. However, quantitative assessment of the fixation and evolutionary fate of such RNA editing events is challenging due to several reasons. First, it is hard to estimate the real RNA editing rate per generation from the observed VAF in bulk RNA-seq data, as the RNA editing level can be a result of multiple generations (due to multiple replications in a host cell, multiple cell infections within an individual human host, or even multiple hosts). Second, our knowledge about the fitness landscape of RNA editing events in SARS-CoV-2 is very limited (if any). Third, it remains unclear which model the virus evolution follows in host cells (e.g., "explosive growth" vs. "equilibrium"). Interestingly, we also observed C>T/G>A peaks in the SNV spectrum, which might reflect C-to-U RNA editing. These two types of RNA editing processes may cancel each other's effects on GC content in the viral genome to some extent. Further efforts are required to investigate the functional consequences of A-to-I editing events and assess whether C-to-U RNA editing also exists.
We note that two recent studies reported similar host-dependent RNA editing activities of SARS-CoV-2 in human cells [27,28]. However, we would like to emphasize three novel aspects of our study. First, through multiple independent analyses, including a deep-learning-based ADAR1 binding affinity model, we provide more convincing evidence for the inferred A-to-I RNA editing of SARS-CoV-2, which is independent of the alignment-based SNV profile analysis. Second, we show that the nucleotide variations induced by RNA editing could confound phylogenetic analysis, a key approach to inferring the evolutionary origin of SARS-CoV-2. Third, our results suggest that RNA-editing-derived peptides may serve as epitopes for vaccine development. However, A-to-I RNA editing has been considered one of the mechanisms that suppress the innate immune response induced by dsRNA in human cells [29,30] and has also been shown to be exploited by RNA viruses for immune evasion [31,32]. Thus, neoantigens due to RNA editing events may represent only a limited adverse effect in the interactions between the virus and host cells. Together, our study provides critical insights into the evolution of SARS-CoV-2 and highlights a need to consider these host-induced nucleotide variants in future COVID-19 research.
Sequencing data and preprocessing
All the sequencing data were generated as metatranscriptomic reads from bronchoalveolar lavage fluid of COVID-19 patients. We employed Fastp [10] to obtain clean reads and then Fastv (https://github.com/OpenGene/fastv) to extract viral reads.
Single nucleotide variant detection
Viral reads were aligned against the reference genome of SARS-CoV-2 (positive virus strand, NC_045512.2) with BWA MEM [33]. We first estimated the number of mismatches at different nucleotide positions in the reads. To do so, we mapped clean reads that passed quality control to the reference genome, calculated the mismatch frequencies at both read ends, and observed that the first ten and the last four nucleotides (from the 5' end) showed relatively high mismatch rates, suggesting higher sequencing error rates at these positions. We therefore trimmed these nucleotides from each clean read and realigned the reads. We focused on 19 samples with ≥ 20,000 clean reads mapped to the SARS-CoV-2 genome for downstream analysis. For each position with alternative allele(s) relative to the reference genome, we focused on the positions (depth ≥ 10) with a dominant alternative allele, which was defined as # reads of the dominant alternative allele > 10 × # reads of the remaining alternative alleles (if any). To further exclude SNVs likely caused by sequencing errors, we first empirically estimated the overall mismatch rate for each sample, followed by a binomial test, and only kept SNVs with a dominant alternative allele showing FDR < 0.25 and supported by at least two reads. Among the 19 samples, one sample had no detectable SNVs, and another sample did not show the A>G/T>C enrichment. We therefore focused on the remaining 17 samples for further analyses. To identify high-confidence A-to-I RNA editing events (A>G/T>C), we first retained those RNA editing events with an editing level of 0.5-70% in each sample and then selected those sites with a recurrence in ≥ 3 samples. We repeated the same procedures to identify other SNV types for comparison. We employed ANNOVAR [34] to annotate the 144 unique RNA editing sites based on the gff file for the SARS-CoV-2 genome (https://www.ncbi.nlm.nih.gov/nuccore/NC_045512.2/). We extracted the flanking sequences centered on each editing site to assess the preferred sequence contexts.
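A sketch of the binomial variant test with Benjamini-Hochberg FDR control might look as follows; the input tuples and the background rate are illustrative, and the real pipeline operates on per-sample pileups.

```python
# Sketch of the per-position variant test described above: compare the
# dominant alternative-allele count against the sample-wide background
# mismatch rate with a binomial test, then apply an FDR cutoff of 0.25
# and require support by at least two reads.
from scipy.stats import binomtest
from statsmodels.stats.multitest import multipletests

def call_snvs(positions, background_rate, fdr=0.25, min_alt_reads=2):
    """positions: list of (pos, alt_count, depth) tuples."""
    pvals = [binomtest(alt, depth, background_rate,
                       alternative='greater').pvalue
             for _, alt, depth in positions]
    reject, _, _, _ = multipletests(pvals, alpha=fdr, method='fdr_bh')
    return [p for p, keep in zip(positions, reject)
            if keep and p[1] >= min_alt_reads]

# Example: 3 candidate positions against a 0.2% background mismatch rate.
calls = call_snvs([(241, 6, 800), (3037, 2, 150), (14408, 1, 90)], 0.002)
print(calls)
```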
Construction and validation of an ADAR1 binding affinity prediction model
We employed a state-of-the-art deep neural network architecture as detailed previously [21] to build a prediction model that can evaluate the binding affinity of the human ADAR1 protein to SARS-CoV-2 genomic sequences. Briefly, the deepRAM architecture was based on a hybrid of a dilated deep convolutional neural network (CNN) and a deep recurrent neural network (RNN) to fully take advantage of the rich information embedded in the RNA sequence context (including both local motifs and long-range interactions). An automatic model parameter-sweeping procedure was used to ensure a parameter set that optimized the model performance.
To construct positive input data sets, we randomly extracted 20,000 101-bp RNA sequences centered on the peak summit from an ADAR1 binding peak set generated from a CLIP-seq experiment in the human U87MG cell line [20]. We built negative sets by applying dinucleotide-frequency-preserving shuffling to the positive sets to discourage the model from discriminating foreground sets from background sets by low-level genomic features only, such as GC content [35]. We randomly divided our data into 80% and 20% for training and testing, respectively. Following word2vec transformation, sequence features were propagated through the CNN, the RNN, and eventually a fully connected layer, where a sigmoid function was used to bound the network output between 0 and 1, representing the binding probability (Fig 2A). After 40 rounds of random hyper-parameter calibration, we arrived at a model with a CNN layer of 32 filters, a bi-LSTM layer of hidden size 100, an Adagrad optimizer, a Xavier initializer, a learning rate of 0.046, a dropout ratio of 0.3, and 5,000 learning steps.
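As a minimal, self-contained sketch of this kind of hybrid architecture, one could write the following; it uses one-hot encoding in place of the word2vec embedding, and the layer sizes are illustrative rather than the exact trained model.

```python
# Sketch of a CNN + bi-LSTM binding classifier in PyTorch, loosely following
# the deepRAM-style design described above. One-hot encoding replaces the
# word2vec embedding, and all hyper-parameters are illustrative.
import torch
import torch.nn as nn

class BindingModel(nn.Module):
    def __init__(self, n_filters=32, hidden=100, kernel=11):
        super().__init__()
        self.conv = nn.Conv1d(4, n_filters, kernel, padding=kernel // 2)
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 1)

    def forward(self, x):                    # x: (batch, 4, seq_len) one-hot
        h = torch.relu(self.conv(x))         # local motif detectors
        h, _ = self.lstm(h.transpose(1, 2))  # long-range dependencies
        return torch.sigmoid(self.fc(h[:, -1]))  # binding probability

model = BindingModel()
x = torch.zeros(8, 4, 101)  # a batch of 8 one-hot encoded 101-nt windows
print(model(x).shape)       # -> torch.Size([8, 1])
```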
To rigorously validate the ability of our prediction model to identify true A-to-I RNA editing sites, we assessed whether it would robustly distinguish the proximal flanking sequences of known RNA editing sites from the distal ones. We first randomly selected 10,000 RNA A-to-I editing sites from a pool of RNA events in a lymphoblastoid cell line annotated in the RADAR database [22]. Then, we partitioned the 1,101-bp region centered on the RNA editing sites into consecutive 1,001 101-bp windows with a step size of 1 bp. Finally, we scanned these windows with our model to generate a continuous ADAR1 affinity distribution. We compared the ADAR1 binding scores between the 144 RNA editing sites and those sites without editing signals detected in any sample.
Analysis of RNA editing in the drug-treated cell line perturbation experiment
To validate whether the identified RNA editing sites are directly modulated by ADAR1 activity, we analyzed a public RNA-seq dataset in which ADAR1 was inhibited (three replicates in the drug-treated and control groups, GSE147507, series 16). In brief, the SARS-CoV-2 receptor ACE2 was over-expressed in a lung adenocarcinoma cell line, A549. The cells were then treated with ruxolitinib (a JAK1 and JAK2 kinase inhibitor) or control, denoted as SARS-CoV-2_Rux and SARS-CoV-2, respectively, and infected with SARS-CoV-2. The expression level of ADAR1 and its isoforms was calculated by Cufflinks, and a two-tailed Student's t-test was employed to evaluate the statistical significance between the two groups. To quantify the RNA editing level across the six samples, we downloaded fastq files from SRA (Accession No. SRP253951). We employed Fastp [10] to obtain clean reads and then Fastv (https://github.com/OpenGene/fastv) to extract viral reads. Viral reads were aligned against the SARS-CoV-2 reference genome (NC_045512.2) with BWA MEM [33]. For each BAM file, we calculated the RNA editing levels at the 144 editing candidate sites (for an "A" site: #G/depth; for a "T" site: #C/depth), and then the average values for sites with sufficient coverage (≥ 10×) were compared to assess the editing activity difference between the ADAR1-high (control) and ADAR1-low (drug-treated) groups. A two-tailed Student's t-test was used to assess the statistical significance between the two groups. We also compared the editing-level change per site upon the treatment and used a binomial test with a success rate of 0.5 to test whether significantly more RNA editing sites were inhibited.
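The per-site binomial test reduces to a one-liner; a sketch using the reported counts (63 of 84 sites decreased) is shown below.

```python
# Sketch of the per-site comparison above: count sites whose editing level
# drops after ADAR1 inhibition and test against a 50% null with a one-sided
# binomial test (63 of 84 sites decreased in the actual analysis).
from scipy.stats import binomtest

n_sites, n_decreased = 84, 63
res = binomtest(n_decreased, n_sites, p=0.5, alternative='greater')
print(f"p = {res.pvalue:.2g}")  # close to the reported 2.5e-6
```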
RNA editing site clustering analysis
For each A-to-I editing site, we calculated the shortest distance (in nt) between the site and its two immediately neighboring editing sites, i.e., the distance to the nearer of the two, and we obtained the median value across all the sites.
We performed the same analysis for 1,000 control sets, each consisting of the same numbers of A and T, randomly sampled from the SARS-CoV-2 genome. We compared the median values of the true RNA editing set against those control sets to assess the statistical significance.
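A sketch of this permutation test, assuming positions are given as integer coordinates, could read:

```python
# Sketch of the clustering test above: the observed median nearest-neighbor
# distance among editing sites is compared with 1,000 control sets of A/T
# positions sampled from the genome. Inputs are illustrative coordinates.
import numpy as np

def median_nn_distance(sites):
    s = np.sort(np.asarray(sites))
    gaps = np.diff(s)
    # Nearest neighbor of each site is the closer of its two flanking gaps.
    nn = np.minimum(np.r_[gaps, np.inf], np.r_[np.inf, gaps])
    return np.median(nn)

def clustering_pvalue(edit_sites, at_positions, n_perm=1000, seed=1):
    rng = np.random.default_rng(seed)
    obs = median_nn_distance(edit_sites)
    null = [median_nn_distance(rng.choice(at_positions, len(edit_sites),
                                          replace=False))
            for _ in range(n_perm)]
    # One-sided: clustering means a *smaller* median distance than random.
    return (1 + sum(d <= obs for d in null)) / (n_perm + 1)
```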
Prediction of SARS-CoV-2 RNA secondary structure propensity
The secondary structure propensity score of the SARS-CoV-2 sequence was based on the SHAPE-MaP profiling data [36]. To account for the genomic context of the flanking sequences, we computed a smoothed RNA secondary structure propensity score for each position by averaging the scores of the 170 nucleotides upstream and downstream.
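A minimal sketch of this centered moving average (window of 2 × 170 + 1 nucleotides, with edge correction) is:

```python
# Sketch of the smoothing step above: average the structure propensity over
# +/-170 nt around each position (a 341-nt centered moving average).
import numpy as np

def smooth_propensity(scores, half_window=170):
    scores = np.asarray(scores, dtype=float)
    kernel = np.ones(2 * half_window + 1)
    counts = np.convolve(np.ones_like(scores), kernel, mode='same')
    return np.convolve(scores, kernel, mode='same') / counts
```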
Phylogenetic tree construction
To reconstruct phylogenetic trees, we first inferred the genome sequences of the seven samples by replacing the reference nucleotides with SNVs with a VAF of ≥ 50%. We reconstructed the phylogenetic trees using the Unweighted Pair Group Method with Arithmetic mean (UPGMA) algorithm from MEGA-X [37] under two conditions: i) excluding the 10 RNA editing sites with an editing level of ≥ 30% in any sample, and ii) considering the edited alleles (G or C) at these sites in the corresponding samples. We performed a bootstrapping analysis 1,000 times to evaluate the topology robustness.
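Since UPGMA is equivalent to average-linkage hierarchical clustering on a pairwise distance matrix, a minimal stand-in for the MEGA-X step, with toy sequences in place of the inferred genomes, could be:

```python
# Sketch of UPGMA tree construction from consensus sequences. SciPy's
# average-linkage clustering on a condensed distance matrix implements
# UPGMA; the sequences here are toy stand-ins for the inferred genomes.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

seqs = {'s1': 'ACGTAG', 's2': 'ACGTAA', 's3': 'ACATAA'}
names = list(seqs)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

n = len(names)
dmat = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dmat[i, j] = dmat[j, i] = hamming(seqs[names[i]], seqs[names[j]])

tree = linkage(squareform(dmat), method='average')  # average linkage = UPGMA
print(dendrogram(tree, labels=names, no_plot=True)['ivl'])
```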
Epitope prediction
To evaluate the possibility of epitopes introduced by RNA editing, we extracted both wild-type and edited peptide sequences around the missense, high-confidence A-to-I RNA editing events. We performed eluted ligand likelihood prediction using the netMHCpan (v4.0) webserver [38]. We only considered the 100 most common HLA haplotypes across 21 populations [39].

S3 Table. List of potential epitopes introduced by high-confidence A-to-I RNA editing candidates. (XLSX) | 2022-04-01T06:22:58.563Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "4282331a92d5e46305a1a36f885854fef4ad6603",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1010130&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c30d91c54ca7ce0e62b262dbdb680b83d14e1ae4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56006147 | pes2o/s2orc | v3-fos-license | Large-baseline InSAR for precise topographic mapping : a framework for TanDEM-X large-baseline data
The global Digital Elevation Model (DEM) resulting from the TanDEM-X mission provides information about the world topography with outstanding precision. In fact, performance analysis carried out with the already available data has shown that the global product is well within the requirements of 10 m absolute vertical accuracy and 2 m relative vertical accuracy for flat to moderate terrain. The mission's science phase took place from October 2014 to December 2015. During this phase, bistatic acquisitions with across-track separation between the two satellites up to 3.6 km at the equator were commanded. Since the relative vertical accuracy of InSAR-derived elevation models is, in principle, inversely proportional to the system baseline, the TanDEM-X science phase opened the doors for the generation of elevation models with improved quality with respect to the standard product. However, the interferometric processing of the large-baseline data is troublesome due to the increased volume decorrelation and very high frequency of the phase variations. Hence, in order to fully profit from the increased baseline, sophisticated algorithms for the interferometric processing, and, in particular, for the phase unwrapping have to be considered. This paper proposes a novel dual-baseline region-growing framework for the phase unwrapping of the large-baseline interferograms. Results from two experiments with data from the TanDEM-X science phase are discussed, corroborating the expected increased level of detail of the large-baseline DEMs.
Introduction
Synthetic Aperture Radar Interferometry (InSAR) is a well established remote sensing technique widely employed for the retrieval of topographic information (Bamler and Hartl, 1998; Moreira et al., 2013). Several spaceborne and airborne SAR systems have been actively acquiring interferometric data in the past decades. Among those, the TanDEM-X (TerraSAR-X add-on for Digital Elevation Measurements) stands out as a single-pass bistatic radar mission designed to deliver a highly accurate Digital Elevation Model (DEM) with 90 % point-to-point relative vertical error smaller than 2 m for areas of moderate terrain, and smaller than 4 m for steep areas, on a grid of around 12 m by 12 m spacing (Krieger et al., 2007, 2013).
In October 2014, after successfully completing the data acquisition for the construction of the standard global DEM (Zink et al., 2014, 2016), the TanDEM-X mission entered its science phase. During this phase, acquisitions with very large across-track separation between the two satellites have been performed in both pursuit monostatic and bistatic modes (Hajnsek and Busche, 2014; Buckreuss and Zink, 2016). Such configurations enable the generation of local DEMs with higher horizontal and/or vertical accuracies than the standard TanDEM-X products. In fact, with proper combination of baselines and tuning of the system parameters, products fulfilling the HRTI-4 standard (i.e., 6 m posting and relative vertical accuracy of less than 0.8 m) can be achieved (Wessel et al., 2016; Pinheiro and Reigber, 2016).
This paper presents a new approach for the generation of highly accurate DEMs using data from the TanDEM-X science phase. Specifically, a dual-baseline region-growing phase unwrapping framework is proposed. Since the approach requires the calibration of the wrapped phases, an alternative for the calibration of orbital errors using the complex interferograms is briefly addressed. Finally, the elevation models obtained from two experiments are discussed, each experiment composed of two large-baseline TanDEM-X acquisitions.
2 Large-baseline SAR interferometry: potentials and limitations

Figure 1 shows a pictorial representation of an interferometric SAR system composed of one master (in black) and two slaves (in red and purple). The difference between master and slave viewing geometries due to the spatial baseline allows for the separation of scatterers located at the same range distance from one sensor (e.g., along the master iso-range), but having distinct heights above ground. As shown in the picture, the larger the baseline, the greater the difference between the master and slave wavevectors (k_m, k_s1 and k_s2) is and, consequently, the higher the phase variation induced by increments in the z and y directions. Moreover, when penetration occurs, e.g., when imaging semitransparent media such as forest or ice, multiple scatterers fall into the same resolution cell. In this case, the interferometric measurement has increased uncertainty, hindering the retrieval of accurate topography, as discussed later in this section. Finally, note that the height information retrieved with SAR interferometry corresponds to the radar phase center. When employing shorter wavelengths, e.g., Ka-band, the penetration is limited, and the retrieved model is closer to a surface model. On the other hand, when transmitting longer wavelengths, e.g., P-band, the wave penetrates deeper into the medium, and the retrieved model is closer to a terrain model.
The relative height accuracy of elevation models obtained through SAR interferometry is given by

σ_h = (h_2π / (2π)) · σ_φ,   (1)

where h_2π is the height of ambiguity (HoA), i.e., the height variation corresponding to a 2π change in the interferometric phase, and σ_φ is the standard deviation of the phase errors (Krieger et al., 2007). Since the height of ambiguity is inversely proportional to the baseline, large-baseline acquisitions can, in theory, yield DEMs with improved quality. However, the typical increase of the interferometric phase noise in datasets acquired with large baselines limits the effective improvement. The quality deterioration is mainly caused by the increase of baseline and volume decorrelation. Baseline decorrelation occurs due to the spectral mismatch in range caused by the different viewing geometries of master and slave. In principle, it can be avoided by properly filtering the range spectrum during the processing, at the expense of the range resolution and, consequently, the available number of looks (Reigber, 1999). The plot in the left column of Fig. 2 depicts the percentage of valid bandwidth lost due to the spectral shift, and its variation with the HoA for different local terrain slopes (α). For the simulation, an X-band system with a range bandwidth of 150 MHz is considered (i.e., the value used for the large-baseline TanDEM-X acquisitions), and the off-nadir angle is 44°. For the simulated parameters, a maximum bandwidth loss and, consequently, reduction of the number of looks of around 20 % can be expected.
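As an illustration of how a curve like the left panel of Fig. 2 can be reproduced, the sketch below evaluates the fractional bandwidth loss from the standard range spectral-shift relation; note that this relation, the bistatic baseline conversion, and the slant-range value are assumptions for the sketch rather than quantities quoted in the text.

```python
# Sketch of the bandwidth-loss estimate behind the left panel of Fig. 2,
# using the standard range spectral-shift relation (an assumption here):
# df = c * Bperp / (lambda * R * tan(theta - alpha)).
import numpy as np

C = 3e8  # speed of light (m/s)

def bandwidth_loss(hoa_m, slope_deg, wavelength=0.031, theta_deg=44.0,
                   r_slant=700e3, bandwidth=150e6):
    theta = np.radians(theta_deg)
    # Perpendicular baseline implied by the HoA (bistatic geometry assumed).
    b_perp = wavelength * r_slant * np.sin(theta) / hoa_m
    df = C * b_perp / (wavelength * r_slant *
                       np.tan(theta - np.radians(slope_deg)))
    return min(df / bandwidth, 1.0)  # fraction of range bandwidth lost

print(f"loss at HoA = 10 m, flat terrain: {bandwidth_loss(10, 0):.1%}")
```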
If, on the one hand, range filtering mitigates baseline decorrelation, on the other hand, for volume scatterers, i.e., scatterers with a vertical profile allowing electromagnetic wave penetration, a certain level of decorrelation cannot be avoided (Treuhaft and Siqueira, 2000). The scatterers at different heights within the resolution cell have different phase contributions, which are more or less alike according to the system HoA. The volume decorrelation is then given by the integration of all contributions, i.e.,

γ_vol = [∫_0^h_v f(z) exp(j k_z z) dz] / [∫_0^h_v f(z) dz],  with k_z = 2π B / (λ R sin θ),   (2)

where B is the baseline between master and slave acquisitions, λ is the wavelength, θ is the mean incidence angle, R is the slant range distance, h_v is the vertical extent of the volume and f(z) describes its vertical structure. On the right side of Fig. 2, the effect of volume decorrelation on the relative height accuracy of products generated with SAR interferometry is seen. Also for this plot an off-nadir angle of 44° is used. Moreover, an exponential model for f(z) with an extinction factor of 0.5 dB m^-1 and an underlying SNR decorrelation of 0.95 are considered, values consistent with the TanDEM-X scenario (Kugler et al., 2010; Krieger et al., 2013). The simulation shows that for volume extents of less than 2 m, the relative height accuracy decreases monotonically with the HoA. However, as the volume extent increases, the quality of the height measurement actually degrades with the increase of baseline (or decrease of HoA), i.e., large-baseline short-wavelength interferometers are not able to accurately retrieve the topography over such media. Moreover, as demonstrated in De Zan et al. (2012), Eq. (2) does not fully justify the decorrelation observed over vegetated areas in TanDEM-X products. In fact, the distribution of the scatterers within the resolution cell can further degrade the coherence, deeming large-baseline data over forested areas virtually unusable.
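Eq. (2) can be evaluated numerically for the exponential profile mentioned above; the sketch below is illustrative and ignores the SNR decorrelation term.

```python
# Numerical sketch of Eq. (2): volume decorrelation for an exponential
# vertical profile f(z) = exp(sigma_x * z) with a 0.5 dB/m extinction,
# evaluated through the vertical wavenumber kz = 2*pi / HoA.
import numpy as np

def gamma_volume(hoa_m, h_v, extinction_db_per_m=0.5):
    kz = 2 * np.pi / hoa_m
    sigma_x = extinction_db_per_m / (10 * np.log10(np.e))  # dB/m -> 1/m
    z = np.linspace(0.0, h_v, 4096)
    f = np.exp(sigma_x * z)  # scatterers weighted toward the volume top
    # Ratio of means equals the ratio of integrals on a uniform grid.
    return abs(np.mean(f * np.exp(1j * kz * z)) / np.mean(f))

for hv in (2, 10, 20):
    print(f"h_v = {hv:2d} m, HoA = 10 m -> |gamma_vol| = "
          f"{gamma_volume(10, hv):.2f}")
```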
A further challenge for the handling of large-baseline interferograms concerns their elevated fringe frequency. The small height of ambiguity causes large phase variations between neighboring pixels, which, associated with elevated noise, can prevent the retrieval of phase uniqueness. A poorly executed phase unwrapping may introduce large-scale errors, hindering the achievable absolute accuracy. Moreover, certain adopted phase unwrapping strategies, e.g., based on maximum-likelihood (ML) estimation, can introduce salt-and-pepper errors due to pixel-wise unwrapping errors, thus compromising the obtained relative vertical accuracy.
The unwrapping of data from the TanDEM-X science phase can profit from the use of the standard TanDEM-X product as a reference height model to flatten the phase. Nevertheless, areas of challenging terrain might still be affected by unwrapping errors. For path-following unwrapping algorithms, the increased decorrelation over volume scatterers can be particularly problematic, causing the phase unwrapping to diverge even when using a priori height information. Hence, it is interesting to employ unwrapping strategies which are able to properly circumvent low-coherence regions. The alternative proposed here is a dual-baseline extension of the region-growing algorithm first presented in Xu and Cumming (1999).
2.1 Dual-baseline region-growing phase unwrapping
Interferometric datasets acquired with different baselines or carriers have different heights of ambiguity. In principle, by properly combining all available interferograms, it is possible to eliminate or reduce the ambiguity of the interferometric phase.
In the past decades, many strategies have been developed having as a common goal the retrieval of the underlying height information from several wrapped phases. Examples of multi-channel algorithms include Ghiglia and Wahl (1994), Fornaro et al. (2006), Ferraioli et al. (2009), and Shabou et al. (2012). The first two approaches propose maximum likelihood (ML) frameworks for the retrieval of the height, while the third and fourth employ maximum a posteriori extensions in order to incorporate contextual information. ML approaches are able to provide good height estimates, but their performance can be severely impacted if only a small number of channels is available. The use of contextual information, e.g., in a maximum a posteriori (MAP) framework, can boost the performance, usually at the expense of computational cost (Ferraiuolo et al., 2009).
For standard TanDEM-X DEM products, an approach to correct unwrapping errors rather than perform a joint phase unwrapping is included in the operational processor (Lachaise et al., 2012; Fritz et al., 2011). Specifically, a dual-baseline configuration is employed using data from the two global coverages, with HoAs of 30 to 35 m and 45 to 50 m, respectively. The approach relies on the easier unwrapping of the differential interferogram, which has a larger HoA of around 100 m. Even if the unwrapped differential phase contains errors, available radargrammetry shifts are accurate enough for their identification and correction. Therefore, an error-free reference can be generated and used to correct the data from the individual coverages. The efficiency of the method for the small-HoA case considered in this paper is compromised, since the differential interferogram also has a small HoA and, consequently, cannot always be considered a reliable reference. Moreover, as discussed before, small-HoA data are less coherent, which also impairs the performance of the operational algorithm.
For the TanDEM-X large-baseline experiment, we propose an adapted dual-baseline region-growing algorithm first developed for airborne repeat-pass InSAR (Pinheiro et al., 2015). The approach aims to obtain unwrapped phases rather than a common height map, and adds the dual-baseline redundancy to the spatial growing of unwrapped regions (Xu and Cumming, 1999). Moreover, the quality parameters used to choose the unwrapping path are extended to include all available information.
Similarly to the single-baseline region-growing algorithm, the proposed approach is congruent, i.e., only multiples of 2π are corrected. The ambiguity number of a certain pixel, n_amb[p], is calculated based on the phase difference between the pixel and the already unwrapped neighbors in a predefined search window. Here, this search window is extended over a third dimension, i.e., it considers simultaneously the data of the two different baselines. The unwrapped phase values of a certain pixel in both datasets are predicted using three distinct strategies. The first estimation is inherited from the single-baseline region-growing strategy and considers only the local 2-D information, i.e.,

ψ_{1,2},a[p] = (Σ_k w_k · ψ^k_{1,2}[p]) / (Σ_k w_k),   (3)

where k corresponds to a certain unwrapping direction and w_k accounts for the reliability of its data. ψ^k_{1,2}[p] are the unwrapped values estimated from the kth direction, and are obtained assuming a local linear slope model, i.e.,

ψ^k_{1,2}[p] = ψ_{1,2}[p − 1] + Δ^k_{1,2},   (4)

where the index [p − 1] describes an immediate neighbor, and Δ^k_{1,2} represents the slope in k calculated considering only the already unwrapped samples. Note that this first estimation assumes a certain smoothness of the solution, avoiding or mitigating pixel-wise errors. A simple example of the first prediction strategy considering a 5 × 5 window and a single unwrapping direction is shown in the top row of Fig. 3. In the depiction, the already unwrapped pixels appear in grey. Note that the number and position of available pixels is always the same in both phases, since the growing is simultaneous. On the other hand, the estimations of ψ_{1,a}[p] and ψ_{2,a}[p] are performed independently.
The second prediction considers the data from one of the two acquisitions as the reference, a choice based on the statistics of the search window. For this reference, the prediction is calculated using Eqs. (3) and (4). The estimation of the unwrapped pixel value in the complementary dataset considers a flattening strategy, i.e., assuming dataset 1 as the reference,

ψ_{2},b[p] = K_scl · ψ_{1},b[p],   (5)

where K_scl is a scaling factor accounting for the different baselines. Note that, if the scaling factor K_scl is too large, e.g., if the interferometric baseline of one dataset is much larger than that of the other, the noise scaling might be dominant over the slope reduction, discouraging the flattening. This is accounted for in the dual-baseline scheme by properly weighting the estimation in Eq. (5) according to the expected phase statistics. The plot in the middle row of Fig. 3 illustrates the second prediction strategy considering that ψ_1 was assigned as reference. The dependence between the two estimates is emphasized by the blue colors. Note that no local information is considered for the computation of ψ_{2,b}[p].
Analogously to the previous case, the third prediction strategy also considers the phase with the better local statistics as the reference. Once again, the estimation of the unwrapped value for this dataset is extracted from Eqs. (3) and (4). Additionally, the local slopes of the complementary dataset are evaluated using the reference phase, i.e., for each unwrapping direction (again assuming dataset 1 as the reference)

Δ^k_2 = K_scl · Δ^k_1.   (6)

The unwrapped pixel value is then extracted from the average of all available directions, as in Eq. (3). If the phase statistics are known and the linear slope model applies, the third guess has an improved slope estimation for the more challenging phase. Moreover, it does not include the assumption of an identical topographic content for both datasets. The plot in the bottom row of Fig. 3 illustrates the third prediction strategy. As before, it is considered that ψ_1 was assigned as reference. In this case, the slope is only estimated for the reference phase and re-used for the complementary one, as emphasized by the blue colors.

Figure 3. A simple example of phase unwrapping considering a 5 × 5 window and a single unwrapping direction. (a) depicts the first prediction strategy, i.e., the estimation is performed independently on both datasets. (b) illustrates the second prediction strategy, i.e., only the reference dataset is locally unwrapped and the complementary phase is extracted directly from this unwrapped value. (c) illustrates the third prediction strategy, i.e., both phases are unwrapped using local information, but the slope estimation is extracted from the reference phase.

The use of proper weights is fundamental, and these can be obtained by locally evaluating the interferometric phase statistics. In the following, the computation of weights for the 5 × 5 search window case represented in Fig. 3 is discussed. For simplicity of notation, it is assumed that dataset 1 is set as reference in the particular search window, also in accordance with the example presented in Fig. 3.
For the first strategy, the expected variances of the estimated unwrapped values are tied to the slope predictions. Considering Eq. (4) and, additionally, assuming independence between neighboring samples, the inverse weight 1/w_{1,2},a follows directly from the phase statistics: it grows with the phase variances σ²_{1,2} and decreases with the number K of available unwrapping directions, where σ_{1,2} are the phase standard deviation values estimated from the interferometric coherences (Bamler and Hartl, 1998).
For the second prediction strategy, the variance of the reference estimate is obtained as in the first strategy, while the variance of the complementary estimate in Eq. (5) is amplified by the scaling, so that 1/w_{2},b grows with K²_scl. Additionally, a condition is imposed on the resulting weight: the second prediction is dismissed if the expected standard deviation σ_12 of the differential phase between the two (scaled) datasets is elevated, thereby avoiding noise scaling. Similarly to the first two cases, the variances corresponding to the third prediction strategy are calculated using the slope statistics, but here considering the reference phase only.

Combining the three strategies and their weights, the final prediction of the unwrapped value can be obtained as the weighted average

ψ_{1,2}[p] = (Σ_s w_{1,2},s · ψ_{1,2},s[p]) / (Σ_s w_{1,2},s),  s ∈ {a, b, c},

and the ambiguity number can then be estimated as

n_amb,{1,2}[p] = round[(ψ_{1,2}[p] − φ_{1,2}[p]) / (2π)],

where φ_{1,2}[p] are the wrapped phase values. The single-baseline region-growing unwrapping considers the variation of the prediction in different unwrapping directions as a reliability measurement for the growing. If the variance is larger than a pre-defined threshold, then the pixel is deemed invalid for the growing iteration, and will be re-evaluated in a later step. For the dual-baseline case, a new reliability metric can be introduced by checking the consistency between the three prediction strategies. In this way, a better unwrapping path choice is favored, and, consequently, a more robust unwrapping can be performed. Analytically, the following deviation is computed:

ε_p = max_{s,s'} |ψ_{1,2},s[p] − ψ_{1,2},s'[p]|,  s, s' ∈ {a, b, c}.   (16)

For a reliable unwrapping, ε_p has to be smaller than a fixed threshold t_εp. Note that if two individual predictions differ by more than π, their associated ambiguity numbers are distinct, i.e., at least one of them would cause an unwrapping error. To promote an easier unwrapping, t_εp should not exceed π, and should preferably be kept at a fraction of that during the first growing iteration (good results were obtained with t_εp = π/4). As the growing evolves, more pixels become available for the prediction, and situations of more challenging unwrapping can be solved. If a pixel fails all the reliability tests up to the final growing iteration, it is marked as invalid.
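A minimal sketch of this congruent correction at a single pixel is given below; it combines externally supplied strategy predictions by their weights and applies the consistency check of Eq. (16). The values in the example are illustrative, and the full algorithm additionally manages the growing order and per-dataset bookkeeping.

```python
# Sketch of the congruent correction at one pixel: combine the available
# predictions by their weights, check their mutual consistency against a
# threshold, and snap the wrapped phase by an integer number of 2*pi.
import numpy as np

def unwrap_pixel(phi_wrapped, predictions, weights, t_eps=np.pi / 4):
    preds = np.asarray(predictions, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Reliability check: all prediction strategies must agree.
    if np.ptp(preds) > t_eps:
        return None  # defer this pixel to a later growing iteration
    psi_hat = np.sum(w * preds) / np.sum(w)
    n_amb = int(np.round((psi_hat - phi_wrapped) / (2 * np.pi)))
    return phi_wrapped + 2 * np.pi * n_amb  # congruent unwrapped value

print(unwrap_pixel(0.3, [6.5, 6.6, 6.4], [1.0, 0.8, 0.9]))  # ~0.3 + 2*pi
```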
In order to validate the unwrapping approach, a case study over the region of Kaufbeuren, Germany, is considered. The imaged area is mainly characterized by grassland, agricultural fields and forest. For this experiment, two datasets acquired within a week were available, with corresponding heights of ambiguity of around 9 and 14 m. The interferometric coherences are shown in the first row of Fig. 4. The effect of volume decorrelation is clear over agricultural fields and forested areas, the latter presenting average coherence values of around 0.3 and 0.4 in the first and second interferograms, respectively. The second row of Fig. 4 shows, on the left, the residual unwrapped phase of the first dataset using the Statistical-cost/Network-flow Algorithm for Phase Unwrapping (SNAPHU) (Chen and Zebker, 2001). On the right, the dual-baseline region-growing result is given. Note that the single-baseline algorithm diverged once it reached the forest, due to the strong decorrelation. Consequently, its result contains large unwrapping errors (red and blue regions). On the other hand, the dual-baseline algorithm is able to profit from the weaker decorrelation of the second dataset, providing a better phase unwrapping. Finally, note that even considering the dual-baseline approach, localized residual phase unwrapping errors remain, e.g., in the urban areas, and should be corrected in a posterior step. It is also noteworthy that although the approach is described here considering a dual-baseline scenario, it is also applicable to dual-frequency configurations (Pinheiro et al., 2015).
Interferometric phase calibration
Phase calibration is essential to ensure the absolute accuracy of the height estimates. Moreover, when employing multichannel approaches, it is crucial that all phases are calibrated in relation to each other or to a common reference. An alternative is the use of the global TanDEM-X DEM to create a synthetic phase to be used as reference for the calibration.
Assuming that terrain changes are negligible or limited to a small portion of the image, the majority of the interferometric phase content after the removal of the synthetic phase corresponds to a global offset or trends due to, e.g., orbital errors (Lachaise and Fritz, 2016). A typical model for the phase error caused by orbital inaccuracies consists of a planar phase ramp, i.e.,

φ_err(x, y) = a + b·x + c·y,   (17)

where x and y represent the range and azimuth coordinates and [a, b, c] are the unknowns to be estimated. Since the procedure has to be carried out prior to the phase unwrapping, the parameters have to be estimated from the complex data. This can be accomplished by exploiting the relationship between the range and azimuth local frequencies (f_x, f_y) and the derivatives of the expected phase error, as discussed in Pinheiro et al. (2015) for airborne interferometry. In particular, considering the error model in Eq. (17), (f_x, f_y) are given by

f_x = b / (2π),  f_y = c / (2π).   (18)

An estimation of the frequencies (f_x, f_y) can be obtained by locating the maximum of the spectrum of small data blocks.
Given the estimated frequencies, the parameters b and c are retrieved by solving Eq. (18) in an average sense over all blocks. After the removal of the linearly varying components, the estimation of the global offset, i.e., the parameter a in Eq. (17), is straightforward.
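A sketch of the block-wise frequency estimation is given below; the block size, the use of a median as the averaging operator, and the synthetic test ramp are illustrative choices, not the parameters of the actual processor.

```python
# Sketch of the ramp estimation above: for each data block, locate the
# spectral peak of the complex interferogram to get local fringe
# frequencies (fx, fy), then average them into the slopes b and c.
import numpy as np

def estimate_ramp(interferogram, block=64):
    """interferogram: complex 2-D array with phase a + b*x + c*y."""
    fx_list, fy_list = [], []
    rows, cols = interferogram.shape
    for r0 in range(0, rows - block + 1, block):
        for c0 in range(0, cols - block + 1, block):
            spec = np.fft.fft2(interferogram[r0:r0 + block, c0:c0 + block])
            iy, ix = np.unravel_index(np.argmax(np.abs(spec)), spec.shape)
            fy_list.append(np.fft.fftfreq(block)[iy])
            fx_list.append(np.fft.fftfreq(block)[ix])
    b = 2 * np.pi * np.median(fx_list)  # Eq. (18): fx = b / (2*pi)
    c = 2 * np.pi * np.median(fy_list)  # (median as a robust average)
    return b, c

# Synthetic check: a ramp with fx = 4/64 and fy = -2/64 cycles per pixel.
y, x = np.mgrid[0:256, 0:256]
print(estimate_ramp(np.exp(1j * 2 * np.pi * (0.0625 * x - 0.03125 * y))))
```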
The experiments
As briefly mentioned in Sect. 2.1, the first experiment corresponds to data acquired over Kaufbeuren, Germany. The second experiment corresponds to a mountainous region in the Atacama Plateau, Argentina. Relevant acquisition and processing parameters are presented in Table 1.
For both experiments, the approach proposed in Sect. 2.1 was employed to jointly unwrap the interferometric phases. However, the DEMs were generated individually for each dataset, i.e., they correspond to a single coverage. On the other hand, the global TanDEM-X DEM is constructed from the average of two or more coverages. In fact, for the test site over Kaufbeuren, the global TanDEM-X DEM was constructed from 4-5 individual coverages, while for the Atacama case 2-3 coverages were used. Finally, note that each experimental DEM was constructed on a grid of 6 m × 6 m posting, i.e., half of the posting employed for the global TanDEM-X DEM generation.
Figure 5 shows shaded relief images of a region of interest containing agricultural fields and grassland. On the left, the global TanDEM-X DEM is presented. On the right, the large-baseline experimental DEM is shown. The increase in the level of detail is noticeable not only due to the improved vertical accuracy, but also due to the reduced posting. In the left column of Fig. 6, the histogram of the difference between a reference airborne laser (ALS) terrain model and the global TanDEM-X DEM is shown in black. The difference between the DEM corresponding to the largest baseline and the ALS model appears in red. For the comparison, an outlier removal was carried out to dismiss forest and urban areas, since their information is not contained in the laser terrain model. The corresponding standard deviations are around 32 cm for the global-DEM/ALS difference and 17 cm for the large-baseline-DEM/ALS difference, i.e., an improvement is observed even considering the reduced number of looks and coverages. The plots in the middle and right columns of Fig. 6 show two profiles through the DEMs, corresponding to grassland and forest, respectively. From the latter, the decrease in vertical accuracy in the large-baseline DEM due to volume decorrelation is clear, i.e., the DEM profile in red shows strong height variability caused by the superposition of multiple scatterers in the resolution cell. Note that global offsets were introduced in the profiles in order to improve the visualization (see legend). The first row of Fig. 7 shows relief images of a region of interest on the Atacama Plateau containing flat to moderate terrain, with a total height variation of around 330 m. In the second row, the region within the yellow rectangle is enlarged in order to better visualize the noise reduction. A general improvement of the experimental data in comparison to the standard one in terms of height noise is noticeable. Again, since the experimental DEM was constructed on a grid with 6 m × 6 m sampling, finer details can be resolved. Finally, note that missing data and unwrapping artifacts due to geometrical effects could not be corrected in the experimental DEM, since it was constructed using data from a single coverage (and viewing geometry). A few profiles are shown in Fig. 8, attesting to the good agreement between standard and experimental elevation models, and the improved accuracy of the latter. For this experiment, no external reference was available for the quality control and only a relative assessment could be performed. In this case, the large-baseline dataset with lower vertical accuracy (smaller baseline) was chosen as reference (h_ref), and the differences between the global TanDEM-X DEM (h_TDX) and this reference, and between the complementary experimental DEM (h_expTDX) and this reference, were evaluated. Assuming that the noise contributions in the elevation models are mutually independent, the standard deviations of the differences are given by

σ_(h_TDX − h_ref) = sqrt(σ²_h_TDX + σ²_h_ref)  and  σ_(h_expTDX − h_ref) = sqrt(σ²_h_expTDX + σ²_h_ref).

The difference involving the experimental DEM shows a standard deviation of around 27 cm, confirming the quality improvement of the experimental data.
Conclusions
This paper proposed a new dual-baseline region-growing approach for the phase unwrapping of the data acquired during the TanDEM-X science phase. A detailed analysis of large-baseline DEMs from two experiments has been carried out, attesting to the validity of the method. The coherence loss due to volume scattering prevents significant improvement over forested regions, as demonstrated with the Kaufbeuren experiment. Nevertheless, for regions covered by low vegetation and bare surfaces, an improvement of the standard deviation by a factor of two is achieved. Moreover, the large-baseline DEM was constructed on a finer grid, i.e., it contains 4 times more samples than the standard TanDEM-X product.
By means of the proposed approach, a future interferometric SAR mission can be designed with the goal of producing an updated topographic map with an accuracy comparable to that of airborne SAR systems. Last but not least, existing SAR missions can be enhanced by including a constellation of three or more receive-only SAR satellites having small and very large baselines. Such a multistatic SAR concept would allow the generation of a global, highly accurate DEM of the Earth's surface and the detection of topographic changes on the order of decimeters.
Figure 1 .
Figure 1. Pictorial representation of an interferometric SAR system composed of one master (in black) and two slaves (in red and purple). The larger the baseline, the greater the difference between the master and slave wavevectors and, consequently, the higher the sensitivity of the interferometric phase to increments in the z and y directions. When penetration occurs, e.g., over vegetated areas, multiple scatterers fall into the same resolution cell, decreasing the quality of the interferometric measurements.
Figure 2 .
Figure 2. On the left, the percentage of lost bandwidth due to the spectral shift as a function of the height of ambiguity (HoA) is given for different local terrain slopes (α). On the right, the effect of volume decorrelation on the relative height accuracy as a function of the HoA is shown for different volume extents.
Figure 4 .
Figure 4. Large-baseline experiment over the region of Kaufbeuren, Germany. (a, b) The coherence of the two available datasets, with heights of ambiguity of approximately 9 and 14 m. (c, d) On the left, the result of the single-baseline Statistical-Cost/Network-Flow Algorithm for Phase Unwrapping (SNAPHU); on the right, the result of the dual-baseline region-growing algorithm. The large phase errors introduced by the SNAPHU algorithm (red and blue areas) are clearly visible.
Figure 5 .
Figure 5. Large-baseline experiment over Kaufbeuren, Germany. The figures show shaded relief images of a region of interest containing agricultural fields and grassland. (a) The global TanDEM-X DEM (posting of 12 m). (b) The result of the large-baseline experiment (posting of 6 m).
Figure 6 .
Figure 6. (a) Histograms of the difference between the global DEM and the airborne laser (ALS) model (black), and between the DEM constructed from the dataset acquired with the largest baseline and the ALS model (red). (b, c) Profiles of the derived elevation models for different regions of interest in Kaufbeuren. Offsets of 2 and 4 m were introduced in the experimental TanDEM-X and ALS DEMs to improve the visualization.
Figure 7 .
Figure 7. Large-baseline experiment over the Atacama Plateau, Argentina. The figures in the first row show relief images of a region of interest containing flat to moderate terrain (total height variation of around 330 m). (a, c) The global TanDEM-X DEM (posting of 12 m). (b, d) The result of the large-baseline experiment (posting of 6 m). In the second row, the region within the yellow rectangle is enlarged in order to better visualize the noise reduction.
Figure 8 .
Figure 8. (a) Histograms of the difference between the global DEM and the assigned reference (black), and between the DEM constructed from the dataset acquired with the largest baseline and the assigned reference (red). (b, c) Profiles of the derived elevation models for two regions of interest in the Atacama Plateau. An offset of 10 m was introduced in the experimental DEM to improve the visualization.
"year": 2017,
"sha1": "138511a452a13564b23b5864be2bac4978f91e9d",
"oa_license": "CCBY",
"oa_url": "https://www.adv-radio-sci.net/15/231/2017/ars-15-231-2017.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "138511a452a13564b23b5864be2bac4978f91e9d",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
The Interactive Applications (IAs) in Academic Libraries: Challenges and Opportunities
Presentation tools for academic content are increasing in popularity among educators in Higher Education Institutions (HEIs) who want to share ideas and information in a more creative and interactive environment, using more effective tools that encourage involvement. Interactive Applications are becoming a lot more common and more integrated into our everyday activities, much like mobile apps. The features of the Fourth Industrial Revolution (4IR) began to emerge through Interactive Applications (IAs) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR). The development of information resources is no longer restricted to the realm of speculative fiction. By using AR, VR, and MR, academic libraries could already deliver a massive revolution in information retrieval. However, the biggest challenge that needs to be tackled perhaps remains how to attune these resources to their users so that the greatest possible benefit can be achieved in the light of accelerated technological development. This chapter uncovers the challenges and opportunities in using Interactive Application (IA) technologies and should be an eye-opener for academic libraries: IA technologies are important for transforming traditional resources into interactive resources.
Introduction
Interactive Applications (IAs) are becoming a lot more common and more integrated into our everyday activities. The ability of IAs to enhance what already exists is what makes them an ideal fit for libraries, educational institutions, museums, and similar institutions. They can be used for resource wayfinding, shelf-reading, service upgrades, technological integration, and community engagement. New technology services are making it easier than ever for libraries to create their own free or low-cost IA content without having to download a Software Development Kit (SDK) or work with complicated Application Programming Interface (API) code [1]. In addition, the development of the open science (OS) movement and its methods has supported scientific research data and has managed to make its information accessible to the scientific community and to the overall public. This wide global recognition of OS has increased the demand for making data more open.
Augmented reality (AR)
Augmented reality (AR) can be defined as "an enhanced version of the real physical world that is achieved through the use of digital visual elements, sound, or other sensory stimuli delivered via technology" [14]. Furthermore, AR is a system that incorporates three basic features: a combination of real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects (see Figure 2).
Virtual reality (VR)
Virtual reality is currently one of the most popular technologies; it allows users to experience things that may be difficult to encounter in the real world [3-13]. VR can be defined as "an artificial environment that is created with software and presented to the user in such a way that the user suspends belief and accepts it as a real environment" [14]. Furthermore, it is the computer-generated simulation of a three-dimensional image or environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a helmet with a screen inside or gloves fitted with sensors (see Figure 3) [15].
Mixed reality
Mixed Reality, also called merged reality, is a term coined by the technology giants Intel and Microsoft to describe their proprietary VR projects. MR is defined as "the merging of real and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact in real-time" [16]. Figure 4 indicates that mixed reality takes place neither purely in the physical world nor purely in the virtual world, but is a mix of reality and virtual reality [15]. Simply put, mixed reality is a hybrid of VR and AR and aims to offer the best of both worlds. For instance, while it uses a headset just like VR, with the user seeing through a translucent viewport or glass, it also projects visuals on top of our environment.

Figure 2. The concept of augmented reality [15].

Figure 3. The concept of virtual reality [15].
Interactive applications (IAs) in academic libraries in the digital age
For forty years and more, the future of the library has been questioned, and some have even predicted its end. This is attributed to an inability to deal with digital and social transformation: rendered unsustainable beyond the classic Gutenberg era and having reached a dead end, libraries "may disappear like the dinosaurs" [17]. One thing is certain, however: the development of modern information technologies, digital advances, and connectivity has changed the future trajectory of libraries, and libraries must offer advanced solutions if they want to survive. Integrating IAs, such as VR, AR, and MR, into higher education institutions and their libraries is essential to the advancement of learning in the digital age. Advanced technology-based learning platforms are already available, and in higher education their use is catching on. Since 2015, for instance, first-year medical students at Case Western Reserve University have been learning from home using HoloLens and the MR app HoloAnatomy, created by Case Western Reserve University and Cleveland Clinic in cooperation with Microsoft. Through 3D learning, medical students are learning about the human body in a way that would otherwise not be possible [18]. Similarly, San Diego State University Instructional Technology Services has used virtual immersive teaching and learning since 2017. Students' learning is enhanced through the opportunity to interact with 3D graphics in what appears to be a real-world environment. Instead of placing the student or a camera within a physical learning environment, virtual reality places the student in a simulated environment where senses such as vision, hearing, and touch foster learning.
In 2015, the University of North Texas (UNT) Media Library began offering access to VR and AR devices. This collection is growing as new technology, games, and devices evolve to support students, faculty, and staff interested in research and recreation. IAs in the UNT Media Library can be used for various forms of simulation and entertainment, for instance by using VR headsets such as the HTC Vive to let students walk around 3D visualizations or reconstructions of archeological sites. They also enable virtual museum visits, viewing artwork from different angles or up close, and viewing designs in 3D to gain a better understanding of how they work [19].
Two years later, in 2017, Harvard University Library opened an AR/VR studio to further the growth of ventures being built at Harvard using AR, MR, and VR technology, as well as to give students from across the university a space to experiment with and create projects and ventures in the virtual, augmented, and mixed reality spaces [20]. In the same year, North Carolina State University (NCSU) Libraries launched the Virtual Immersive Teaching and Learning (VITaL) initiative, providing a variety of VR, AR, MR, and 360°-video immersive tools for use across the NCSU pedagogical spectrum. Today, "VITaL serves as an incubator to enable experiences that would be out of reach, if not impossible in a traditional learning environment, including low-frequency, high-risk scenarios simulating life-threatening medical conditions, celestial events in outer space, and scientific phenomena occurring at the micro scale" [21]. Thus, many university libraries around the world have used these technologies to enhance their services and functions. Hence, the development of information resources is no longer restricted to the realm of speculative fiction. By using IAs, academic libraries and learning centres could already deliver a massive revolution in information retrieval. However, according to Rotolo, Hicks, & Martin (2015), the biggest challenge that needs to be tackled perhaps remains how to attune these resources to their users so that the greatest possible benefit can be achieved in the light of accelerated technological development. There is also a perceived lack of available research material regarding the impact of emerging technologies in real-life applications, since they are new and still developing [22]. This development of information resources has led the researchers to introduce a new term: Interactive Information Resources (IIR).
The definition of interactive information resources (IIR)
During the Digital Transformation Conference in the State of Kuwait in 2018, Ghuloum, Allamki, and Alhabashi presented a new concept of IIR, defined as "a type of electronic resource that is faster and more flexible in information retrieval than both the traditional and the electronic information resources due to the wearable form-devices and its complex algorithms. It is used to instantly map your information environment to create photorealistic, shareable, and collaborative 3-D digital models of the contents" [23].
The wearable devices and software incorporate digital and holographic data into the real, physical environment and streamline existing information-resource processes in a collaborative context to enhance and empower the experience of the beneficiaries. In other words, it is a way to simulate the content of traditional resources in an augmented electronic environment, where the new form of the content can be browsed interactively using the physical hand gestures of users. Information resources have gone through many changes over time, starting with Traditional Information Resources (TIR), then Electronic Information Resources (EIR), and finally Interactive Information Resources (IIR). Table 1 clarifies the comparison criteria between the different types of information resources.
Open science and IAs
The importance of resources is determined by contribution and sharing. In other words, the sharing of information is one of the basic principles of libraries; therefore, librarians and other information specialists must provide access to information in any medium or format for library users. They also encourage the concepts of open access, open source, and open licenses [24].
Throughout history, scientists have developed the best research by building on the work of others. The essential role of accessible information in the development of science and technology naturally gives rise to the Open Science (OS) movement, which aims at removing access barriers to scholarly communications (Open Access, OA), research data (Open Data), and the proprietary and other software tools that gather and process the data (Open Source) [25].
In the age of OS, research contributions are recognized in ways that reflect how technologies have changed [26]. For instance, scientific literature contains acknowledgments and comments that are a form of peer review of the cited work. Even software and datasets, and not only articles, are cited works.
OS is a movement to make scientific research, its data, and its dissemination accessible at all levels of an investigative society. It is also a transformation of how research is performed, documented, and distributed. The goal of OS is to make research outputs, methods, and software openly accessible. It can be defined as a sequence of procedures that, under the proper requirements, improves the quality of research by making results shared and accessible. One of the main qualities of OS is the sharing of research data among researchers. Therefore, the advancement of OS poses various strategic, theoretical, and technical disputes for the numerous scientific communities that carry out data-driven research [27].
Open science researchers
There are essential career-driven reasons to apply and promote OS methods. In addition, there are benefits that specifically involve those who perform the research, known as Early Career Researchers (ECRs). Generally, OS methods are expected to address concerns around duplication, are increasingly expected, and ECRs can gain from being involved early on [28]. Thus, the OS movement provides opportunities to access unrestricted high-quality data. During the past years, the world has witnessed outstanding technological developments, specifically in the field of artificial intelligence (AI), powered by access to big data and cloud computing [29,30].
OS methods are known to improve the quality and consistency of scientific work. As such methods become widely recognized, ECRs who adopt OS early should see this reflected positively in the quality of their research. An important aim of the OS movement is to make science more reliable and trustworthy. Sharing of procedures and data enables repetition, reproduction of analyses, and exploration. This increased exploration can also help guarantee good-quality data and analyses [28]. In addition, from an educational perspective, once code and data are presented, researchers can replicate the results presented in papers, which simplifies understanding of the study. Scientists and the public at all levels can benefit when results are replicated, as replication is crucial to OS and vital in increasing trustworthiness.
Furthermore, for researchers to promote collaboration among themselves, configurations must be established around OS. These configurations include a variety of software tools and publishing mechanisms. OS software such as web-based, version-controlled repositories like GitHub [31] can help with maintaining and sharing code. In other words, ECRs can create well-documented, robust code libraries that may be used again for upcoming studies and for educational purposes [28]. Therefore, new open tools can support robust data analysis in a manageable manner.
Placing more research and data in an unrestricted domain is fundamental to OS and increases ECRs' opportunities for recognition, interchange, collaboration, and development [33]. Moreover, published articles whose authors share open data receive more citations than articles that do not share data [32]; thus, ECRs can obtain citations for their work when it is deposited in unrestricted open repositories such as the OS Framework.
Early implementation of OS practices encourages and drives career advantages for researchers in the future. Because open data are open to everyone, OS can expedite wide contributions from ECRs and the public in general. Early OS implementation will therefore have equal benefits for science and for the public.
IAs platform as a tool for open science in academic libraries
Cloud-based technologies have become an important tool and are extensively used by scientists all around the world to perform their research. The European Open Science Cloud (EOSC) is supported by the European Commission as a resource for advocating OS and research. Cloud resources scale according to different usage patterns and reduce the cost for individual groups of scientists of sustaining their own infrastructure; they can therefore be delivered on demand [34].
In Europe, experts outlined the basic principles of the open science cloud for the European Open Science Cloud (EOSC) [35]: 1. Other electronic infrastructures and projects need to be combined with the EOSC by establishing an organized system of services and information that suits the centralized standard.
2. The term "open" is defined by the accessibility of services and data in agreement with an applicable and non-biased policy (although not all data and tools may be open, nor are all data and services necessarily free).
3. EOSC-hub should include academic fields in its cloud.
4. The term "cloud" should relate to worldwide access to scientific data, software, standards, expertise, and policy frameworks, and not to ICT infrastructure.
Most participants in the European Open Science Cloud (EOSC) agree that this cloud needs the following [36]: 1. The system of services should be provided by different suppliers.
2. Developer efforts should concentrate on the integration of cloud services and therefore build on current electronic infrastructure.
3. New services should be freely distributed to users, and new services and tools should be developed and incorporated when they are available.
4. The needs of users should be prioritized, making them a primary driver for the development of the European open science cloud.
To solve research difficulties, modern science needs support from computing communities; consequently, many European and national associations deal with cloud-based infrastructures. One of them is the European Network Infrastructure (EGI), an innovative computing engine designed to improve computing services for research. The EGI is primarily state-funded and has over 300 data centers and cloud providers all around the world. An open academic community is its basic principle; its mission is open results for research and research infrastructures, achieved by establishing and providing openness through combining digital capabilities, resources, and knowledge between communities and across national borders. The EGI architecture is organized in platforms [32]: 1. A basic Infrastructure Platform of managed, distributed infrastructure.
2. A platform managing the merging of cloud infrastructure and regional infrastructure.
3. An open data platform providing easy access to large and distributed data sets.
4. A platform for the exchange of information, collaboration, and community coordination.
5. Cooperative platforms and specialized services designed for specific academic communities.
The most common area of OS in which many academic and research institutions have actively engaged is Open Access (OA). OA to scientific peer-reviewed publications has led the trend of OS, which is now also expanding to original research data. Still, there are some difficulties with OS which currently impede the full realization of its benefits. In theory, OS includes the public dissemination of all aspects involved in scientific investigation, ranging from lab journals and research notes to publications, materials, data, methods/protocols, models, code, and software [37]. Although not all these aspects may be freely available in all cases, a commitment to enable the sharing of these resources reinforces the OS movement. OS is new to all academic institutions, even to one of the world's foremost research-performing academic institutions, UCL (University College London); nevertheless, this structure supports the leadership role of the Library [38].
• UCL's Open Science Policy Platform
Open Science is a growing area, and how universities and research institutions can engage with it is a challenge. A new role for the academic library has developed in sharing research and informative outputs. In other words, the academic library is now more than a supervisor and a cataloger of information. The Library provides access to data and information, which allows for the integration and creation of new knowledge. This new role of the academic library is also played in part by the research coordination office and places the Library directly on the frontlines of these developments at an institutional level, creating new methods for the delivery of OS, as the UCL (University College London) experience shows [38].
University College London (UCL) has initiated an OS Policy Platform, directed by the Pro-Vice-Provost (UCL Library Services). The purpose of the Platform is to look at the institutional approach and to identify areas which would benefit from alignment with the concept of OS. In terms of application, the Platform has identified six main sections for preliminary action and implementation, including: • Open Access and OA Publishing
Challenges and opportunities of IAs in academic libraries
To implement a new technology such as IAs in academic libraries, we need to understand the strengths and weaknesses of this type of technology. Hence, Figure 5 presents the challenges and opportunities of IAs in academic libraries.
• Security and Privacy:
Although the development of IAs provides great benefits, their practical use in academic libraries requires user acceptance. One issue with respect to user acceptance is addressing ethical concerns such as security and privacy. Privacy and security strategies need to consider different aspects, including the ability to gather user information, the use of IA information provided by third parties, the ability to share these systems, and the provision of security in the environment of these applications.
• Network Issues:
The network is an important part of the IA architecture in academic libraries, providing a connection between the users and the server via a configuration mechanism. Once the network IP is obtained, each user can communicate with others and with the server to access the IA package containing the virtual model [39]. Hence, network issues may be an obstacle to implementing IAs in academic libraries.
• Substantial Time Commitment:
Substantial time is required to use IA technology and the related hardware/software and to create services for academic library users. Many librarians may find this process too time-consuming and lacking in added value [40]. Regarding OS, there are theoretical reasons why OS methods could save time; nevertheless, these rarely come to fruition in the existing system. The additional requirements for research that uses OS methods often take more time, owing to traditional procedures such as archiving, documenting, and quality-controlling code and data [28].
• Lack of 3D Design Interface: The biggest barrier to wide adoption of immersive IAs in academic libraries is the lack of good user experience design. 3D interface design is difficult and expensive, and there are few people with the necessary design skills to overcome these issues [41].
• User Acceptance: Getting people to use IAs such as AR and VR may be more challenging than expected, and many elements play a role in user acceptance of IAs, ranging from an unobtrusive, fashionable appearance (gloves, helmets, etc.) to privacy concerns [42].
• High Cost: The market indicates that IA equipment and devices are costly, which makes it hard for the academic library to balance the amount of equipment against user demand. Furthermore, the IA industry, including AR, VR, and MR, is developing fast, which forces libraries to keep up to date with these changes. In addition, maintenance and repair costs can be another challenge for libraries, as some of them have limited budgets for acquiring this type of technology [40].
• Motion Sickness:
Several studies confirm that some people experience motion sickness in VR and MR: when they put on a headset and enter a virtual world, they feel dizzy or nauseous. This challenge makes decision-makers in academic libraries hesitant to acquire IAs [40,42].
Opportunities
• Enhance Library Services: IAs contribute to improving the quality of services provided by academic libraries to users. For instance, Indiana's Premier Urban Public Research University (IUPUI) believes in the power of transformation and is committed to providing educational opportunities that transform the lives of students, the community, and the changing world. The IUPUI University Library therefore provides a Virtual and Augmented Reality Lab (VR/AR Lab), funded through a generous federal grant under the Library Services and Technology Act. The VR/AR Lab includes two HTC Vive HMDs, an MSI VR One backpack PC, and one META 2 developer kit. The lab is available to all students, faculty, and staff of IU to experience and gain a better understanding of this emerging technology [43].
• Support Teaching Information Literacy: IAs such as VR, AR, and MR are valid additions to the toolkit that academic libraries may use to engage their users, not only with the latest technology but also with the goal of ensuring a proper approach to teaching information literacy. Users such as students will gain immeasurably from the enhanced delivery of information on a particular topic through IAs and from the multiple means by which a student can become proficient in basic information literacy skills, culminating in successful searches for information using every tool at their disposal to complete academic assignments [44].
• Effective Platform for the 21st Century: There are many opportunities for implementing IA technologies in today's and future academic libraries, which closely match the life and education styles of Generation Z users. This has led several academic institutions to acquire IA equipment and devices for their libraries, such as Harvard University Library, Cleveland State University Library, and others [40].
• Encourage Active Learning: IA technologies support the active learning style in academic libraries, which is becoming popular among current academics in most disciplines. For example, Microsoft is showing again how HoloLens can help engineering designers via a collaboration with the University of Cambridge's construction IT lab. "We have never been able to bring 3D models from buildings and bridges off our screens and onto the real structure," says Cambridge's Ionnis Brilakis. Using the HoloLens, however, engineers can overlay a design onto a real-world bridge or building (or vice-versa), making inspections simpler and safer [45].
• Attractive Platform for Users
Several studies indicate that integrating IAs such as AR, VR, and MR in academic libraries increases the number of users and makes the academic library more attractive. In fact, via IAs, library users can learn, play, share, and collaborate in an attractive environment [ref]. David King, Digital Services Manager at the Topeka & Shawnee County Public Library, says that "a lot of people they think of the library as the place to go to learn about emerging technology, [so] people will come to check out the new equipment maybe they can't afford, or they want to know or don't know what it is." [46].
Conclusion
IAs in academic libraries have become necessary and are considered a new norm for enhancing academic research activity, whether through traditional ways of research or through sharing research data via OS. For these activities to succeed, the academic library should recognize the challenges and opportunities of this type of technology before going through the process of implementation and adoption. Academic libraries need to establish policies, processes, and guidelines to promote IA and OS usage in the academic institution, and this begins by recognizing the challenges and promoting the opportunities. This transformation may not be easily made. However, taking the first step, grounded in an understanding of users' needs from this technology, would begin to change the whole academic environment.
Author details

Husain Ghuloum* and Zuwainah Al-lamki

Department of Library and Information Science, PAAET, Kuwait

*Address all correspondence to: hf.ghuloum@paaet.edu.kw
© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
"year": 2021,
"sha1": "98ba7dbca1726b8d95ef781673d804d2f28f18cb",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/75034",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "afbef00f8fd9aa53d7153f978aff319cb32d3a6a",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Cancer Stem Cell-Exosomes, Unexposed Player in Tumorigenicity
INTRODUCTION
Cancer is a well-known, yet poorly understood disease, in which a healthy tissue is morphed into a cancerous tissue through an intricate, multistep process. This polymorphism has been the focus of cancer research for many decades. Scientists have agreed on a set of traits that are thought to be shared by all cancer tissue types; these traits include enabling proliferation, evading growth suppressors, resisting cell death, replicative immortality, inducing angiogenesis, and initiating invasion and metastasis, along with other enabling characteristics (Hanahan and Weinberg, 2000; Hanahan and Weinberg, 2011). As researchers investigate the development and propagation of these traits, or, as they are called, the "hallmarks" of cancer, it became evident that cancer cell-derived extracellular vesicles (EVs), particularly exosomes, play a major role in almost all of them.
In the late 1940s, it was recognized that cells release spherical particles called EVs (Chargaff and West, 1946). Then, almost 40 years later, "exosomes" were acknowledged as a distinct sub-type of EVs (Trams et al., 1981). Up till now, it has been technically challenging to obtain a pure fraction of a specific EV sub-type, due to similarities shared among these vesicles. However, the International Society for Extracellular Vesicles has released a position statement on the minimal experimental requirements for the definition of EVs and their functions (MISEV2014, updated in 2018 as MISEV2018) (Lotvall et al., 2014; Théry et al., 2018). The MISEV distinction between the different EV sub-types relies on size, density, morphology, subcellular origin, and composition. This was done in order to make scientific reporting on EV biology more consistent and reliable. Most published literature on EVs, including the literature on the role of EVs in cancer, uses the term "exosomes" to refer to the EV sub-type under study. These studies include a section that describes the method of "exosome" isolation, and at least a couple of characterization techniques, to justify their nomenclature. Characterization of exosomes in published literature is often based on size and on verification of "exosome-enriched" protein content.
On the other hand, the concept of the "cancer stem cell" (CSC) only emerged in the 1990s (Lapidot et al., 1994), followed by much controversy and a number of proposed theories. Some say that CSCs arise as a result of normal stem cell mutation, while others suggest that CSCs arise as a result of a somatic cell acquiring erroneous stem cell characteristics, turning it into a cancerous stem cell that can differentiate into a heterogeneous population of cancer cells (Baccelli and Trumpp, 2012).
Nevertheless, CSCs are now recognized as a distinct population of cancer cells, and the CSC model is accepted as one of the two most popular models of cancer, the other being the "clonal evolution" model, which was described earlier, in the 1970s. That model postulated that cancer results from the accumulation of mutations in a given somatic cell population within a tissue, thus giving rise to a heterogeneous population of cancer cells (Nowell, 1976). As the CSC model becomes more popular, the role of CSCs, as a sub-type of cancer cells, within the tumor microenvironment has recently come to light, especially with advances in stem cell research during the last couple of decades. However, the role of CSC-exosomes, as a sub-type of cancer exosomes, remains in the shadows. Thus, in this article we aim to provide a standpoint on the possible role of CSC-exosomes, and on why they should be examined as a separate group of cancer cell-exosomes, based on published literature.
Exosomes, Devoted Messengers for Good or Bad
Exosomes originate from the inward budding of early endosomes, which later mature into multivesicular bodies (MVBs) (Doyle and Wang, 2019). Depending on their content, MVBs are either sent to the lysosome to be degraded or released into the extracellular space, forming what are called exosomes (Doyle and Wang, 2019). Cells of different tissue types were found to release exosomes in order to facilitate intercellular communication, thus initiating different biological actions. Cancer cells, and cancer-associated cells within the tumor micro-environment, were also found to release exosomes. This allows them to communicate their message to malignant and non-malignant cells and to initiate pathways that support tumor survival and propagation (Wortzel et al., 2019). Exosome-mediated intercellular communication is enabled through the "exosomal cargo", which includes functional proteins, micro-ribonucleic acids (miRNAs), and messenger RNAs (mRNAs) (Hessvik and Llorente, 2018). Exosomes deliver their cargo, which contains the encoded message, from the releasing cell into the recipient cell. There is a growing body of published literature on the role of cancer cell-exosomes in promoting cancer progression by enabling recipient cells to acquire the mentioned "hallmarks" of cancer. A number of studies have repeatedly shown that cancer cell-exosomes of different cancer types significantly increase cancer cell proliferation and inhibit apoptosis by activating various proposed cellular pathways (Qian et al., 2019). Studies have also shown that cancer cell-exosomes stimulate angiogenesis by stimulating endothelial cell viability, migration, and tube formation via the transfer of pro-angiogenic proteins and miRNAs (Yi et al., 2015; Bao et al., 2018; Lin et al., 2018; Yukawa et al., 2018). Likewise, it was reported that cancer cell-exosomes induce replicative immortality via the transfer of telomerase reverse transcriptase mRNA from the telomerase-active cancer cell to the telomerase-silenced somatic cell (Gutkin et al., 2016). As for metastasis, it is projected that cancer cells induce metastasis by packing their exosomes with promoters of the epithelial-mesenchymal transition (EMT) cascade, to initiate EMT in the neoplastic epithelial cells within the tumor microenvironment (Webber et al., 2015; Rahman et al., 2016; Xiao et al., 2016). It is also projected that cancer cells establish a "pre-metastatic" niche through their exosomes: cancer cells release their exosomes into the circulation, where they travel to the metastasis site (Costa-Silva et al., 2015; Liu et al., 2016; Syn et al., 2016). There, cancer cell-exosomes up-regulate pro-inflammatory molecules and vascular leakiness to mobilize the cells that constitute the pre-metastatic niche (Costa-Silva et al., 2015; Liu et al., 2016; Syn et al., 2016). Finally, it is projected that, while traveling through the circulation and during engraftment into the new tissue, cancer cell-exosomes support cancer cells by allowing them to escape immune surveillance (Mrizak et al., 2015; Muller et al., 2016; Song et al., 2016). Moreover, in addition to the classical hallmarks of cancer, a recent study reported that prostate cancer cell-exosomes play a role in transforming local prostate tissue stem cells into CSCs (Ngalame et al., 2018), while another study reported that glioma cell-exosomes induced a "tumor-like" phenotype in bone-marrow mesenchymal stem cells (BMMSCs).
This was reported based on increased proliferation, migration, and invasion rates of treated BMMSCs, in addition to alterations in BMMSC protein production, including the production of metastasis-related proteins.
Cancer Stem Cell, the Black Sheep of the Stem Cell Family
CSCs are cancer cells (found within tumors) that possess characteristics associated with normal stem cells, specifically self-renewal and the ability to differentiate and give rise to the different cell types found in a particular cancer specimen, i.e., CSCs are tumor-forming cells (Sun et al., 2018). CSCs can be identified using a set of unified surface markers (i.e., clusters of differentiation (CD): CD44, CD24, CD133), in addition to tissue-specific markers depending on cancer type (Phi et al., 2018). Within the tumor microenvironment, CSCs are rare and reside in highly specialized niches (Sreepadmanabh and Toley, 2018). The CSC niche is designed to maintain and protect the CSCs, allowing them to resist many current anticancer treatments (Prieto-Vila et al., 2017). The CSC niche also allows the cells to stay dormant for long periods of time before initiating local recurrent and/or distant metastatic tumors (Plaks et al., 2015). Thus, it is hypothesized that targeting the whole tumor will only slow down tumor expansion, while targeting the CSCs in particular will jeopardize tumor growth (Garcia-Mayea et al., 2019). At the same time, in regenerative medicine research, it was reported that stem cells and progenitor cells exert their tissue regeneration effects through the release of paracrine factors, mainly exosomes. Studies are consistently showing that injecting the cell-derived exosomes alone is enough to induce the same regenerative effect as the "whole-cell" transplant approach. For example, it was reported that exosomes derived from embryonic stem cells (Khan et al., 2015), BMMSCs (Zou et al., 2019), and cardiac progenitor cells (Kervadec et al., 2016) all mimic the benefits of injecting their parent cells in chronic heart failure and myocardial infarction animal models. Thus, it is logical to assume that CSCs function through the same mechanism as other cancer cells and non-cancer stem cells. We can project that CSCs fulfill their "stemness duties" through the release of paracrine factors, with exosomes as a key player.
What Is Proposed?
As discussed above, cancer cell-exosomes are crucial for tumor initiation, maintenance, and propagation. However, published literature on this subject often does not describe the sub-type of cancer cells from which these exosomes were derived. It is well established by now that cancer cell-exosomes mediate cell-to-cell communication within the tumor microenvironment to support and promote tumorigenesis. It is also well established that any alteration to the parent cell alters exosome secretion and content, which in turn alters the message. For example, when cancer cells were subjected to hypoxia prior to exosome isolation, to reflect the tumor's hypoxic environment, these exosomes significantly increased the migration and invasion of cancer cells (Li et al., 2016), and tube formation by endothelial cells (Kucharzewska et al., 2013; Hsu et al., 2017), compared with exosomes derived from normoxic cancer cells. Therefore, it could be hypothesized that the sub-population of cancer cells, CSCs, produces exosomes with unique characteristics, and thus functions. Currently, there are only a few reports on "CSC-derived exosomes" and their role in cancer propagation, compared with "non-stem cancer cell-derived exosomes" (Table 1). One of the first studies to address this issue reported that the "macrovesicles" that had the in vitro and in vivo angiogenic effect in renal cancer were those derived from the CD105+ cancer cell sub-population (Grange et al., 2011). Later, one study carried out a miRNA content comparison and reported that prostate CSC-derived exosomes do in fact have a different miRNA content compared with non-stem prostate cancer cell-derived exosomes (Sánchez et al., 2016). A following study reported that glioma stem cell-derived exosomes promoted angiogenesis by containing particularly high levels of miRNA-21, which up-regulates the vascular endothelial growth factor (VEGF). Another study identified 11 miRNAs that are characteristic of gastric CSC-derived exosomes and suggested that measurement of these miRNAs in patient serum could be used as a predictor of cancer metastasis. Other recent CSC-exosome investigations focusing on their role in metastasis reported that CSC-derived exosomes promote metastasis by promoting EMT in renal cell carcinoma and in thyroid cancer (Hardin et al., 2018), via the transfer of miRNA-19b-3p and non-coding RNAs, respectively. Others reported on the role of CSC-exosomes in creating a pro-tumoral microenvironment. For example, it was reported that glioblastoma stem cell-derived exosomes direct monocytes toward the immune-suppressive "M2" phenotype through the signal transducer and activator of transcription-3 (STAT3) pathway, creating an immunosuppressive microenvironment (Gabrusiewicz et al., 2018), while colorectal cancer stem cell-derived exosomes promote a pro-tumoral phenotype in neutrophils by increasing interleukin (IL)-1b expression (Hwang et al., 2019). Since tumor-host cross-talk is believed to be initiated by CSCs, and communication between cancer cells and other cells is conducted through exosomes, it is of great importance to take a closer look at the role of CSC-exosomes and their involvement in tumor aggressiveness; to examine their miRNA content, compared with non-stem cancer cell-exosomes, in order to postulate mechanisms of action; and, finally, to develop a cancer management strategy that targets CSCs and involves blockage of the CSC-exosome release channels.
DISCUSSION
CSCs generate tumors through the stem cell processes of self-renewal and differentiation into multiple malignant cell types. Based on advances in cell signaling biology, it is expected that these CSCs function through their exosomes. The term "exosome" was used in this article because published literature describing the role of EVs in cancer often refers to the EV sub-type being examined as exosomes. These publications offer reasonable evidence, via various methods of characterization, that the EV sub-type being examined is in fact exosomes. Other sub-types of EVs, i.e., ectosomes, microvesicle particles, and apoptotic bodies, could be released by cancer cells/CSCs and could play a role as well; however, there is no adequate reporting on this in the literature. Therefore, based on findings on the role of cancer cell-exosomes and the role of CSCs in cancer, the role of "CSC-exosomes" should be investigated as a separate entity. Such studies will encounter significant technical and quality-control issues related to the harvesting of a pure CSC population and the subsequent yield of a pure CSC-exosome fraction. Nevertheless, the knowledge provided by these studies will be crucial in developing more effective approaches to control the progression and metastasis of tumors and to prevent recurrence.
AUTHOR CONTRIBUTIONS
BA-S conceptualized and wrote the article. Other authors were involved in manuscript review and editing.
"year": 2020,
"sha1": "8cc4244aa4b618adfc3d087955e2ceac3c7ca200",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2020.00384/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cfc27845209587f251702a7c913d13049a891e27",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
A significant hardening and rising shape detected in the MeV/GeV nuFnu spectrum from the recently-discovered very-high-energy blazar S4 0954+65 during the bright optical flare in 2015 February
We report on Fermi Large Area Telescope (LAT) and multi-wavelength results on the recently-discovered very-high-energy (VHE, $E>$ 100 GeV) blazar S4 0954+65 ($z=0.368$) during an exceptionally bright optical flare in 2015 February. During the time period (2015 February, 13/14, or MJD 57067) when the MAGIC telescope detected VHE $\gamma$-ray emission from the source, the Fermi-LAT data indicated a significant spectral hardening at GeV energies, with a power-law photon index of $1.8 \pm 0.1$---compared with the 3FGL value (averaged over four years of observation) of $2.34 \pm 0.04$. In contrast, Swift/XRT data showed a softening of the X-ray spectrum, with a photon index of $1.72 \pm 0.08$ (compared with $1.38 \pm 0.03$ averaged during the flare from MJD 57066 to 57077), possibly indicating a modest contribution of synchrotron photons by the highest-energy electrons superposed on the inverse Compton component. Fitting of the quasi-simultaneous ($<1$ day) broadband spectrum with a one-zone synchrotron plus inverse-Compton model revealed that GeV/TeV emission could be produced by inverse-Compton scattering of external photons from the dust torus. We emphasize that a flaring blazar showing high flux of $\gtrsim 1.0 \times 10^{-6}$ photons cm$^{-2}$ s$^{-1}$ ($E>$ 100 MeV) and a hard spectral index of $\Gamma_{\rm GeV}<2.0$ detected by Fermi-LAT on daily time scales is a promising target for TeV follow-up by ground-based Cherenkov telescopes to discover high-redshift blazars, investigate their temporal variability and spectral features in the VHE band, and also constrain the intensity of the extragalactic background light.
Introduction
In the diverse family of active galactic nuclei (AGN), blazars stand out due to their extreme variability in all wavebands and over a broad range of timescales. Their predominantly non-thermal emission arises in relativistic jets that are pointed close to our line of sight. The resulting Doppler boosting is responsible for their short-timescale variability, apart from boosting their flux and creating the illusion of superluminal motion (e.g., Urry & Padovani 1995). This broadband variability presents both a challenge and an opportunity. On the one hand, the variability makes it difficult to construct a physical model of high-energy emission from blazars. On the other hand, the variability also provides important constraints on the many open questions about the origin of blazar emission. With continuous monitoring of the sky by the Fermi Gamma-ray Space Telescope, and observations by X-ray satellites as well as ground-based telescopes in the radio through TeV bands, we are able to make near-simultaneous observations that contribute to addressing these questions (e.g., Abdo et al. 2011a;Abdo et al. 2011b).
Blazars are typically divided into BL Lac objects and flat spectrum radio quasars (FSRQs), with the formal distinction being the absence or presence, respectively, of emission lines with a rest-frame equivalent width ≥ 5 Å (e.g., Marcha et al. 1996). S4 0954+65 is a blazar at a redshift z = 0.368 (Stickel et al. 1993; Lawrence et al. 1996). Although a recent paper by Landoni et al. (2015) reported a more distant lower limit to the redshift at z ≥ 0.45, our preliminary result for the source spectrum taken with the Telescopio Nazionale Galileo 3.58 m telescope confirms z = 0.368 (Becerra Gonzalez et al. in prep.). This object clearly meets the formal definition of a BL Lac (see Table 35 and Lawrence et al. 1996). However, its archival (non-simultaneous) multi-wavelength spectral energy distribution (SED) hints at the presence of a "blue bump" more typical of an FSRQ. A past X-ray observation by ROSAT (e.g., Comastri et al. 1997) shows a flatter energy distribution than is typical for a radio-selected BL Lac, leading to the suggestion that S4 0954+65 may be a transition object with properties that lie in between the BL Lac and FSRQ classes. This idea has also been explored using a classification (FSRQ vs. BL Lac object) based on the luminosity of the broad-line region in Eddington units, rather than the emission lines' equivalent width.
A powerful γ-ray flare was detected from S4 0954+65 by the Fermi Large Area Telescope (LAT) on 2014 November 25 (Krauss 2014), when its daily averaged γ-ray flux (E > 100 MeV) was about 32 times its average flux in the Fermi-LAT third source catalog (3FGL catalog, see Acero et al. 2015). In late January 2015, Carrasco et al. (2015a) reported an increase by a factor of three in its near-infrared (NIR) emission. This heralded the beginning of unprecedented optical/NIR activity in this object, with its V-band magnitude brightening by two magnitudes (Stanek et al. 2015), continued flaring in the NIR band (Carrasco et al. 2015b), and its brightest-ever optical state being reported (Spiridonova et al. 2015a; Spiridonova et al. 2015b). Rapid intra-night variability in the R-band was detected on 11-15 February 2015 (Bachev 2015). An increase in the degree of optical polarization in the R-band was also observed, from 14% on 18 February 2015 to 25% on 19 February 2015 (Jorstad 2015).
On 2015 February 13/14 (MJD 57067) the MAGIC telescopes detected very-high-energy (VHE; E > 100 GeV) emission from S4 0954+65 (Mirzoyan 2015b). This coincided with the detection of an unusually hard γ-ray (E > 0.1 GeV) spectrum by Fermi-LAT along with an elevated γ-ray flux (Ojha et al. 2015). In this paper, we make a detailed study of the evolution of the γ-ray spectrum and its relationship to activity in the X-ray and optical bands. We first present our observations in §2. Then we show the results in §3, and discuss them in §4. Throughout this paper, we use the cosmology $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m} = 0.3$, and $\Omega_\Lambda = 0.7$ (Komatsu et al. 2009). Note that S4 0954+65 is listed in the second Fermi-LAT catalog of high-energy sources (2FHL catalog, see Ackermann et al. 2016) as 2FHL J0958.3+6535.
Fermi-LAT
The LAT on board the Fermi satellite monitors the entire γ-ray sky every 3 hours in the energy range from 20 MeV to > 300 GeV (Atwood et al. 2009). We selected Pass 7 reprocessed source-class events, from 4 August 2008 to 30 April 2015, within a 10 deg circular region centered at the location of S4 0954+65. The analysis was performed with the ScienceTools software package version v9r33p0 using the instrument response function P7REP_SOURCE_V15 (Ackermann et al. 2012a). A zenith angle cut of < 100° was applied to reduce the contamination from the Earth limb. The appropriate Galactic diffuse emission model (gll_iem_v05_rev.fit) and isotropic component (iso_source_v05.txt), both available at http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html, were used. The normalizations of both components in the background model were allowed to vary freely during the spectral fitting. The unbinned maximum-likelihood method implemented in the gtlike tool was used. For a first likelihood fit, the model included all the 3FGL (Acero et al. 2015) sources within a 15° circular region around S4 0954+65. Spectral indices and fluxes were left free for the fit for sources within 10°, while sources from 10° to 15° were frozen to the catalog values. The significance of each source was evaluated using the test statistic TS = 2(log L₁ − log L₀), where L is the likelihood of the data given the model with (L₁) or without (L₀) the source, and TS is interpreted as a detection significance of ∼√TS σ (e.g., Mattox et al. 1996). A maximum-likelihood analysis was performed with several iterations to remove sources not contributing to the Region of Interest (low TS values, up to a maximum of TS = 10). The light curve was calculated in 30, 7, and 1-day time bins, modeling the source with a single power-law spectrum (as described in the 3FGL catalog). Both the flux and spectral index of S4 0954+65 were left free during the light curve calculation, while the rest of the point sources were fixed and only the diffuse Galactic and isotropic models were allowed to vary.
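As a quick numerical illustration of the TS-to-significance conversion described above (a minimal sketch only; the log-likelihood values below are invented, not taken from this analysis):

```python
import math

def detection_significance(logL1: float, logL0: float):
    """TS = 2(log L1 - log L0); sqrt(TS) approximates the Gaussian-equivalent
    detection significance in sigma (e.g., Mattox et al. 1996)."""
    ts = 2.0 * (logL1 - logL0)
    return ts, math.sqrt(max(ts, 0.0))

# Hypothetical fit: including the source improves the log-likelihood by 50
ts, sigma = detection_significance(logL1=-10000.0, logL0=-10050.0)
print(f"TS = {ts:.1f} -> ~{sigma:.1f} sigma")  # TS = 100.0 -> ~10.0 sigma
```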
The LAT SEDs were calculated for four time intervals which show different characteristics in the multi-wavelength light curve (see §3 for details). In all cases the spectrum is well fit by a single power law (PL). A curvature test was performed on the SEDs in each time interval assuming a log-parabolic (LP) fit for comparison with the power law. As defined in Nolan et al. (2012), the curvature test statistic can be expressed as TS_curve = TS_LP − TS_PL. We do not find significant curvature in any of the above periods.
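The two competing spectral shapes can be written down compactly; the sketch below (with made-up TS values and a nominal pivot energy E0) shows the two models and the curvature statistic:

```python
import numpy as np

def power_law(E, N0, gamma, E0=1.0):
    """dN/dE for a single power law with pivot energy E0 (same units as E)."""
    return N0 * (E / E0) ** (-gamma)

def log_parabola(E, N0, alpha, beta, E0=1.0):
    """dN/dE for a log-parabola; reduces to a power law when beta = 0."""
    return N0 * (E / E0) ** (-(alpha + beta * np.log(E / E0)))

def ts_curve(ts_lp, ts_pl):
    """Curvature statistic TS_curve = TS_LP - TS_PL, as defined in the text."""
    return ts_lp - ts_pl

# Hypothetical values: such a small TS_curve would not indicate curvature
print(ts_curve(ts_lp=410.0, ts_pl=405.0))  # 5.0
```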
X-ray
The Swift X-Ray Telescope (XRT, Burrows et al. 2005) has observed S4 0954+65 many times since July 2006, and all the XRT data presented here were taken in photon counting (PC) mode. Data reduction and calibration were performed with HEASoft v6.4 standard tools. We selected events of 0.3-8 keV and grades 0-12 for analysis. Source spectra were binned to include a minimum of 20 counts in each bin to allow χ² minimization fitting. Response files were generated with xrtmkarf, with corrections applied for point-spread function losses and CCD defects. For spectral analysis we used the XSPEC software package version 12.3.0.
We fit the Swift/XRT data by assuming an absorbed single power-law model where the hydrogen column density in the direction of S4 0954+65 is fixed to the Galactic value of N_H = 4.8 × 10²⁰ cm⁻², which is estimated from the Leiden/Argentine/Bonn (LAB) Survey of Galactic HI (Kalberla et al. 2005). All the data were well represented by the absorbed power-law model except that taken on MJD 57077 (obsID: 00033530018), for which a broken power-law model is applied (see §3 for details).
Optical and ultraviolet photometry
We analyzed optical and ultraviolet data in V, B, U, UVW1, UVM2 and UVW2 bands taken with the Ultraviolet and Optical Telescope (UVOT, Roming et al. 2005) onboard Swift. The UVOT data were reduced following the standard procedure for CCD photometry. Source counts were extracted from a circular region of 5 arcsec radius, while background counts were measured from an annulus centered on the target position with inner and outer radii of 27.5 and 35 arcsec, respectively. The net source counts were converted to flux densities using the standard zero points (Poole et al. 2008).
The fluxes were corrected for Galactic extinction (Schlegel et al. 1998) to obtain the intrinsic fluxes. The source was also observed in the optical R-band as part of the Tuorla blazar monitoring program (Takalo et al. 2008) and with the Nordic Optical Telescope (NOT) in SDSS (Sloan Digital Sky Survey) u and z bands. The data were reduced (de-biasing, flat-field correction) using standard IRAF routines. Using aperture photometry with a typical aperture radius of 1.0-1.5 arcsec, we measured the source magnitudes against stars 3 and 6 in Raiteri et al. (1999).

Note here that the 4-year averaged power-law index of the LAT spectrum is 2.38 ± 0.04 (Acero et al. 2015) and that a similarly hard GeV spectrum was observed on MJD 57059. Interestingly, the quasi-simultaneous (< 1 day) Swift/XRT spectrum showed a clear softening (Γ_x = 1.72 ± 0.08) compared to that measured on the other days during the high state shown here (Γ_x = 1.38 ± 0.03, see Table 1). The simultaneous R-band flux was almost at its brightest level during this outburst.
Note also that Fermi-LAT detected a 51 GeV photon from the close vicinity of S4 0954+65 on MJD 57066.98, exactly simultaneous with the time of the MAGIC VHE detection. The angular separation between this 51 GeV event and the position of S4 0954+65 was only 0.013°, and the probability that the event belongs to S4 0954+65 was > 99% based on the gtsrcprob tool available in the ScienceTools. The quasi-simultaneous SED on MJD 57066.5-57067.5 (period A), which is selected to include the MAGIC VHE detection time, is shown in the upper-left panel of Fig. 3.
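To see how small a 0.013° separation is, one can compute great-circle separations directly; this sketch uses approximate J2000 coordinates for S4 0954+65 (~149.70°, +65.57°) and an invented nearby photon position:

```python
import math

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (Vincenty formula, which stays
    accurate at small angles); all inputs in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dra = ra2 - ra1
    num = math.hypot(
        math.cos(dec2) * math.sin(dra),
        math.cos(dec1) * math.sin(dec2)
        - math.sin(dec1) * math.cos(dec2) * math.cos(dra),
    )
    den = (math.sin(dec1) * math.sin(dec2)
           + math.cos(dec1) * math.cos(dec2) * math.cos(dra))
    return math.degrees(math.atan2(num, den))

# Illustrative photon position ~0.013 deg away from the source
print(f"{angular_separation_deg(149.70, 65.57, 149.705, 65.583):.3f} deg")
```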
On the next day (MJD 57068-57069, period B), the 0.1-300 GeV flux slightly decreased and the LAT spectrum became softer (Γ = 2.3 ± 0.2), while the X-ray spectrum became harder. In addition, the optical flux showed a sharp decrease. On MJD 57069-57070 (period C), the GeV γ-ray, X-ray and optical fluxes increased again. The Fermi-LAT and Swift/XRT spectra were intermediate, with power-law indices of Γ_GeV = 2.0 ± 0.1 and Γ_x = 1.41 ± 0.08, respectively. After that, fluxes in the MeV/GeV, X-ray, and optical bands showed a gradual decrease with an almost unchanged spectral shape, but on MJD 57077-57078 (period D) the X-ray spectrum showed the hardest index during this outburst. Note here that the limited statistics of Fermi-LAT make it hard to draw strong conclusions on the evolution of the γ-ray spectral index between periods B and D. We checked the XRT data in period D and found that larger systematic residuals are present at the lower and higher energies, and hence we fitted the data using a broken power-law model. The broken power-law model is statistically favored over a single power law (p-value of 5.1 × 10⁻⁴ from an F-test). Ghisellini et al. (2011) also claimed, from Swift/XRT data accumulated over 2006 to 2010, that a broken power law is a better representation of the X-ray spectrum of S4 0954+65 (see Table 2 of their paper).
To derive physical quantities at the emission site, the broadband spectra for the four selected periods are modeled by a one-zone synchrotron plus inverse-Compton model (Finke et al. 2008; Dermer et al. 2009). The electron distribution is assumed to have a broken power-law shape, where γ′_min, γ′_max, and γ′_brk are the minimum, maximum, and break electron Lorentz factors, respectively, and s₁ and s₂ are the power-law indices of the electron distribution below and above the break electron Lorentz factor γ′_brk. Primed quantities indicate those measured in the jet comoving frame. The model curves and derived parameter values are shown in Fig. 3 and Table 2, respectively. The SEDs were well represented by changing only the electron distribution and the magnetic field (see also e.g., Dutka et al. 2013; Ackermann et al. 2014). Note that the spectral break in the electron distribution cannot be understood in terms of radiative cooling, because s₂ − s₁ does not correspond to the canonical value of 1.0 (e.g., Longair 2011). We found that the γ rays can be modeled by an external Compton (EC) component, rather than synchrotron self-Compton (SSC), despite the BL Lac classification of this object (Mukherjee et al. 1995). We modeled the seed photon source for this process as a monochromatic isotropic external radiation field with energy density u_seed = 2.4 × 10⁻⁴ erg cm⁻³ and dimensionless energy ε₀ = 7.5 × 10⁻⁷ in m_e c² units. This corresponds to a dust temperature of T_dust = 1500 K and, for a disk luminosity of 3.0 × 10⁴³ erg s⁻¹, using equation (1) of Nenkova et al. (2008), a dust radius of 2.1 × 10¹⁷ cm. Note that, as shown in Fig. 3, the SSC component is lower than the EC one by two orders of magnitude under the parameter values tabulated in Table 2.
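The quoted dust radius follows from the sublimation-radius scaling of Nenkova et al. (2008); a minimal sketch reproducing the 2.1 × 10¹⁷ cm value from the disk luminosity and dust temperature given above:

```python
PC_IN_CM = 3.086e18  # 1 parsec in cm

def dust_radius_cm(L_disk_erg_s, T_dust_K):
    """R_d ~ 0.4 (L / 1e45 erg/s)^0.5 (1500 K / T)^2.6 pc
    (Nenkova et al. 2008, equation 1)."""
    r_pc = 0.4 * (L_disk_erg_s / 1e45) ** 0.5 * (1500.0 / T_dust_K) ** 2.6
    return r_pc * PC_IN_CM

print(f"{dust_radius_cm(3.0e43, 1500.0):.2e} cm")  # ~2.1e17 cm
```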
Note also that if we instead assume that SSC emission is responsible for the X-ray and MeV/GeV γ-ray emission, the required magnetic field becomes very small (B ∼ 1 mG) because of the relatively large Compton dominance of L_IC/L_sync ∼ 10. Since this is much weaker than the typical magnetic field derived from blazar SED modeling (∼ 1 Gauss, see e.g., Ghisellini et al. (2010)), our modeling under the EC assumption seems reasonable. Another option would be that the X-ray and MeV/GeV emissions are from SSC and EC components, respectively. However, given the lack of evidence of a spectral break between the X-ray and MeV/GeV data points, it is simpler to assume that only a single EC component is responsible for both. In this regard, more precise flux measurements are needed to determine whether our assumption is valid or an alternative SSC+EC modeling is required.
During the GeV spectral hardening (MJD 57066.5-57067.5, period A), the break energy of the electron distribution γ′_brk increased by about one order of magnitude (from 6 × 10² up to 8 × 10³) due to the rising shape of the LAT νFν spectrum, indicating a rapid injection of high-energy electrons with γ′ ∼ 10³-10⁴. The observed softer X-ray spectrum in period A would result from the modest contribution of synchrotron photons emitted by the highest energy electrons instead of the inverse-Compton X-rays produced by the lowest energy electrons (see upper left panel of Fig. 3). We note that the spectral break at E_break = 2.66 (+0.70/−0.48) keV seen in period D can be modeled by setting the minimum Lorentz factor of the electron distribution to 1.5. Note also that a similar X-ray break seems to be present in the X-ray data during period B (MJD 57068-57069), which is again reasonably reproduced by the model (see Fig. 3 and Table 2). Therefore, we stress that X-ray spectroscopy is a powerful tool to constrain the minimum electron Lorentz factor γ′_min of the emitting electron distribution (see also e.g., Celotti & Ghisellini (2008)). We also point out that the observed spectral break is a good indication that the EC component indeed dominates over SSC in the X-ray band, because it is difficult to produce such a break by assuming SSC.
From SED modeling, we also found that the jet power in the magnetic field (P_B) dominates over the jet power in emitting electrons (P_e) by a factor of 10-100 (see Table 2). Here we define the jet power components as in Finke et al. (2008): P_i = 2πR′²Γ²βcU′_i (i = B, e), where Γ = (1 − β²)^(−1/2) is the bulk Lorentz factor of the emitting blob, U′_B = B²/8π and U′_e = (m_e c²/V′) ∫ dγ′ γ′ N′_e(γ′) are the energy densities of the magnetic field and electrons, respectively, and V′ = (4/3)πR′³ is the volume of the emitting blob. Note that this definition assumes a two-sided jet. This Poynting-flux dominance is robust under our EC assumption and not unprecedented, considering there are several blazars showing a similar feature of P_B > 10P_e, such as 0234+285 and 0528+134 (see Table A2 of Celotti & Ghisellini (2008)). There is some evidence that cold protons in the jet (with power P_p = 2πR′²Γ²βc(m_p c²/V′)N′_p, where N′_p is the number of cold protons and N′_p = N′_e is assumed; see e.g., Ghisellini et al. 2014) can carry much larger (as large as 100 times) power than the emitting electrons (e.g., Sikora & Madejski 2000; Ghisellini et al. 2014; Tanaka et al. 2015). Hence, it is possible in the context of the models presented here that P_B ∼ P_e + P_p.

This paper serves as a case study for the capability of detecting new VHE sources based upon follow-up of flaring LAT sources showing spectral hardening (i.e., fluxes above 1.0 × 10⁻⁶ photons cm⁻² s⁻¹ above 100 MeV and Γ_GeV < 2.0). The capabilities of the LAT (specifically the daily all-sky monitoring and the improved high-energy performance from Pass 8 (Atwood et al. 2013)) are well suited to these types of efforts and we can expect many such discoveries in the next few years. In fact, several spectral hardening events have been seen from Fermi-LAT FSRQs (e.g., Tanaka et al. 2011; Pacciani et al. 2014) which would have been excellent candidates for VHE follow-up at the time.
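Returning to the jet-power definition above, a minimal sketch of the calculation; the blob radius, bulk Lorentz factor and field strength used here are illustrative placeholders, not the fitted values of Table 2 (which is not reproduced in this text):

```python
import math

C_CM_S = 2.998e10  # speed of light, cm/s

def magnetic_energy_density(B_gauss):
    """U'_B = B^2 / (8 pi), in erg/cm^3."""
    return B_gauss ** 2 / (8.0 * math.pi)

def jet_power(R_cm, Gamma, U_prime):
    """P_i = 2 pi R'^2 Gamma^2 beta c U'_i for a two-sided jet
    (Finke et al. 2008)."""
    beta = math.sqrt(1.0 - 1.0 / Gamma ** 2)
    return 2.0 * math.pi * R_cm ** 2 * Gamma ** 2 * beta * C_CM_S * U_prime

# Placeholder values: R' = 1e16 cm, Gamma = 20, B = 1 G
P_B = jet_power(1e16, 20.0, magnetic_energy_density(1.0))
print(f"P_B ~ {P_B:.1e} erg/s")  # ~3e44 erg/s
```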
Additionally, recent theoretical and observational studies of the extragalactic background light (EBL) indicate that the horizon of 100 GeV photons is z ∼ 1 (e.g., Finke et al. 2010; Domínguez et al. 2011; Ackermann et al. 2012b; Inoue et al. 2013). The current capabilities of the LAT are allowing us to probe beyond this edge. For example, Tanaka et al. (2013) report the detection of two VHE photons from the z = 1.1 blazar PKS 0426-380 (see also Figure 13 of Ackermann et al. (2016) for the Fermi-LAT detection of E > 50 GeV photons from blazars beyond the horizon). However, the current generation of ground-based VHE observatories has not yet detected a source beyond a redshift of 1. MAGIC recently reported the detection of two high-redshift blazars, S3 0218+35 at z = 0.944 (Mirzoyan 2014) and PKS 1441+25 at z = 0.939 (Mirzoyan 2015a; Abeysekara et al. 2015; Ahnen et al. 2015), but, depending on the spectra of these sources at VHE energies, these detections might not challenge the current understanding of the EBL. Triggering VHE observations of moderately high redshift blazars with the Fermi-LAT when they are in high- and hard-flux states is a way to push the redshift limit of VHE detections further and allow us to learn more about the EBL. This will become even more important when the next generation instrument, CTA, comes online and provides a lower energy threshold combined with better sensitivity.
"year": 2016,
"sha1": "66fbf20baa9cc3d5ff37147358a7a8cfcea517e9",
"oa_license": null,
"oa_url": "https://academic.oup.com/pasj/article-pdf/68/4/51/6847584/psw049.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "66fbf20baa9cc3d5ff37147358a7a8cfcea517e9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Sequence level genome-wide associations for bull production and fertility traits in tropically adapted bulls
Background The genetics of male fertility is complex and not fully understood. Male subfertility can adversely affect the economics of livestock production. For example, inadvertently mating bulls with poor fertility can result in reduced annual liveweight production and suboptimal husbandry management. Fertility traits, such as scrotal circumference and semen quality, are commonly used to select bulls before mating and can be targeted in genomic studies. In this study, we conducted genome-wide association analyses using sequence-level data targeting seven bull production and fertility traits measured in a multi-breed population of 6,422 tropically adapted bulls. The beef bull production and fertility traits included body weight (Weight), body condition score (CS), scrotal circumference (SC), sheath score (Sheath), percentage of normal spermatozoa (PNS), percentage of spermatozoa with mid-piece abnormalities (MP) and percentage of spermatozoa with proximal droplets (PD). Results After quality control, 13,398,171 polymorphisms were tested for their associations with each trait in a mixed-model approach, fitting a multi-breed genomic relationship matrix. A Bonferroni genome-wide significance threshold of 5 × 10⁻⁸ was imposed. This effort led to identifying genetic variants and candidate genes underpinning bull fertility and production traits. Genetic variants in Bos taurus autosome (BTA) 5 were associated with SC, Sheath, PNS, PD and MP, whereas chromosome X was significant for SC, PNS, and PD. The traits we studied are highly polygenic and had significant results across the genome (BTA 1, 2, 4, 6, 7, 8, 11, 12, 14, 16, 18, 19, 23, 28, and 29). We also highlighted potential high-impact variants and candidate genes associated with scrotal circumference (SC) and sheath score (Sheath), which warrant further investigation in future studies. Conclusion The work presented here is a step closer to identifying molecular mechanisms that underpin bull fertility and production. Our work also emphasises the importance of including the X chromosome in genomic analyses. Future research aims to investigate potential causative variants and genes in downstream analyses. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-023-09475-2.
Background
Northern Australia represents a critical region for the Australian beef breeding industry, and bull fertility is an important contributor to profitability [1-3]. However, bull fertility has yet to benefit from the advancements in genomics and selective breeding that have further contributed to improving female fertility [4]. The Bull Breeding Soundness Evaluation (BBSE) provides a comprehensive assessment of male fertility-related traits linked to the number of calves a sire produces in the subsequent mating season [5,6]. The BBSE traits, which consist of assessments of body conformation, testicular development, and sperm motility and morphology, are heritable and can be used for selection and genetic improvement programs [7]. Previous genome-wide association studies (GWAS) have identified candidate genes for scrotal circumference (SC) and semen traits which are recorded in BBSE [8-11]. Identifying these critical genomic regions expands the current understanding of the underlying genetics of bull fertility and can also be used to inform genomic predictions and improve their accuracy [12].
Previous work used medium- or high-density SNP arrays, such as the Illumina 50 K panel or the BovineHD chip. Thus, genetic variants associated in GWAS are usually not causal mutations but single nucleotide polymorphisms (SNP) in linkage disequilibrium (LD) with causal variants [13]. With advancements in genome sequencing and imputation methodologies, lower-density panels can be accurately imputed to sequence level [14,15]. This allows the genome to be viewed in finer detail, which increases our chances of detecting a causal variant. This study aimed to conduct GWAS on seven BBSE traits to identify genetic variants and candidate genes underpinning bull fertility. The variants identified in this analysis could be incorporated into genomic predictions to improve the rate of genetic improvement in bull fertility and production traits.
Animals and phenotypes
A total of 6,422 animals of six breeds with BBSE measurements of seven phenotypes were used in this study. These animals are from two research populations and four stud herds from the industry. The two research populations consisted of animal data obtained from the Cooperative Research Centre for Beef Genetic Technologies (Beef CRC) project [16], which included 1,051 Brahman (BRH) and 1,819 Tropical Composite (TRC) bulls. Animal data for the four stud herds were contributed by four properties in Queensland, which included 1,288 Santa Gertrudis (SGT), 760 Droughtmaster (DMT), 844 Ultrablack (UBK), and 660 Belmont Tropical Composite (BTC) bulls [17]. The seven BBSE phenotypes used in this study included four physical measures on the animal and three semen measurements. These measurements were conducted according to the standards prescribed by Australian Cattle Veterinarians [5], which have been covered extensively in the literature [16]. Details on the seven phenotypes can be found in Table 1. Summary statistics and heritabilities of each trait are shown in Table 2. The breed-wise summary statistics for each trait are available in Additional file 6.
Phenotypic measures for all six populations were collected from 2003 to 2020. Each bull was assessed once, and the year of measurement was recorded as the fixed effect 'year of birth'. The individuals involved in the assessment and collection of phenotypes differed between the two research populations and the four stud herds. Phenotypes for the two research populations were collected and assessed by two experienced veterinarians who worked together throughout the collection period. In the four stud herds, an experienced animal scientist and veterinarian conducted the examinations. For semen morphology traits, semen samples from the two research populations were analysed in the same laboratory, whereas sperm samples from the four stud herds were analysed in a different laboratory. In the four stud herds, all phenotypes for SGT and DMT bulls were obtained at around 600 days, whereas UBK and BTC bulls had their phenotypes measured at around 440 days and 390 days old, respectively. Phenotypes were pre-adjusted using a generalised linear model analysis (PROC GLM) for their fixed effects (year of birth, breed, property) and covariates (age at measurement, PC1 and PC2) using SAS® software 9.4 (SAS Inst. Inc.). A subset of this population was previously used in a multi-breed analysis [18].
Genotypes, quality control and genomic relationship matrices
SNP genotype imputation up to whole-genome sequence level was conducted in two rounds. In the first round, the reference population was established using Beef CRC and industry cattle. This reference population consisted of 2,452 animals made up of BTC, BRH, DMT, SGT, UBK, Angus, Bonsmara, Boran, Composite, and Tuli breeds that were genotyped with the bovine high-density chip (~700 K) and phased using Eagle 2 (v2.4.1), which formed the imputation reference for the next step [19]. In the target population (n = 6,422), genotyping was first done using a variety of commercial 50 K SNP chips (Bovine SNP50 v1 or v2 or Neogen Tropical Chip v1 and v2). The genotypes from these animals were also phased with Eagle 2 (v2.4.1) [19] and formed the imputation targets for the next step. Imputation of targets from low to high density using the phased reference was conducted using Minimac3 for the autosomes and Minimac4 for the X chromosome [14]. For imputation to sequence level, genotypes from 668 animals in the 1000 Bull Genomes Project run 7 [20] were filtered to keep only bi-allelic markers and minor alleles with at least four copies. Two genomic relationship matrices (GRMs) were then built: one using the autosomal SNP (i.e., excluding X), and a second GRM using the remaining SNP from the X chromosome. Heritability estimates for each trait were obtained using the restricted maximum likelihood (REML) analysis in GCTA [21].
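For readers unfamiliar with GRMs, the sketch below builds one with the VanRaden (2008) method-1 estimator from a 0/1/2-coded genotype matrix; GCTA's own estimator differs in detail (per-SNP standardisation), so this is only an illustration:

```python
import numpy as np

def grm_vanraden(M):
    """GRM from an (n_animals x n_snp) genotype matrix coded 0/1/2:
    G = ZZ' / (2 * sum(p * (1 - p))), with Z the frequency-centred genotypes."""
    p = M.mean(axis=0) / 2.0        # per-SNP allele frequencies
    Z = M - 2.0 * p                 # centre each column
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(5, 1000)).astype(float)  # toy genotypes
print(grm_vanraden(M).shape)  # (5, 5)
```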
Genome-wide association analysis and quantitative trait loci analysis
The first two principal components, calculated with PLINK 1.9 [22], and the GRMs were used to account for the underlying genetic structure of the multi-breed population under study. As bias could be introduced when a tested SNP is also included in the GRM that is fitted in the model [23], the Leave One Chromosome Out (LOCO) approach to GWAS was implemented by building a different GRM when testing each chromosome, leaving out any SNPs that are on the tested chromosome [24]. The MLM method implemented in GCTA is as follows: y = a + bx + g⁻ + e, where y represents the phenotype in question, a represents the mean, b represents the additive genetic effect of the tested SNP, x represents the SNP genotype indicator variable coded as 0, 1 and 2, g⁻ represents the joint effect of all variants excluding those on the chromosome of the tested SNP, and e is the residual. A genome-wide significance threshold of 5 × 10⁻⁸ was used, which is a conservative Bonferroni correction. After the first round of GWAS was completed, the most significant SNP in each chromosome was refitted as a discrete covariate in a second round of GWAS for each trait in GCTA [21]. This was done to determine if the most significant SNP in each chromosome could account for the entire peak for that chromosome. GWAS Manhattan plots were created in R [25] using the Scattermore package [26]. Using bedtools [27], SNPs within 50 Kbp that met the significance threshold (5 × 10⁻⁸) were merged into a Quantitative Trait Locus (QTL). Using GALLO [28], genes found in each region were reported using gene annotation data of the Bos taurus ARS UCD 1.2 genome assembly obtained from Ensembl version 105 [29]. The find_genes_qtls_around_markers function was used to identify the genes located in each region. The following parameters were used: the method was set to gene, marker was set to haplotype, and the interval was set to 0. Similarly, the same regions were used to identify any previously reported QTL in the Animal QTL database (https://www.animalgenome.org/cgi-bin/QTLdb/BT/index) [30] that overlapped with regions reported in this study. Ensembl Variant Effect Prediction (VEP) was conducted on all significant SNPs to ascertain the impact of each variant [31]. Pairwise LD was calculated between the high-impact variants and the top variant for their respective QTL using PLINK 1.9 [22]. We considered variants that meet the R² threshold of 0.4 to be in LD. Finally, the percentage of genetic variance explained by each SNP was calculated using a formula made available in a previous report [32]: % variance = 100 × 2 p_i q_i a_i² / σ_g², where p_i and q_i are the SNP's allele frequencies, a_i is the estimated additive effect of the SNP on the trait studied, and σ_g² is the estimated genetic variance.
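A direct implementation of that last formula (with hypothetical inputs) looks like this:

```python
def pct_variance_explained(p, a, var_g):
    """100 * 2*p*q*a^2 / sigma_g^2, with q = 1 - p, following the formula
    above (ref. [32])."""
    q = 1.0 - p
    return 100.0 * 2.0 * p * q * a * a / var_g

# Hypothetical SNP: allele frequency 0.3, additive effect 0.5 cm on SC,
# genetic variance 4 cm^2 -> ~2.6% of the genetic variance
print(pct_variance_explained(p=0.3, a=0.5, var_g=4.0))
```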
Results and discussion
In this study, we conducted GWAS using sequence-level genotypes and targeted seven bull fertility and production traits measured in a multi-breed population of 6,422 bulls. This section discusses important regions and candidate genes identified through GWAS and QTL analysis. A summary of GWAS results for the most significant genomic region discovered for each trait is provided in Table 3. A complete table of GWAS summary statistics for all tested SNP and each trait is available in Additional file 1. Additional file 2 contains SNP that were significant for at least one trait. Manhattan plots for GWAS in SC and Sheath are shown in Figs. 1 and 2. The remaining Manhattan plots can be found in Additional file 3. A vast number of previously published QTL were identified for some traits. As such, we have summarised these results in Figs. 3 and 4. The sperm morphology traits (PNS, PD, and MP) did not have normally distributed residuals. This is not ideal for GWAS, but it is expected as sperm morphological abnormalities affect only some bulls. The majority of breeding bulls present a high percentage of normal sperm.
Heritability estimates of individual traits
Heritability estimates across traits range from low (0.07, CS) to high (0.59, Sheath) ( Table 2). The estimates we report for our traits were similar to those published in [8,33,34]. Our estimates for SC were similar to those measured in TRC bulls at 12 months (0.46) and slightly higher than those measured at 24 months (0.44) [34].
Single trait associations
The number of associated SNP varied enormously, depending on the target trait [35-37]. The strongest SNP association for CS (p = 1.79 × 10⁻⁹) was located at 6.7 Mb of BTA 23. This is a new discovery, as there are no CS QTL on BTA 23 currently recorded in the cattle QTL database (https://www.animalgenome.org/cgi-bin/QTLdb). The strongest SNP association for SC (p = 1.15 × 10⁻⁷⁹) was located at 79 Mb on the X chromosome. This is not the first time we have detected SNP associations on X for SC, and so this result confirms previous GWAS carried out with smaller datasets [8,9,11]. The strongest association (p = 1.98 × 10⁻²⁸⁸) for Sheath was located at 47.8 Mb of BTA 5. This finding is consistent with previous GWAS work on sheath score that used a subset of the data included in this study [32]. That subset contained only BRH and TRC bulls, which differs from our multi-breed analyses [32]. In short, the larger dataset is expanding on the initial findings, and in subsequent sections of this discussion we detail the QTL, genes, and variants uncovered with sequence-level data.
Similarly, the current dataset enhanced our ability to detect associations for the three semen traits: PNS, PD and MP. The strongest SNP association (p = 3.35 × 10⁻¹⁴) for PNS was located at 46.4 Mb on BTA 5. Previously, we had only identified SNPs on X for PNS [8,9,11]. A recent study on American cattle did identify SNP associations on chromosome 5 for PNS, corroborating our new finding in tropical breeds [38]. The strongest SNP association (p = 1.98 × 10⁻¹³) for PD was located at 46 Mb of BTA 5. A total of 173 significant SNP associations were detected for MP. The strongest SNP association (p = 2.77 × 10⁻¹⁰) was located at 6.2 Mb of the X chromosome. This multi-breed dataset confirmed that chromosome X harbors SNP associations for semen traits, as expected [8,9,11]. It also allowed the discovery of significant SNP on BTA 5, pointing to new candidate genes (described below).
The most significant SNP for a trait may not account for all the variation at a particular locus, and multiple causal variants may exist at a given locus [39]. As such, we verified the most significant SNPs for each chromosome in each trait by refitting these SNPs back into the mixed model. In general, the most significant SNP in each chromosome accounted for the entire variation in that locus for most traits, as seen in Figs. 1 and 2. However, in Sheath, the most significant SNP did not account for all the variation in BTA 5 (Fig. 2). Perhaps more than one causal SNP exists in that BTA 5 region and this is important because it overlaps with significant QTL discovered for SC and semen traits. The SNP associations across traits found in BTA 5 are discussed in more detail below (see Table 4).
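The refitting step amounts to a conditional association test; the ordinary-least-squares sketch below is a simplified stand-in for the mixed-model version run in GCTA, using simulated genotypes:

```python
import numpy as np

def conditional_assoc(y, x_test, x_top):
    """Effect and t-statistic of x_test with the chromosome's top SNP x_top
    fitted as a covariate (plain OLS; GCTA fits this within an MLM)."""
    n = len(y)
    X = np.column_stack([np.ones(n), x_top, x_test])
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = rss[0] / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2])
    return beta[2], beta[2] / se

rng = np.random.default_rng(1)
x_top = rng.integers(0, 3, 500).astype(float)
x_test = x_top + rng.normal(0, 0.5, 500)      # SNP in LD with the top SNP
y = 0.8 * x_top + rng.normal(0, 1, 500)
print(conditional_assoc(y, x_test, x_top))    # x_test signal mostly absorbed
```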
QTL analysis
The GWAS literature includes accounts of false positives: QTL or SNP associations that are seen once and not validated (the winner's curse) [40]. To mitigate this issue, we focused on reporting QTL regions that overlap with known QTL from previous work. We used the QTL database [30] to identify consensus between our current analyses and published work. The number of significant QTL identified per trait and the total sum of these QTL can be found in Table 4.
A total of 1,120 previously reported QTL overlapped with the regions identified in this study for Weight. While most of the identified QTL were associated with weight-related traits, some of these regions were also linked to female traits such as milk fat yield, health-related traits, and reproductive traits.
A total of 20,095 previously reported QTL overlapped with the significant regions reported for SC. Most QTL were associated with SC, as reported in Canchim bulls [41], and with age at puberty, as reported in our previous study with TC bulls [11]. This result is not surprising, as a bull is considered to have reached puberty after achieving an SC of 26 cm [42].
For Sheath, 2,671 previously reported QTL overlapped with the significant regions in this study. Most QTL were associated with female traits such as milk protein percentage or milk yield. However, QTL were also associated with male traits such as inhibin hormone levels and SC. Inhibin hormone levels are considered an early indicator of sexual development, and genes such as INHBE and INHBC are located on BTA 5 [11,43]. Of note, the GWAS on blood levels of inhibin used 50 K genotype data for the Brahman and TRC cohorts that were included in the current larger dataset [11].
Previously reported QTL for PNS mirrored the results for SC (Figs. 3 and 4). This is not surprising given that both BTA 5 and chromosome X were associated with both traits, and a positive genetic correlation between the two traits has been reported in a previous study [16]. For PD, previously reported QTL were associated with reproduction traits such as inhibin level and SC, whereas for MP most QTL were associated with meat and carcass traits and female traits. Recent studies in dairy populations reported a QTL on BTA 6 associated with sperm abnormality traits in Brown Swiss bulls [44,45]. Studies in Holstein bulls identified regions on BTA 1, 2, 4, 6, 7, 8, 16, 23 and 26 associated with progressive and total motility [46]. However, none of these regions overlapped with the QTL reported in our studied population. The dissimilarities in QTL reported could be due to genetic differences between beef and dairy cattle at a genome-wide level [47].
Significant QTL mapping to the X chromosome for SC, PNS and sperm abnormalities highlights its importance in male fertility and spermatogenesis. The X chromosome is a candidate region for species divergence genes which are highly expressed in the testis of mice and humans [48]. Sexual antagonism and sex-chromosome meiotic drive have been suggested as a possible reason for the large number of genes associated with spermatogenesis found in the X chromosome [8].
Overlapping regions across traits
Due to the vast number of genes detected for some traits, we have included the list of genes in each associated region for each trait in Additional file 4. To facilitate further use of our findings, we have included a list of all genes across associated regions in Additional file 5. A list of genes across associated regions that map to at least four traits is shown in Table 5. Across traits, we observed that BTA 5 is an important region for male fertility in bulls. Regions in BTA 5 with overlapping results point to SNP and genes associated with five out of the seven studied traits: SC, Sheath, PNS, PD and MP (Table 5). Sixteen candidate genes were identified within these significant regions as associated with at least four traits. Next, we reviewed the literature to discuss how the known functions of these genes could be related to SC, Sheath, or sperm morphology traits.
Three candidate genes (DYRK2, CAND1, and GRIP1) listed in Table 5 have known biological roles linking them with spermatogenesis. Spermatogenesis is likely to underpin most bull fertility traits, so these genes warrant further discussion. The DYRK family of kinases displayed high expression in the testis and was suggested to play a role in the later stages of spermatogenesis [49]. The CAND1 protein is highly expressed in the brain and testis in humans and has been reported to be highly expressed in spermatozoa of fertile men [29,50]. In mice, GRIP1 is necessary for the adhesion of Sertoli cells to germ cells and plays an important role in efficient spermatogenesis [51]. Mice without GRIP1 appeared to suffer from impaired fertility due to abnormalities in the testis [51]. However, little is known about the role of GRIP1 in bull fertility, although its gene and protein expression in different stages of the oestrous cycle have been covered previously [52]. Perhaps these genes are similarly involved with spermatogenesis in bulls. However, further research is required to ascertain their effects on bovine spermatogenesis and testicular function.
The remaining candidate genes from Table 5 do not have a known function that directly links them to spermatogenesis. However, they are ubiquitously expressed in reproductive tissues. The CPNE (copine) gene family encodes membrane-bound proteins with multiple functions in membrane transport, signal transduction and cancer [53]. CPNE8 is a gene expressed ubiquitously in the prostate, testis, heart, and brain tissues [53,54]. It was previously suggested that CPNE8 might be an important gene for prostate regulation and development [54]. The PTPRR gene may have a tumour-suppressive function in prostate cancer, and prostate cancer samples often contain lower levels of PTPRR compared to regular tissue samples [55,56]. In addition, the PTPRB gene was expressed in porcine and equine spermatozoa and found mainly in the plasma membrane of sperm heads, the acrosome, and the tail [57]. The expression of PTPRB mainly in the tail of spermatozoa suggests its involvement in sperm motility regulation [57]. Previous literature has highlighted the different functions of tyrosine phosphorylation in spermatozoa, which are crucial for successful fertilisation [58-60]. The expression of the BEST3 gene in the form of bestrophin 3 is ubiquitous in human muscle but found at low levels in the bone marrow, testis and retina [61]. At the same time, BEST3 plays a role in regulating cell proliferation and apoptosis, both of which are important features of mammalian spermatogenesis [62-65]. Most of these genes appear in the cancer literature, which is consistent with reproductive physiology that often involves cell proliferation [66].
Notably, three genes within the BTA 5 regions (Table 5) play an important role in tropical adaptation, which is expected in this cattle population. Between 47.3 Mb and 47.9 Mb is a common region in BTA 5 that contains several genes, including HELB, which is suggested to influence tropical cattle adaptation, helping cattle cope with harsh temperatures and high intensity of ultraviolet light [67]. IRAK3 is suggested to be involved in intramuscular fat deposition and systemic inflammation regulation. The region containing HMGA2 has been previously associated with navel length in Nellore cattle and has also been reported to regulate body size [68,69]. A copy number variant (CNV) in the HMGA2 gene has been proposed to be a functional variant associated with navel length [68]. This CNV is within a detected QTL and may play a role in sheath score, SC, PNS, and PD in the studied population. We conducted a preliminary analysis to observe the same region (5:47,840,005-47,846,215, reference genome ARS UCD 1.2) and explored whether this CNV segregates in our population. Using 138 whole-genome-sequenced cattle that were part of the reference panel for the SNP imputation, we observed in 79 of them an increased coverage depth, which likely indicates the presence of a CNV (a minimal sketch of this kind of check is given below). Future studies are required to confirm whether this region of increased coverage depth is due to a CNV segregating in our population and whether this CNV is the same as previously described. Additional efforts should be made to impute this CNV for the entire multi-breed population and verify its contribution to these traits. Regions in BTA 5 have been consistently reported in previous studies. BTA 5 is evidently harbouring important regions for fertility traits and production traits. Dissecting the genes and mutations implicated in fertility, as opposed to heat tolerance or growth, could further inform selective breeding.
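A minimal version of such a coverage check, with hypothetical file names and an arbitrary control region, could be scripted around samtools (which must be installed and on PATH):

```python
import statistics
import subprocess

def mean_depth(bam_path, region):
    """Mean per-base read depth over a region, parsed from samtools depth."""
    out = subprocess.run(
        ["samtools", "depth", "-a", "-r", region, bam_path],
        capture_output=True, text=True, check=True,
    ).stdout
    depths = [int(line.split("\t")[2]) for line in out.splitlines()]
    return statistics.fmean(depths) if depths else 0.0

# Hypothetical animal; a ratio well above 1 vs. a control region hints at
# a duplication-type CNV (a real analysis would normalise genome-wide)
cnv = mean_depth("animal1.bam", "5:47840005-47846215")
ctrl = mean_depth("animal1.bam", "5:40000000-40006210")
print(f"normalised depth ratio: {cnv / ctrl:.2f}")
```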
Variant effect prediction (VEP): candidate genes
For the most significant variant in each trait (Table 3), VEP did not reveal any variants with a moderate or high functional impact on a protein. Instead, most variants were labelled as modifiers, which either have effects that are difficult to predict or have little evidence of protein impact.
When VEP was expanded to include SNPs within the candidate regions listed in Table 4, similar results were observed, with most significant variants categorised as modifiers (Fig. 5). This is logical, as most traits examined in this study are complex; the individual variants segregating at various loci across the genome each have little effect on the protein or phenotype [70]. However, we observed one variant of high functional impact, located within IRAK3 (Table 6). While this SNP may not have an equivalent quantitative effect on the trait compared to the peak SNP, it could still have a high functional impact which should be considered. As mentioned previously, IRAK3 plays a role in immune suppression. A rodent study reported a negative relationship between IRAK3 and TNF-α expression and suggested that IRAK3 is associated with immune suppression during cases of sepsis [71]. IRAK3 may also be a factor produced by Sertoli cells that causes inflammatory effector T-cells to develop regulatory functions which reduce the number of available T-cells [72]. A recent review highlighted that Sertoli cells aid in creating and maintaining an environment that shields germ cells from autoimmune destruction [73]. This is due to the presentation of antigens on the surface of end-stage germ cells, which are detected as foreign and can lead to autoimmune destruction resulting in suboptimal fertility or sterility [73,74]. Perhaps a variant of high impact on IRAK3 may affect the protein's ability to regulate autoimmune destruction efficiently, leading to decreased fertility. However, further downstream work is required to verify this speculation.
Variants prioritized with the variant effect predictor
We identified 17 high-impact variants, predicted with VEP, as shown in Table 6. High-impact variants are predicted to have a disruptive effect on a protein, which may have a potential downstream impact on the associated phenotypes [31]. Pairwise LD calculations between high-impact variants and the top variants for their respective QTL are available in Additional file 6 (Tables S7 to S10). Among these high-impact variants, 15 were on BTA 5, and the remaining two were found on BTA 2 and the X chromosome. All variants were associated with either SC, Sheath, PNS or PD. Seven high-impact variants were in LD with the top variants for their respective QTL, with R² ranging from 0.41 to 0.97. The high-impact variant rs479267746 lies within the coding region of a gene (IRAK3) which has been previously associated with fertility. The expression of IRAK3 by Sertoli cells, which play an important role in spermatogenesis, has been discussed in detail in the previous section. The high-impact variant rs439285466 lies within the protein-coding region of a gene called RLIM. Although RLIM has not been associated with bull fertility or bull production traits, it has been previously associated with the regulation of cell proliferation, which is a fundamental process for spermatogenesis [75]. Considering the LD with top QTL variants for SC and other bull traits, together with the VEP results and the known functions of IRAK3 and RLIM, we would prioritize the two high-impact variants in these genes for future work. These variants should be further tested for their impact on bull fertility.
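The PLINK-style genotypic r² used for these LD checks is just the squared Pearson correlation of 0/1/2 genotype counts; a toy sketch with simulated genotypes:

```python
import numpy as np

def ld_r2(g1, g2):
    """Squared Pearson correlation between two SNPs coded 0/1/2, as
    reported by PLINK's --r2 on genotype counts."""
    r = np.corrcoef(g1, g2)[0, 1]
    return r * r

rng = np.random.default_rng(2)
g1 = rng.integers(0, 3, 1000).astype(float)
mask = rng.random(1000) < 0.8                      # 80% of genotypes shared
g2 = np.where(mask, g1, rng.integers(0, 3, 1000))  # partial LD with g1
print(f"r^2 = {ld_r2(g1, g2):.2f}")
```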
The remaining 10 high-impact variants, while not in LD (R² < 0.4) with the top variants of the corresponding QTL, were significantly associated with either SC or Sheath themselves. Some of the high-impact variants identified in this study lie within known genes (NUDT4, SMUG1, KRT77, BIN2, ARHGAP9, and CFAP54) previously not connected with bull traits or male fertility [75-82]. We propose these 17 variants be further investigated in subsequent analyses to ascertain variant effects in other populations. (Table 6 legend: A, Reference SNP cluster ID; B, Chromosome and Base Pair; C, Predicted Protein Consequence; D, Gene name; E, Trait associated.)

Acknowledgements

The authors also would like to acknowledge John Bertram for his contributions to the collection of phenotypic records throughout this project. The authors also recognise that a small subset of the GWAS Manhattan plots relating to sperm morphology traits has been submitted for publication in the proceedings of the World Congress on Genetics Applied to Livestock Production [83].
Authors' contributions
AT performed the analyses, wrote the main text, and prepared the figures and tables. MF, LPN and AR conceptualised and supervised the work and aided AT with statistical analysis and interpretation of results. MM was involved in the collection of phenotypic records used in this study. All authors read and approved the final manuscript.
Funding
This project was co-funded by CSIRO, the University of Queensland and Meat and Livestock Australia (L.GEN.1818).
Data Availability
The raw data on which the conclusions of the paper rely are available from CSIRO (https://www.csiro.au/) under a Data Use Agreement. Summary statistics for every tested SNP are available as supplementary files (Additional files 1 to 5). The datasets can be accessed in ScienceDB using the following link: https://www.scidb.cn/s/7Jbi2e. Additionally, the unique links for each additional file have been listed in the supporting information section below.
Original phenotype data can be obtained from the respective producers under circumstances where a data-sharing agreement has been reached; this can be arranged through the corresponding author.
Declarations

Ethics approval and consent to participate
The authors confirm that all methods were performed in accordance with the relevant guidelines and regulations. The animal data used in this paper were obtained from two separate projects overseen by two different institutional review boards. In the first project, the authors confirm that the institutional review board JM Rendel Laboratory Animal Experimentation Ethics Committee (CSIRO, Queensland, Australia) approved the protocols involved in the handling and sampling of the two research populations (TBC107 and RH225-06, covering 1999-2006 and 2006-2010). For the second project, the authors confirm that the institutional review board CSIRO Animal Care and Use Committee granted a waiver of ethics approval for the four industry herds, as ethics approval was not required for archived historical samples of these animals obtained from producers. No animals were handled by the authors; only existing data were used in this project.
Consent for publication
Not applicable.
"year": 2023,
"sha1": "9dadc87e0dcb4e32895afeafa4ad9ea2c1965b15",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "9dadc87e0dcb4e32895afeafa4ad9ea2c1965b15",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Synthesis of Rigid Polyurethane Foams Incorporating Polyols from Chemical Recycling of Post-Industrial Waste Polyurethane Foams
The preparation and characteristics of rigid polyurethane foams (RPUFs) synthesized from polyols obtained by glycolysis of post-industrial waste RPUFs have been studied. More precisely, the waste rigid foams that have been chemically recycled by glycolysis in this work are industrially produced pieces for housing and bracket applications. The glycolysis products have been purified by vacuum distillation. The physicochemical properties of the polyols, such as hydroxyl value, acid value, average molecular weight (Mn) and viscosity, have been analyzed. The chemical structure and thermal stability of the polyols have been studied by means of FTIR and TGA, respectively. Partial substitution of the commercial polyol (up to 15 wt.%) by the recycled polyols increases the reactivity of the RPUF synthesis, according to short characteristic times during the foaming process along with more exothermic temperature profiles. Foams formulated with recycled polyols have a lower bulk density (88.3-96.9 kg m⁻³) and smaller cell sizes compared to a conventional reference RPUF. The addition of recycled polyols (up to 10 wt.%) into the formulation causes a slight decrease in compressive properties, whereas tensile strength and modulus values increase remarkably.
Introduction
Polyurethane (PU) is one of the most versatile polymers, offering a wide variety of commercial applications. PUs can be classified mainly into foams and CASEs (Coatings, Adhesives, Sealants, Elastomers) [1]. Foams are further subdivided into flexible foams, such as those used in mattresses, car seats or packaging; and rigid foams, which are generally used as insulation in buildings and in commercial and domestic refrigeration [2,3]. However, one of the main drawbacks to their great commercial success is the challenging management of the large amount of waste that is generated when the products including those foams reach their end of life [4]. Current environmental legislation and the transition to a circular economy model point out an alternative to landfilling or disposal of polymer waste: recycling [5]. Polyurethane recycling processes can be divided into physical and chemical treatments. Physical recycling processes do not modify the internal structure of the polymer; instead, the polymer residues are mechanically processed into flakes, granules or powder to be used in the production of new materials [6,7]. These physical processes can be successfully applied to thermoplastic polymers but are ineffective for most types of polyurethane due to their thermosetting nature. Regarding chemical recycling processes, using thermochemical or solvolysis reactions, such as glycolysis [8-10], aminolysis [11-13], alcoholysis [14,15] or hydrolysis [16,17], polymers are broken down into basic hydrocarbon units or monomers that can be used in the chemical industry as raw materials. Glycolysis is undoubtedly the most important polyurethane material recovery process, with remarkable advances for all different classes of polyurethane, including rigid foams [18-24]. The glycolysis of RPUFs (Equation (1)) consists of treating the residues with a low molecular mass glycol, thus obtaining a homogeneous, single-phase product with low viscosity and high hydroxyl value that can be used as a partial substitute for commercial polyether polyols in the synthesis of new rigid foams [25,26]. In essence, the glycol transesterifies the urethane groups, releasing the original polyol:

R-NH-CO-O-R′ + HO-CH₂CH₂-OH → R-NH-CO-O-CH₂CH₂-OH + R′-OH (1)
Regarding prior research carried out on glycolysis of post-industrial waste RPUFs, Morooka et al. [18] chemically recycled RPUFs from refrigerators using diethylene glycol (DEG) as solvent and BaO or diethanolamine (DEA) as catalysts. They observed that the obtained glycolysate could be added to commercial polyol (up to 10 wt.%) to produce new RPUFs with thermal conductivities and compressive strengths similar to those of conventional foams. Zhu et al. [20] obtained higher yields by using ethylene glycol (EG) instead of DEG in the glycolysis of rigid foams from refrigerators. They observed a higher catalytic efficiency of NaOH and defined the optimal reaction conditions to be an EG:PU ratio of 1:1 by mass, a catalyst concentration of 1 wt.%, a temperature of 198 • C and a reaction time of 2 h. Recovered polyols were incorporated up to 10 wt.% with respect to the total amount of polyol in the new foam formulations.
Polyurethane chemistry is fundamentally based on the condensation reaction between polyols and diisocyanates. These isocyanate groups are very reactive toward species with active hydrogens, such as hydroxyl groups, urethane groups and water. For this reason, during the foaming process of RPUFs, many exothermic reactions of isocyanate groups occur consecutively [27]. The main reaction that takes place is shown in Equation (2), known as the gelling reaction, in which the isocyanate groups and the hydroxyl groups of the polyol produce a cross-linked internal structure due to the urethane group that is generated:

R-NCO + R′-OH → R-NH-CO-O-R′ (2)
Equation (3) shows the reaction of isocyanate groups with water, known as the blowing reaction. The isocyanate group reacts with water to form an amine and CO₂:

R-NCO + H₂O → R-NH₂ + CO₂ (3)

The gas expands and cells are formed in which the carbon dioxide is encapsulated. This reaction is highly exothermic and provides the main source of the heat required for the expansion and drying of the foam.
Furthermore, isocyanate reacts further due to its high reactivity in the presence of protons. These reactions are known as crosslinking reactions. For instance, amines formed during the blowing reaction react with the free isocyanate, yielding a substituted urea (Equation (4)):

R-NCO + R′-NH₂ → R-NH-CO-NH-R′ (4)
Although previous research has identified glycolysis as one of the most suitable chemical recycling processes for PU, applications developed from those findings mainly focus on clean flexible PU foams with known composition. Recovery of polyols is easier there, and higher purity can be achieved, due to the less intensive post-treatment required by glycolysis products of flexible PU. In the case of rigid PU, the reaction product is a single phase, and for that reason the recovered polyols are mixed with the solvent, with other chemical by-products of the reaction, and with compounds derived from the composition of the PU waste. Hence, it is necessary to concentrate the recovered polyols and validate their use in the synthesis of new PUs. Furthermore, the present work deals with real PU waste that is currently being generated and landfilled in large quantities, so this paper aims to present a technically feasible solution based on the principles of the circular economy, thus allowing the manufacture of new value-added polyurethanes and closing the cycle of PU material through the application of a chemical recycling process.
The purpose of the research is to understand the steps involved in the chemical recycling of post-industrial complex waste RPUFs for housing and bracket applications, and the synthesis of new recycled foams based on sustainable polyols. After purifying the reaction products by vacuum distillation, the recycled polyols have been incorporated into new formulations of RPUFs. The recovered polyols have been analyzed through various characterization techniques to examine their composition. Certain compounds, such as amines, that significantly influence the foaming process have also been identified and quantified. Foam synthesis reactions have been monitored, performing an analysis of the reactivity and exothermicity of the process. Temperature profiles and characteristic times of the reactions have been observed as a function of the amount of recycled polyol incorporated. Microscopic structures of foams obtained with recycled components have been compared to conventional foams and physical, chemical and mechanical properties of interest have been verified.
Materials
RPUFs that have been chemically recycled by glycolysis in this work are pieces for housing and bracket applications (70-80 kg m⁻³). They are industrially produced pieces, manufactured and provided by the company Arcesso Dynamics S.L. (Barcelona, Spain). Ethylene glycol (EG) and NaOH were used as the solvent and catalyst for glycolysis, respectively. For the synthesis of foams, Arcesso Dynamics S.L. provided a mixture of Lupranol® 3300, Lupranol® 3422 and Lupranol® 3402, used as the conventional commercial polyol (CP) with a hydroxyl value of 395 mg KOH g⁻¹. A polymeric 4,4'-diphenylmethane diisocyanate (pMDI) with an isocyanate (NCO) content of 31% was also provided by Arcesso Dynamics S.L. Dimethylaminoethanol (DMAE) and distilled water were used as foaming catalyst and blowing agent, respectively. All chemicals were used as received.
Glycolysis of Waste RPUFs and Purification
Waste polyurethane foams were milled by means of a blade mill to particle sizes smaller than 2 mm in diameter prior to the glycolysis reaction. The selected glycol was fed into a 500 mL three-necked glass reactor equipped with a stirrer set at 100 rpm, a thermometer, and a reflux condenser. Ethylene glycol as the glycolysis reagent and NaOH as the catalyst were added at the mass ratios given in Table 1 and preheated to the boiling temperature of the glycolysis reagent used. Finally, the waste PU pieces were fed in. The glycolysis conditions given in Table 1 are optimum values determined in previous works carried out by Gaiker to reach full depolymerization of PU waste. The final reaction product was filtered under pressure and distilled by means of vacuum distillation in a rotary evaporator, under a vacuum close to 50 hPa and with an oil bath at 140 °C, for approximately 3 h 30 min. Finally, two products were obtained: the distilled glycolysis agent and the recycled polyol, which were collected separately in order to analyze them and, in the case of the polyol, to use it in the synthesis of new RPUFs.
Synthesis of New RPUFs
New RPUFs were synthesized in 200 mL polypropylene cups by mixing different amounts of commercial polyol (CP), recycled polyol (RP), water (foaming agent), catalyst and isocyanate by a two-step method. First, the mixture of polyols, water and catalyst (component A) was stirred at 2000 rpm for a given time. Once a homogeneous mixture was obtained, the appropriate amount of pMDI (component B) was added, stirred for only 10 s and then free foaming was observed. Finally, the obtained foams were left to fully cure for at least 24 h at room temperature before performing the measurements. The NCO index was fixed at 1.1. The contents of the RPs with respect to total polyol weight were set at 0%, 5%, 10% and 15% in order to validate the inclusion of recycled polyols into new PU formulations. The RPUF prepared only with commercial polyol was designated as COM. The formulations of the RPUFs with different contents of RPs are summarized in Table 2.
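The isocyanate dosage implied by a fixed NCO index follows from standard equivalent-weight bookkeeping; this sketch is not taken from the paper, and the water loading is an illustrative placeholder (only the OHV of 395 mg KOH g⁻¹ and the 31% NCO content come from the text above):

```python
def isocyanate_dose_g(m_polyol_g, ohv_mgKOH_g, m_water_g, nco_wt_pct,
                      index=1.1):
    """Mass of pMDI for a target NCO index.
    OH equivalents: m * OHV / 56100 (56100 mg KOH per OH equivalent);
    water counts 2 active H per mole (equivalent weight 9 g/eq);
    NCO equivalent weight of the isocyanate: 4200 / %NCO."""
    eq_oh = m_polyol_g * ohv_mgKOH_g / 56100.0 + m_water_g / 9.0
    eq_wt_nco = 4200.0 / nco_wt_pct
    return index * eq_oh * eq_wt_nco

# 100 g polyol blend (OHV 395), 2 g water (placeholder), pMDI with 31% NCO
print(f"{isocyanate_dose_g(100.0, 395.0, 2.0, 31.0):.0f} g pMDI")  # ~138 g
```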
Characterization of Polyols
Hydroxyl and acid values of the polyols were determined by titration methods according to ASTM E1899-16 and ASTM D4662 standards, respectively. The viscosity of the polyols was determined by means of an Anton Paar rotational rheometer, model MCR 501. Gel Permeation Chromatography (GPC) was used to determine the average molecular weight (Mn) of the polyols. Measurements were performed with a Viscotek GPCmax VE-2001 TDA 302 Detector. The chemical structure of the products obtained was measured by means of a Fourier transform infrared (FTIR) spectrometer (Shimadzu IRAffinity-1S) in transmittance mode in the wavenumber range from 600 to 4000 cm−1 at a resolution of 4 cm−1. Thermogravimetric analysis (TGA) was carried out to determine the thermal stability of the materials by means of a Mettler-Toledo thermobalance, model TGA/DSC 1 Stare System. The analysis was conducted by heating the sample from room temperature to 600 °C, with a ramp of 20 °C min−1 under a constant N2 flow (50 mL min−1).
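As an illustration of what a hydroxyl value expresses, the snippet below shows a generic back-titration calculation. Note that ASTM E1899-16 actually quantifies OH groups via reaction with an isocyanate reagent, so this is only a sketch of the unit arithmetic, not the standard's procedure; all numbers are assumed.

```python
# Hedged sketch: the classical back-titration arithmetic behind a hydroxyl
# value. This is an illustration of the mg KOH/g unit conversion only.

def hydroxyl_value(v_blank_ml: float, v_sample_ml: float,
                   normality: float, sample_mass_g: float) -> float:
    """OH value in mg KOH per g of sample."""
    return (v_blank_ml - v_sample_ml) * normality * 56.1 / sample_mass_g

# Illustrative numbers (assumed): 0.5 N titrant, 1.2 g sample
print(hydroxyl_value(25.0, 8.3, 0.5, 1.2))  # ≈ 390 mg KOH/g
```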
Characterization of RPUFs
To characterize the reactivity of the RPUFs on foaming and polymerization, the characteristic times (cream time, gel time, rise time, tack-free time) of PU foam synthesis were recorded according to ASTM D7487-18, and temperature profiles were evaluated with the aid of a thermocouple. The cell structures of the RPUFs were studied using a scanning electron microscope (SEM, EVO50, ZEISS) at an accelerating voltage of 20 kV. The samples were coated with gold/palladium 80/20 wt.% before observation. Bulk densities of RPUFs were measured according to the ASTM D1895-17 standard. Tensile strength and compressive strength of the foams were determined according to the ASTM D790-10 and ASTM D1621-00 standards, respectively.

Table 3 shows the results for hydroxyl, acid and average molecular weight values, along with the viscosity at 25 °C, of the commercial and recycled polyols. RPs present significantly higher hydroxyl values than the commercial polyol because of the presence of glycolysis by-products and ethylene glycol (hydroxyl value of the EG: 1807.6 mg KOH g−1) remaining in the polyol after purification of the glycolysate. The presence of by-products can be explained by considering side reactions in the glycolysis process besides the transesterification reaction between PU and glycol, such as the glycolysis of urea groups producing a carbamate and an aromatic amine [3]. These urea groups are present in the PU structure due to the amines formed in the blowing reaction during the foaming process, which in turn react with the free isocyanate to produce substituted urea [27]. Although the hydroxyl values of the recycled polyols are high, they are within the range for polyols used in the synthesis of RPUFs [28]. The acid value is higher in RPs compared to the commercial polyol, although it does not exceed the maximum acidity (10 mg KOH g−1) that polyols should possess [28]. The acid value mainly indicates the concentration of carboxyl groups present in the polyols; therefore, a limitation of the glycolysis of waste RPUFs is observed [29]. RPs are much more viscous than the commercial polyol, possibly due to the higher number of hydrogen bonds present, which is directly related to the higher hydroxyl and average molecular weight values of the RPs [30]. Although the viscosity of the RPs is high, it is similar to those of commercial polyols used in the synthesis of RPUFs [31]. It must be noted that the average molecular weight (Mn) values of the RPs are higher compared to the commercial polyol, which is directly related to their higher viscosity and hydroxyl functionality.

Figure 1 shows the FTIR spectra of the glycolysis products compared to the corresponding RPs after distillation. A reduction in the intensity of the peak at 3345 cm−1 (corresponding to the O-H bond) was observed in the polyols after purification by vacuum distillation. The peak at 1614 cm−1 indicates the presence of amines in the RPs, a by-product of PU glycolysis due to the urea groups present in the polyurethane structure. These amines could accelerate the synthesis reaction of new rigid foams, leading to undesired reactions during foaming. Distilled products were also analyzed and compared with EG (Figure 3), the glycolysis agent of the depolymerization. The FTIR spectra of the distillates are practically identical to the EG spectrum, although the peak at 3345 cm−1 in the RP2 distillate was slightly more intense due to possible impurities. Thermogravimetric analysis was performed to study the thermal stability of the polyols.
Figure 4a shows the degradation range (200-400 °C) of the commercial polyol, which is narrower than that of the RPs (130-500 °C). In other words, the commercial polyol seems to present higher thermal stability compared to the RPs recovered via glycolysis. The DTG curve indicates the temperature at which the rate of thermal degradation is the highest (Figure 4b). For the CP, this temperature is about 320 °C, while for RP1 it is 280 °C and for RP2 245 °C. This phenomenon may be a consequence of the relatively low thermal stability of urethane groups, which are present in the RPs because of the limitation of glycolysis. After thermal degradation, a higher percentage of solid residue remained in the RPs compared to the CP. The solid residue in RP1 and RP2 represented 13% and 11%, respectively, whereas the undegraded solid did not exceed 1% for the CP. These results could be due to the high concentration of hydroxyl groups in the RPs.
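The DTG curve mentioned above is simply the first derivative of the TGA trace. A minimal numpy sketch, using an invented two-step mass-loss curve rather than the paper's data, illustrates how the peak degradation temperature is located:

```python
# Hedged sketch: extracting the DTG peak (temperature of maximum mass-loss
# rate) from a TGA trace. The synthetic curve is a made-up two-step
# degradation, not the measured data.
import numpy as np

T = np.linspace(25, 600, 1200)                        # temperature, °C
mass = (100 - 60 / (1 + np.exp(-(T - 300) / 20))
            - 25 / (1 + np.exp(-(T - 450) / 25)))     # synthetic % mass

dtg = np.gradient(mass, T)            # d(mass)/dT, % per °C
t_peak = T[np.argmin(dtg)]            # most negative slope = fastest loss
print(f"DTG peak at ~{t_peak:.0f} °C")
```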
Synthesis of PU Foams Based on Recycled Polyols
The absence of catalyst in the formulations containing RPs is remarkable (Table 2), since the amines present in the RPs accelerate the reaction without the need to add the additive. The physical appearance and internal structure of the obtained foams are shown in Figure 5. Greater cross-linking is observed in the structure of the foams as the amount of RP incorporated increases.

Figure 6 shows the temperature profiles registered during the synthesis of RPUFs with both RPs. More exothermic profiles are observed as the amount of RP incorporated in the formulations increases, due to crosslinking reactions between the amines present in the recovered polyols and the isocyanate producing substituted urea [27]. At room temperature and in the absence of catalyst, this particular reaction is much faster than the reaction with alcohols, a conclusion that the reduced characteristic times of the reactions also confirm. However, it should be noted that reactions with 5 wt.% and 10 wt.% of RP2 are not as exothermic as those with the same amounts of RP1. Characteristic times for each RPUF synthesis are compared in Figure 7. Remarkable differences are observed when RPs are added to the formulation. As previously discussed, foaming is accelerated when either RP is incorporated, due to the possible presence of amines. As the content of the RP increases, the characteristic times become shorter; even the tack-free time is shorter than the rise time for the 10 wt.% and 15 wt.% formulations of both RPs. This phenomenon can be attributed to the catalytic effect of the amines present in the RPs. Although these time values are very short, RPUFs with 5 wt.% RP in their formulation present characteristic times within the range of the foaming process.

All samples (Table 2) present a polyhedral and closed-cell structure. Cell size decreases and the heterogeneity of the sample structure increases as the RP content is increased, due to the fast reactivity and high viscosity of the RPs. The cell size range of the RPUF designated as COM is 219-547 µm, significantly higher than those of the samples containing recycled polyols, designated as RP1-5 (180-532 µm), RP1-10 (155-387 µm) and RP1-15 (132-349 µm) for samples synthesized with RP1, and RP2-5 (162-393 µm), RP2-10 (135-376 µm) and RP2-15 (118-362 µm) for samples synthesized with RP2. Thus, the higher reactivity and viscosity of the RPs accelerated foaming and led to smaller cell sizes as the RP content increased [32]. As previously discussed, the recovered polyols are much more viscous than the commercial polyol, so the CO2 generated during the blowing reaction (Equation (3)) gets trapped in the foam structure. This causes a greater expansion of the PU sample, reaching lower bulk density values as the amount of recycled polyol is increased.

The mechanical properties of the synthesized foams are evaluated only for RP1, since both recycled polyols present similar characteristics in terms of reactivity, viscosity, foaming parameters and chemical structure. RPUFs synthesized with RP1 present lower bulk density values (88.3 ± 0.75 kg m−3 for RP1-10 and 96.9 ± 0.92 kg m−3 for RP1-5) than that of the conventional foam (102.5 ± 0.84 kg m−3). The introduction of RP1 in the formulation of RPUFs increases the molecular mass of the polyol mixture, resulting in lower bulk density foams [32,33]. Tensile properties improved notably when RP1 was added to the formulation of new RPUFs.
The tensile strength value of the commercial foam was 0.91 ± 0.05 MPa, whereas the values for foams with 5 wt.% and 10 wt.% of RP1 were 1.43 ± 0.04 MPa and 1.04 ± 0.04 MPa, respectively (Figure 9). Likewise, tensile modulus values were higher when RP1 was added to the formulation. An increase in the RP content leads to an improvement in the tensile properties of RPUFs, probably due to a higher cross-linking of the foams with recycled components [34]. Partial substitution of the commercial polyol by RP1 up to 10 wt.% caused a slight decrease in the compressive strength values.
Conclusions
New RPUFs were prepared using two types of RPs with different properties. The RPs were obtained via glycolysis of post-industrial waste polyurethane foams and subsequent purification by vacuum distillation. The RPs were mixed with conventional polyether polyol to prepare RPUFs.
Recycled polyols presented significantly higher hydroxyl, acid, average molecular weight and viscosity values than the commercial polyol because of the presence of glycolysis by-products (carbamates, amines) and ethylene glycol remaining in the polyol after purification of the glycolysate. FTIR analysis showed that the RPs present C=O (1737 cm−1) and N-H (1514 and 1614 cm−1) bands of urethane and urea groups due to the limitation of the PU glycolysis reaction. It is possible that the N-H band indicates the presence of amines, a by-product of PU glycolysis due to the urea groups present in the polyurethane structure.
TGA determined that RPs exhibit lower thermal stability since their degradation range is much wider (130-500 °C) than the commercial polyol range (200-400 °C), due to the low thermal stability of urethane groups, which are present in the RPs because of the limitation of the glycolysis reaction.
The amines that remained in the RPs promoted the reactivity, judging by the shorter characteristic times obtained and the more exothermic temperature profiles recorded during foaming. The higher reactivity and viscosity of the RPs accelerated the foaming reaction and led to smaller cell sizes according to the SEM images.
Bulk density and compressive strength values of the RPUFs decreased slightly with the incorporation of 5 wt.% and 10 wt.% of RP1. However, the tensile properties of RPUFs were remarkably increased with the addition of recycled polyols.
All in all, the prepared RPUFs show improved tensile properties when recycled polyols are introduced, probably as a consequence of the higher cross-linking of the samples containing recycled components. This is relevant for the housing and bracket applications in which these RPUFs are used, representing an improvement over conventional foams. Nevertheless, the proportion of recycled polyols included in the prepared RPUFs is still small, since the reactivity of the RPs is too high to introduce more than 15 wt.% of recovered polyol into the formulations. Incorporation percentages are usually lower in the consulted papers on RPUF recycling because the polyols are mixed with other chemical species and their thorough purification is technically difficult. Therefore, the inclusion of more than 10 wt.% of recovered polyol in a final PU formulation with adequate properties can be considered an interesting and significant advance in the field of chemical recycling of rigid PU waste. In addition, future research to be carried out by our research group will modify and optimize the formulation of the foams synthesized with RPs, introducing a less reactive isocyanate into the formulation and thus enabling the incorporation of higher concentrations of polyols coming from the polyurethane wastes.
Funding: This research was funded by the Department of Economic Development and Infrastructures of The Basque Government by its ELKARTEK 2020 Program (NEOPLAST Project, Reference KK-2020/00107), by the Spanish Ministry of Science and Innovation by its R&D Program "Retos Colaboración" (FOAM2FOAM Project, Exp. RTC-2017-6755-5) and also by CDTI (Centro para el Desarrollo Tecnológico Industrial), within the framework of grants for Technological Centers of Excellence "Cervera" (OSIRIS Project, CER-20211009).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest. | 2022-03-17T15:29:59.004Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "b8ab22773b92663559944b6dc590701001e6d402",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/14/6/1157/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cfb29f054ee472d3e7e042342d1eee27aa09f163",
"s2fieldsofstudy": [
"Materials Science",
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
19119869 | pes2o/s2orc | v3-fos-license | The Anti-Plasticization Effects of Dimethyl Carbonate Plasticizer in Poly(Methyl Methacrylate) Electrolyte Containing Rubber
Various plasticizers have been used in polymer electrolyte systems, mainly to enhance the ionic conductivity of the electrolyte. Therefore, in this study, the effects of the dimethyl carbonate, DMC plasticizer on the ionic conductivity of a poly(methyl methacrylate), PMMA electrolyte film blended with 50% epoxidized natural rubber, ENR 50 were investigated. Unfortunately, the addition of the DMC plasticizer reduced the ionic conductivity of this blend system at all amounts of plasticizer added. In addition, this DMC-plasticized system also exhibited higher activation energy than the unplasticized system. The effects of the DMC plasticizer on the conductivity of this electrolyte system were investigated and explained using Field Emission Scanning Electron Microscope, FESEM and Fourier Transform Infrared, FTIR Spectrophotometer analyses. From these analyses, it can be concluded that the dielectric constant of a plasticizer is important when dealing with an electrolyte system containing rubber.
Introduction
Plasticizer is defined as a relatively low molecular weight substance of low volatility, which, when added to another material, changes the physical and chemical properties of the material in such a manner that the finished product is in a more useful form [1]. In polymer electrolytes, plasticizers or mixed plasticizers are added to soften rigid polymers and to lower the glass transition temperature, Tg, of a polymer or polymer blend by increasing the segmental motion of the polymer backbone, hence assisting the transport of ions along the polymer chain. Furthermore, the addition of plasticizers helps to increase the dissolution of salts and the dissociation of ion pairs, and hence increases the number of free ions.
To date, several plasticizers have been used for the above-mentioned purposes, such as ethylene carbonate (EC) [2], propylene carbonate (PC) [3], dimethyl carbonate (DMC) [4], dimethyl formamide (DMF) [5], etc. Amongst them, the most widely used plasticizers are EC and PC, due to their low molecular weight, low viscosity, high dielectric constant and high boiling point.
However, not all plasticizers are found to be suitable in every polymer electrolyte system, with some resulting in poor ionic conductivity. According to Bernardo and Burell [1], anti-plasticization effects may occur at certain amounts of plasticizer. This phenomenon was observed in our previous work [6], in which the addition of EC reduced the ionic conductivity of PMMA/ENR 50/LiCF3SO3 from 10−6 S/cm to 10−8 S/cm at room temperature. In this EC-plasticized system, we found that EC was incompatible with the epoxidized natural rubber, causing the rubber to coagulate and hence reducing the ionic conductivity relative to the unplasticized system. The high dielectric constant of EC (ε = 95) may increase the number of "free" mobile ENR 50 chains, resulting in chain entanglement and hence leading to the formation of coagulates. Since ENR 50 was able to enhance the ionic conductivity (10−5 S/cm) [7] and improve the mechanical strength and adhesion properties of the PMMA film electrolyte, it is important to further improve the ionic conductivity of the PMMA film, which was proven to exhibit good interfacial properties towards lithium electrodes [8,9]. Therefore, the DMC plasticizer, which has a much lower dielectric constant (ε = 3) than EC, was chosen in this study to avoid the formation of excessive ENR 50 coagulation that hinders the migration of ions in the blend system. Though DMC also yielded lower ionic conductivity than the unplasticized blend system, the phenomenon that occurred in this DMC-plasticized system was not the same as in the EC-plasticized system. To the best of our knowledge, the DMC plasticizer may yield a high ionic conductivity of 10−3 S/cm at room temperature in other PMMA-based electrolyte systems [10], but not when ENR 50 is blended with it. Therefore, this work emphasizes the factors that occur in this kind of polymer blend system.
Film Preparation
(ALDRICH) and dimethyl carbonate, DMC plasticizer (BDH), were used without further purification. ENR 50 was obtained from Guthrie Polymer Sdn. Bhd., Siliau, Negeri Sembilan, Malaysia. PMMA and ENR 50 stock solutions were prepared separately by dissolving the polymers in THF with continuous stirring using a magnetic stirrer. Fixed volumes of the two polymer solutions were then mixed in a beaker containing a fixed amount of LiCF3SO3 salt. Various amounts of DMC plasticizer were added into the solution mixture. The mixtures were then stirred for about 24 hours. All the preparation steps were done in a glove box. The electrolyte solutions were then cast into glass petri dishes and left to dry by solvent evaporation at room temperature. The films obtained were further dried in an oven at 50 °C for another 48 hours. The solvent-free films were then kept in a desiccator until further use.
Material Characterizations
The morphology of the films was investigated under a LEO Field Emission Scanning Electron Microscope. The FTIR spectra of the thin films were obtained with a SHIMADZU FTIR 8300 Fourier Transform Infrared Spectrophotometer in the frequency range of 4000-400 cm−1. The conductivity measurement was performed with a Hioki 3532-50 LCR HiTester Impedance Spectroscopy unit over a frequency range of 100 Hz to 1 MHz, from room temperature to 359 K.
Results and Discussions
All DMC-plasticized PMMA/ENR 50/LiCF3SO3 electrolyte films were transparent and stable at room temperature. Interestingly, though phase separation could still be observed on the surface of the films, it was greatly diminished and insignificant.
FESEM Studies on the Morphology of Plasticized PMMA/ENR 50/LiCF3SO3 Films
From observation, as the volume of DMC plasticizer increased, the film became congested (Figure 1(b)) with dissolved lithium salt that spread throughout the entire volume of the blend, hence reducing the gap between the PMMA and the ENR 50 phases. This may explain why, at higher volumes of DMC plasticizer, the two phases became almost invisible and a physically better appearance of the blend film was obtained. Large craters were formed in the SEM micrographs of the blend when DMC plasticizer was added (Figure 1). The formation of craters was also observed in the un-doped blend [6] when ENR 50 suppressed the globular structures of PMMA [6] as it penetrated the PMMA phase. The larger craters formed in these plasticized blends support the increased flexibility of the polymer chains, which allows them to merge into each other's phase. Therefore, more globular structures of PMMA were suppressed by ENR 50.
FTIR Analysis on Plasticized PMMA/ENR 50/LiCF3SO3 Electrolytes
It was found that the intensity of the -OH band at ~3500 cm−1 (Figure 2), which relates to the occurrence of H-bonding, was reduced in the presence of the plasticizer, indicating a reduction in the amount of interchain crosslinking between the two polymers. This confirms that the addition of plasticizer increased the flexibility of the polymer chains.
It was found that the intensities of the νs(CF3) and νs(SO3) peaks at ~1350 cm−1 and ~1033 cm−1 were reduced when the DMC plasticizer was added into the system (Figure 3), indicating a marked reduction in the number of free ions due to the formation of ion pairs or ion aggregates as a result of ion congestion. Furthermore, there was no significant shift or change in the intensity of the carbonyl peak of PMMA at ~1720 cm−1 (Figure 3) as the amount of DMC plasticizer was increased, suggesting that no further complexation occurred between the polymer and the salt.
The reduction in the intensity of the epoxy bands of ENR 50 at ~1250 cm−1 and ~838 cm−1 (Figure 3) and the disappearance of the carbonate band of the plasticizer at ~1759 cm−1 (Figure 3) may suggest the occurrence of polymer-plasticizer interactions via polar linkages [6], owing to the plasticizer's affinity towards ENR 50. However, these polar linkages are weaker than the inter- or intramolecular hydrogen bonding.
The data in Table 1 support the results obtained in the FTIR analyses: the conductivity of the plasticized systems is lower than that of the unplasticized system because of the lower number of charge carriers resulting from the ion congestion observed in the SEM micrographs of the plasticized system at higher amounts of plasticizer. The presence of craters may also trap the lithium ions in their vicinity, hence hindering ion migration. This may also explain why the activation energies, Ea, obtained for these plasticized systems are slightly larger than for the unplasticized system (Table 1).
Though various factors contribute to the lower ionic conductivity of this DMC-plasticized PMMA/ENR 50/LiCF3SO3 system, it is still slightly higher than that of the EC-plasticized system (3.84 × 10−7 S/cm) [6], because no large coagulated structures were formed in the system.
Since the regression (r²) values for all the plots of ln(σ) versus 1000/T (Figure 4) for these plasticized systems lie in the range of 0.96 to 0.999, the points can be considered to lie almost on a straight line. This implies that the conductivity behaviour of this system as a function of temperature can be fitted by the Arrhenius law, in which ion transport is similar to that in ionic crystals.
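A minimal Python sketch of this Arrhenius analysis is given below: ln(σ) is regressed on 1000/T, the activation energy follows from the slope, and r² measures the linearity. The conductivity values are invented placeholders, not the study's measurements.

```python
# Hedged sketch: estimating the activation energy Ea from ln(sigma) vs
# 1000/T, mirroring the Arrhenius analysis described in the text.
import numpy as np

K_B = 8.617e-5                                       # Boltzmann constant, eV/K
T = np.array([298, 313, 328, 343, 359])              # K
sigma = np.array([2.1e-7, 5.5e-7, 1.3e-6, 2.8e-6, 5.9e-6])  # S/cm (assumed)

x = 1000.0 / T
y = np.log(sigma)
slope, intercept = np.polyfit(x, y, 1)
r2 = np.corrcoef(x, y)[0, 1] ** 2

ea_eV = -slope * K_B * 1000   # slope is taken against 1000/T, hence the factor
print(f"Ea = {ea_eV:.2f} eV, r^2 = {r2:.3f}")
```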
Conclusion
The addition of DMC plasticizer to PMMA/ENR 50/LiCF3SO3 was found to reduce the ionic conductivity of the unplasticized electrolyte because the plasticized system contained a lower number of charge carriers, due to the formation of ion pairs and ion aggregates or the trapping of charge in the vicinity of the craters. Therefore, it can be concluded that the value of the dielectric constant of a plasticizer is important when dealing with a polymer electrolyte system containing ENR 50. | 2018-05-07T14:13:10.198Z | 2013-11-15T00:00:00.000 | {
"year": 2013,
"sha1": "c765ae1d7626d2cb3a59946f75f3ea739bd1719e",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=39632",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "c765ae1d7626d2cb3a59946f75f3ea739bd1719e",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
260853972 | pes2o/s2orc | v3-fos-license | Old Photos Restoration by Using VAE
. The VAE is a generative model that "provides a probabilistic description of observations in latent spaces". Put simply, this means that a VAE stores latent attributes as probability distributions. The idea of variational auto-encoders (VAEs) is deeply rooted in the methods of variational Bayesian inference and graphical models. This piece of work discusses the VAE structure, the VAE loss function, VAE translation, and our final results.
Photos are important to people, as they are taken to store memories and important moments. They serve as flashback points for people. As technology advances, contemporary people tend to take digital photos and store them online, which preserves the quality of the photos and makes them easily accessible. However, many old photos taken in the last century or even earlier were captured with analog cameras and printed on paper. Although they were preserved, many of them have degraded over time.
Though some light degradation can be removed by specialists, photos with severe scratches, loss of color, and holes are unlikely to be restored manually. Hence, algorithms are required to restore old photos. Currently, there are multiple algorithms for old photo inpainting, among which the method [1] proposed by the Microsoft team involves variational auto-encoders and renders decent results, bringing old photos back to high resolution and removing most scratches.
When one thinks of machine learning, the first thing that most likely comes to mind is various algorithms. Discriminative models, which predict labels or categories of input data based on their features, are at the heart of all classification and prediction solutions. In contrast to these models, generative algorithms help us tell a story about the data and provide possible explanations for how the data was generated. Unlike discriminative algorithms that map features to labels, generative models attempt to predict features given labels [2].
A standard auto-encoder consists of two similar networks, an encoder and a decoder. The encoder takes the input and transforms it into a smaller representation that the decoder can use to transform it back to the original input. The latent space into which they transform the input, and the space in which their encoding vectors reside, may not be continuous. This is a problem for generative models, since we want variations of the input image to be generated by sampling randomly from the latent space, i.e., from a continuous latent space. This work shares how the two auto-encoders work and how they differ in essence from discriminative models.
Method for Old Photo Restoration
VAE Structure
The key factor in the method is that it stores synthesized old photos with degradation in one space X, real photos in one space R, and old photos without degradation in space Y. Figure 1 shows the whole VAE structure. In Figure 1, real photos and synthesized old photos are put into their own latent spaces, which share the same domain because both types of photos are corrupted and share some constraints. The photos without degradation are put into another latent space. The method then trains two VAEs; with the VAEs, images are transformed into a compact latent space [3].
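A minimal PyTorch sketch of the two-VAE setup described above is given below. The flat-vector input and layer sizes are illustrative assumptions and do not reproduce the architecture of the original method [1].

```python
# Hedged sketch: two minimal VAEs, one for the corrupted-photo domain and
# one for the clean-photo domain. Layer sizes are illustrative only.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, in_dim=64 * 64, z_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(512, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(),
            nn.Linear(512, in_dim), nn.Sigmoid(),
        )

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients flow through mu/logvar.
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar

vae_corrupted = VAE()   # shared by real and synthetic old photos
vae_clean = VAE()       # clean, undegraded photos
```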
VAE Loss Function
The method utilizes a two-part loss function as a metric to assess the quality of the output. The first part is the generative loss, which compares the model output with the model input. The second part is the latent loss. This loss compares the latent vector with a zero-mean, unit-variance Gaussian distribution. It penalizes the VAE if it starts to produce latent vectors that are not from the desired distribution [4].
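The two-part loss can be written compactly as follows; this is the generic VAE objective with the closed-form Gaussian KL term, and the relative weighting of the two parts is an assumption rather than the paper's setting.

```python
# Hedged sketch of the two-part VAE loss: reconstruction term plus the
# closed-form KL divergence between q(z|x) = N(mu, sigma^2) and N(0, I).
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar, kl_weight=1.0):
    # Generative (reconstruction) loss: compares model output to input.
    recon = F.mse_loss(recon_x, x, reduction="sum")
    # Latent loss: penalizes q(z|x) drifting from the unit Gaussian prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl_weight * kl
```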
VAE Translation
Once each domain contains photos, the process of VAE translation begins. Figure 2 shows the process of VAE translation. In Figure 2, the VAE leverages data from three domains: real old photos, synthetic images, and restored photos. The translation is performed in latent space. The mapping between the two latent spaces is then learned with the synthetic image pairs, which restores the corrupted images to clean ones [5].
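The latent-space translation can be sketched as a small mapping network trained on paired synthetic/clean latent codes, for example as below; the network size and training details are assumptions, not the method's actual configuration.

```python
# Hedged sketch: a small MLP mapping corrupted-domain latent codes toward
# clean-domain latent codes, trained on synthetic pairs.
import torch
import torch.nn as nn

z_dim = 64
T = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
opt = torch.optim.Adam(T.parameters(), lr=1e-4)

def translation_step(z_synth, z_clean):
    """One gradient step pulling T(z_synth) toward the clean latent code."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(T(z_synth), z_clean)
    loss.backward()
    opt.step()
    return loss.item()
```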
Outcome of VAE
The method synthesized old photos from the Pascal VOC data set using human-made degradation. After running, it outperforms most state-of-the-art algorithms. The detailed results are shown in Table 1. This work managed to reproduce the method by using its open-source code [6].
This work uses a subset of photos from the 2012 Pascal VOC data set together with our own gathered old photos to train the model. The result of our run is similar to the result provided by the team. However, the current deficiency of the model is that it fails to restore photos with severe scratches and discards some details of old photos. This work changed some parameters in the code, but no significant improvement was found [6].
Conclusion
In recent years, the craze for artificial intelligence has swept the world, and many excellent and iconic achievements have been made. However, the development of AI has to rely on huge databases. In AI computing, the quality of the database sometimes outweighs the importance of the algorithm itself. At present, there is a big problem with databases: how to update the samples so that the computer can recognize old samples and new samples at the same time. This has become a challenging and significant research topic. In order to solve the above problem, researchers have turned to zero-shot (zero-sample) learning. So-called zero-shot learning aims to identify categories that do not exist in the training set. For example, in order to identify cats, dogs, and pigs, it is necessary to provide a large number of pictures of cats, dogs, and pigs for model training; given a new picture, the model can then determine whether it belongs to the cat, dog, or pig category [7]. However, for categories such as cattle and tigers that did not appear in the training pictures, such a model cannot identify them. In the past decade or so, researchers have worked on zero-shot learning, including the basic definition of the problem, the evaluation of algorithm performance, and algorithm improvement and innovation. However, most of the previous zero-shot learning algorithms only test the new categories, with no exact evaluation of the old categories. That is, when testing, the old categories of cat, dog, and pig are not tested, but only the new categories of cow and tiger. This does not fit the real-life scenario, in which both old and new categories are tested. Therefore, researchers further put forward the more realistic concept of generalized zero-shot learning, in which the old categories are added to the test and the cat, dog, pig, cow, and tiger categories are tested together.
The main approach of zero-shot learning is to establish a connection between visual features and semantic features. Generalized zero-shot learning has also developed three methods to solve this problem on the basis of previous research work. The first is the mapping from visual features to semantic features. In this method, visual features are generally extracted first, then mapped to semantic features through a fully connected layer, and finally the target recognition is realized. However, the mapping of information from high dimensions to low dimensions inevitably leads to a loss of information. In order to alleviate this problem, a second method was proposed, namely the mapping of semantic features to visual features. This method mainly maps semantic features to visual features through data generators. The third method is the cross-mapping of the two feature variables. The typical approach is to map the two variables into a hidden space; in the same hidden space, the connection between the two kinds of features is realized [7]. Fig. 3 shows two photos after restoration using our trained models. Therefore, to sum up, there are few methods for generalized zero-shot recognition at present, the accuracy and speed of the existing relevant recognition methods are not good, and a large number of methods show uneven results in accuracy and comprehensive performance evaluation. Figure 3 shows partial outcomes after the implementation of the VAE.
Old photo restoration using VAEs is a significant method that can successfully restore many old photos. It provides valid results showing that it outperforms many state-of-the-art methods. Through reproduction, this work confirmed that result. However, the method fails to restore photos containing significant scratches and discards some important details. Although this work tried to adjust some parameters in the code, no significant changes were observed. | 2023-08-13T15:05:15.907Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "c18b74e67e6bcefcf93b181ae5136f1035eceda7",
"oa_license": "CCBY",
"oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2023/23/shsconf_seaa2023_02001.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "596f9a38bacbd989592d4a1e988012f2fdfa5f28",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
255696232 | pes2o/s2orc | v3-fos-license | Review of the Impact of Biofuels on U.S. Retail Gasoline Prices
: This study aims to provide a review of the state-of-the-art literature regarding the impacts and contributions of corn ethanol on retail gasoline prices in the US. For this, a systematic literature review following the PRISMA statement was carried out, seeking to answer four research questions: (1) What are the main characteristics of the literature regarding the impact and contributions of ethanol on US retail gasoline prices? (2) What are the main article clusters identified in the evaluated literature? (3) What was the numerical impact of the Volumetric Ethanol Excise Tax Credit/Renewable Fuel Standard (VEETC/RFS) mandate on the price of gasoline and what are the main methods used for calculation in the literature? (4) What are the main trends and possibly new research directions for this literature? As a result of the characterization of the sample, driving themes, such as energy policy, costs, price dynamics, trade and energy market, were identified. Furthermore, three main clusters were identified in the sample: (i) impacts of biofuels on commodity prices and general price dynamics; (ii) impacts of public policies on the implementation of ethanol and flexibility in formulating fuel blends; and (iii) impact of biofuels on environmental aspects. As a practical implication, the prevailing result in the analyzed literature is that the addition of ethanol reduces the price of gasoline at the pump, and estimates range from no effect to nearly 10% off the price of gasoline. Finally, the topic of the impacts of biofuels on commodity prices and on the general dynamics of prices is the most relevant research line and the trend suggested by the proposed research agenda.
Introduction
The biofuel industry has been growing significantly in recent years around the world, most prominently in the USA, the EU, and Brazil. Originally, biofuels sparked the interest of agricultural economists and policymakers in the last century in the context of replacing fossil fuels and providing energy security, and later also to address climate change, food security, and rural development [1]. Since the turn of the century, biofuels have become a controversial topic in the public domain and in agricultural and energy research, which has evolved into two main trends. The first main body of literature concerns food security and crop prices [2,3], since the primary use of agricultural production has been food consumption. The second concerns ecology and environmental topics [4-7], such as greenhouse gas (GHG) emissions, use of land and water compared to just using conventional fossil fuels, and leaving land for food production or provision of environmental services.
The literature on commodity food prices is mostly concerned with econometric analysis and investigates relationships and common dynamics between the prices of food and biofuels. The main concern is that using agricultural production as a feedstock for biofuels rather than food consumption drives food prices up and causes nutrition crises, particularly in low-income countries. The food crisis between 2008 and 2010 motivated extensive research on this topic [8-11]. The literature generally finds that the relationship between food and ethanol prices is relatively weak, but ethanol prices are affected by both food and fuel prices. Reference [12] offers a comprehensive review of studies and critically compares their results. The authors of [12] argue that standard time-series analysis does not capture the effect of biofuels on food well and that the impact is, in fact, quite heterogeneous across crops and geographical locations. The presented review further argues that the impact of biofuels on food commodities is, in fact, lower than the impact of economic growth and can be well offset by using genetically modified crops.
Condon et al. [13] provides a meta-analysis of estimates of the effect of corn-ethanol on corn prices and shows that increasing the production of corn-ethanol by one billion gallons increases corn prices by three to four percent. Persson [14] then presents a systematic review of the literature similar to ours, but explores the effect of biofuels' energy demand on agricultural commodities, whereas we focus on the so far much-less-investigated effect of ethanol on gasoline prices.
Recently, Lark et al. [15] assessed the environmental effects of the Renewable Fuel Standard (RFS) program, which is the main policy driver behind the increased biofuel production since 2005, even more so after the expansion of the program in 2007. Lark et al. [15] calculated that the mandates motivated higher use of fertilizers and reduced the diversity of U.S. soil by reducing rotation in favor of producing corn. This, in turn, produced substantially greater GHG emissions. Additionally, Lark et al. [15] estimated that higher demand for corn caused inflation of soybean and wheat prices and disputed the potential of the current corn-ethanol production in mitigating climate change. This study, along with [16,17], forms strong criticism of the RFS program, which is well summarized in [18]. These studies argue that while corn-ethanol provides profits for corn farmers and ethanol producers, it comes at a much greater expense to the U.S. taxpayer in the form of financing the subsidies, higher gasoline and food prices, and the overall high costs of climate change and other environmental damage, such as that to water and air quality. Those recent studies presented conclusions contradictory to the meta-analysis presented by [19]. Consider also the GHG discussion in [20]. One of the substantial changes in time between the studies is the shift in the U.S. position from a net oil importer to an exporter in 2020, which, according to [18], reduces the necessity of the RFS program.
The biofuel policy debate is ongoing and evolving rapidly and substantially. We take the rich discussion presented above as evidence not only of the complexity of the biofuel topic but also of the evolution of results over time. In this article, we add to the discussion on price impacts; more specifically, we review the literature concerning the impact of blending ethanol into gasoline in the U.S. Our systematic literature review identifies the methods used in the research and their contribution to modeling ethanol's effect. This study aims to provide a review of the state-of-the-art literature regarding the impact and contributions of corn ethanol on retail gasoline prices in the US. To assist in achieving this goal, we propose four research questions (RQ):
1. What are the main characteristics of the literature regarding the impact and contributions of ethanol on US retail gasoline prices?
2. What are the main article clusters identified in the evaluated literature?
3. What was the numerical impact of the Volumetric Ethanol Excise Tax Credit/Renewable Fuel Standard (VEETC/RFS) mandate on the price of gasoline and what are the main methods used for calculation in the literature?
4. What are the main trends and possibly new research directions for this literature?
This article is structured into four sections. In Section 2, we present the used methodology, along with the descriptors. Each step of the methodology and the descriptors are carefully explained. The results and a discussion are presented in Section 3, which is divided into four subsections. Finally, conclusions and corresponding recommendations are provided in Section 4.
Materials and Methods
The systematic literature review (SLR) can be defined as a structured review process that allows others to replicate and validate the research conducted and exactly follow the path chosen for the research [21]. In this way, SLR differs from a traditional exploratory review, reducing the researcher's subjectivity and resulting in a scientific, transparent, and replicable process [22]. In the SLR proposed in this study, we followed the instructions of the PRISMA statement, in addition to five steps recommended in the literature [23]. In simple terms, SLR can be defined as a systematic process composed of three phases: input (i), processing (ii), and output (iii) [24,25], as shown in Figure 1. In the input phase, we define the research problem and objectives. During the processing part, we search for studies in the databases, construct search strings, and define exclusion or inclusion criteria, using which we then apply filters to assist us in the analysis of results. We then proceed to document the results. In the output phase, we produce tables and figures which summarize the obtained results.
Figure 1. Model for conducting a systematic literature review. Adapted from [25,26].
This section is dedicated to providing a detailed description of the steps we followed in conducting the SLR used to answer the research questions (RQ) presented in the previous section.
In the Input phase, we define the research problem and its objectives along with studies relevant to the literature. We identify the main keywords of the publications that would contribute to the discussion about the appropriate search strings for performing the SLR. It is important to note that the proposed research questions serve to guide the development of the research and the presentation of results. For this, due to its sufficient acceptance and breadth, the Scopus database (from Elsevier) was selected.
After carrying out exploratory attempts, we adopted the search strings presented below, considering the Boolean logic "and" between levels (1.), (2.), and (3.). The use of the symbol " " guarantees the exact sequence of words. Finally, some variations, such as plural and singular forms, were considered.
3. Paper title, keywords or abstract ("gasoline price" or "fuel price" or "gas price" or "petrol price" or "petroleum price" or "retail price" or "gasoline market" or "fuel market" or "gas market" or "petrol market" or "petroleum market" or "petroleum product market" or "wholesale" or "price support")
It is pertinent to point out that we used the term "corn", since the research focuses on North American ethanol, along with the use of "Midwest". In this way, we used the term "corn" in the geographic section of the filter to capture studies that deal with corn ethanol and that, for some reason, do not have the U.S. (or similar) as a descriptor in the title, abstract, or keywords. We used the bibliometric analysis software VOSviewer and the R package Bibliometrix [27]; for the evaluation, synthesis of results and information, and graphical interpretation of the results, we used Microsoft Excel.
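For illustration, the boolean combination of the three levels can be assembled programmatically, as in the Python sketch below. Since levels (1.) and (2.) are not reproduced here, they are left as explicitly hypothetical placeholders; only level (3.) is quoted from the text, and TITLE-ABS-KEY is Scopus's field code for title, abstract, and keywords.

```python
# Hedged sketch: assembling the three-level Scopus query. level_1 and
# level_2 are hypothetical placeholders for the strings not shown here.
level_1 = '(...)'  # hypothetical: ethanol/biofuel descriptors
level_2 = '(...)'  # hypothetical: U.S./corn geographic descriptors
level_3 = ('("gasoline price" OR "fuel price" OR "gas price" OR '
           '"petrol price" OR "petroleum price" OR "retail price" OR '
           '"gasoline market" OR "fuel market" OR "gas market" OR '
           '"petrol market" OR "petroleum market" OR '
           '"petroleum product market" OR "wholesale" OR "price support")')

query = f"TITLE-ABS-KEY({level_1} AND {level_2} AND {level_3})"
print(query)
```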
In the processing phase, we proceeded to define the eligibility criteria while ensuring that the sample responds adequately to the formulated RQs. The inclusion and exclusion filtering procedure was conducted by all co-authors of this study in sequence, thereby ensuring the quality of the final sample.
Figure 2 illustrates the delimiting filters of the sample used. In a search carried out in September 2022, the search strings resulted in 202 publications in the Scopus database. After reading the title, abstract, keywords, and search results, we reduced the list to 130 articles, since part of the initial sample was outside the scope of the research. After an initial read of the results and conclusions, we applied the second filter and obtained a sample of 112 articles. Finally, the articles were subjected to a complete reading, and we narrowed down the sample to 109 articles. We list the most important exclusion criteria used in the processing phase:
(a) Studies from foreign countries (such as Brazil, Argentina, Mexico, EU, Thailand, etc.) whose ethanol comes primarily from sugar-related feedstocks;
(b) Evaluation of different biofuel feedstocks (cellulosic, lignocellulosic, agricultural biomass, oilseeds, etc.);
(c) Studies focused on other issues (food price impact, greenhouse gas impact, ethanol blending, government impact and opinions about subsidies, etc.);
(d) Studies from other fields (chemistry, the technology of production, etc.).
The output phase is dedicated to the analysis and synthesis of the results, which we interpret and discuss in detail in the following section.
Sample Characterization
To answer RQ1 (what are the main characteristics of the literature regarding the impact and contributions of ethanol on US retail gasoline prices), we start with the temporal distribution of the articles. Figure 3 presents the annual distribution of articles in the sample. This figure also displays the percentage of the sample in the general literature on the topic, that is, when search string (ii) is removed, without any restriction by country or area (obtaining the ratio of the publications related to the U.S. to the world). It is important to highlight the interest in the subject in the U.S. in comparison to the worldwide literature. Even though we can observe a greater interest in the topic between 2009 and 2012, the following analysis will show that this topic is still very relevant and important to researchers.

Figure 4 presents the main scientific journals that have at least three articles present in our sample. The journals with the highest number of publications are Energy Policy, Energy Economics, and the American Journal of Agricultural Economics. There is an evident dominance of journals in the areas of energy, agriculture, and others more specific to ethanol and biofuels. Interestingly, the shortlist also includes the Journal of Environmental Economics and Management, which has a broader scope and is not exclusively focused on the above-mentioned areas.

Figure 5 represents the fourteen most cited articles in the sample. The average number of citations per year provides a view of citations over time and interprets the results in a way that highlights the most recently published articles. The authors Hill [6] and Demirbas [7] dominate the figure, surpassing 2000 and 800 citations, respectively. Studies such as Zilberman [12] and de Gorter and Just [28] are also very relevant, with over 140 citations each.
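For clarity, the citations-per-year normalization used in Figure 5 amounts to the simple calculation sketched below in pandas; the records are invented examples, not the sample's actual data.

```python
# Hedged sketch: average citations per year, as used to rank articles in
# Figure 5. The records below are illustrative placeholders.
import pandas as pd

df = pd.DataFrame({
    "article": ["Hill [6]", "Demirbas [7]", "Zilberman [12]"],
    "year": [2006, 2008, 2013],
    "citations": [2000, 800, 150],   # illustrative totals
})
current_year = 2022
df["cites_per_year"] = df["citations"] / (current_year - df["year"] + 1)
print(df.sort_values("cites_per_year", ascending=False))
```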
In view of the extensive number of citations of the articles presented in Figure 5, we present below a brief summary of their contents. These include different scopes, such as existing relationships and the impact of biofuels on commodity food prices [12,29-31], the environmental impacts of biofuels [6,32,33], and policy issues and their implications [13,34].
1. [6] The study carried out an environmental and economic assessment of energy costs and the benefits of biodiesel and ethanol biofuels. Through life cycle assessment, the study evaluated corn ethanol and soybean biodiesel. The main finding is that, compared to fossil fuels, biofuels have a lower environmental impact. However, no biofuel had the ability to replace oil without affecting food supplies, and subsidies are needed to make biofuels profitable.
2. [7] The manuscript presents definitions, details, compositions, production information, use, and future perspectives that address biofuel sources, biofuel policy, the biofuel economy, and global biofuel projections. The study considers scenarios of the impacts of biomass on the world economy.
3. [35] The authors argue, using a conceptual model with back-of-the-envelope estimates, that ethanol subsidies in the short run actually pay for themselves and that the impact of the production of biofuels from food feedstock will be bigger on food prices than on energy prices.
4. [12] The study used time series econometrics to assess the impact of biofuels on commodity food prices. The main finding is that the price of ethanol increases as the prices of corn and gasoline increase. The study also found that ethanol prices are positively related to sugar and oil prices in equilibrium.
5. [28] The study presents a conceptual framework that allows analyzing the economics of a mandate for biofuels and evaluates the economic implications of the combination with a tax credit. Results indicate that tax credits result in lower fuel prices than under a mandate for the same level of biofuel production. If tax credits are implemented along with mandates, tax credits would subsidize fuel consumption instead of biofuels, thereby creating an effect contrary to the energy policy objectives.
6. [29] The study evaluated price relationships and transmission patterns in the US ethanol industry between 1990 and 2008. The research describes the relationships between corn, ethanol, gasoline, and oil prices. Overall, the results indicate a strong relationship between food and energy prices.
7. [36] In an extensive literature review, the article assesses the impacts of biofuel production and other supply and demand factors on rising food prices. The results indicate that the production of biofuels had a smaller contribution to the increase in the prices of food commodities until 2008.
8. [32] The study assessed the environmental impacts of biofuels. The results indicate that ethanol produced from biomass offers environmental and economic benefits and is considered a cleaner and safer alternative than fossil fuels.
9. [30] The study proposes a multivariate modeling framework to assess short- and long-term relationships among corn, soybean, ethanol, gasoline, and oil prices. The paper evaluates whether these relationships change over time. The results indicate that in recent years, there have been no long-term relationships between agricultural commodity prices and fuel prices.
10. [34] This study proposes a framework to assess the effects of a tax exemption on the biofuel consumer and the interaction effects with a price-contingent agricultural subsidy. The authors found that the tax credit reduces the costs of the loan fee program, but this increased the costs of the tax credit.
11. [37] This study analyzed whether farmers prefer a direct subsidy for corn production or rather a subsidy for the ethanol produced from corn. The study used a vertical model of ethanol, byproducts, and corn and found that farmers are better off with direct corn subsidies.
12. [33] The authors propose the use of economic models, applied especially in the US, to assess the effects of biofuel policies on petroleum product markets and their consequences for greenhouse gas emissions.
13. [13] The study proposes a literature review and a meta-analysis model to assess the impacts of ethanol policy on corn prices between 2007 and 2014. The results indicate that an expansion of the corn ethanol mandate can lead to an increase of 3 to 4 percent in next year's corn prices.
14. [31] The study, through a literature review, evaluated the corn ethanol industry, its impacts on food prices, and the role of biotechnology in the U.S. Among their findings, the authors identified that biotechnology had little impact on the biofuel sector.
We consider the number of citations of each publication in Figure 6, where the citation treemap presents hierarchical data (a structured tree) as a set of nested rectangles. The area of each rectangle is proportional to the number of citations the manuscript has in the sample. This map aims to visually represent the disproportion between the number of citations of the two most cited articles in the sample and the other included studies. The discrepancy shown in Figure 6 justifies the removal of the studies proposed by [6,7] for the elaboration of Figure 7, whose objective is to present the distribution of citations over time of the most cited articles in the sample, complementing the information provided in the enumeration above. For example, authors such as Rajagopal et al. [35] and de Gorter and Just [28,34] have high numbers of absolute citations but have lost their influence in more recent publications, given the reduction in citations per year. Another example is the study by [32], which received a large number of citations in 2011 and 2012, establishing itself among the most cited in the sample; however, in recent years, it has received a low number of citations. At the same time, other authors, such as [29,36], have maintained their influence in recent publications. Finally, ref. [12], and more recently ref. [13], have stood out in recent years.

Differently from the previous graphs, which were dedicated to publications, Figure 8 presents the individual authors or co-authors most representative in the sample, with the largest numbers of publications. Among these, Zilberman D. and Thompson W. stand out, with ten and eight articles, respectively. Next, Hochman G. and Rajagopal D. are identified, present in seven publications each.

Figure 9 shows the tree-field plot, establishing relationships between the most frequent journals in the sample, the main authors, and the keywords. Thompson, one of the most relevant authors in the sample, has had his studies published in journals such as Energy Policy, Eurochoices, and The Economics of Alternative Energy Sources and Globalization. This author has used terms such as "ethanol", "greenhouse gas emissions", "renewable fuel standard", "biofuel mandates", and "gasoline" as keywords in his studies. From the same perspective, Zilberman, another relevant author on the topic, has published in journals such as Agricultural Economics, the American Journal of Agricultural Economics, and Agbioforum. The main keywords included in his works are "biofuels", "greenhouse gas emissions", "energy prices", "energy policy", "climate change", and "corn ethanol".

Figure 10 represents the thematic mapping, allowing the visualization of different types of themes [38]. In the thematic map, we use the keywords of the articles in the sample, where the keywords are defined by a semi-automated algorithm under the responsibility of Thomson Reuters specialists, which is capable of capturing the content of an article with greater variety and depth [39]. The upper-right quadrant of Figure 10 represents themes with a higher degree of development (density) and relevance (centrality), seen as key themes in the literature, among which "Energy Policy" and "costs" stand out. As expected, another key theme found in this analysis was "United States", defined as one of the keywords in the search strings. Apart from those, other driving themes are "price dynamics", "commerce", and "energy market". Declining or emerging themes are located in the lower-left quadrant; in this research, the results suggest that the topic "energy utilization" is an emerging topic. The lower-right quadrant shows the sample's basic themes. These themes refer to general themes in the different areas of investigation; they include "ethanol", "biofuel", "zea mays", "biomass", "carbon dioxide", and "biodiesel" from our sample. Finally, the upper-left quadrant shows themes of high density but of lesser importance to the sample, or of limited importance to the field (low centrality). Within these themes, "agriculture", "economic development", "energy independence", "energy security", "Environmental Protection Agency", and "fuel prices" are the ones that stand out.

In sequence, we created Figure 11 using the VOSviewer software; it is based on the co-occurrence information of the authors' keywords [40]. In this figure, the node sizes represent the number of times these keywords were used by the articles in the sample; the connecting lines indicate that these keywords were used in the same publication, and the colors are related to the year of publication. The relevance of the topics "Renewable Fuel Standard" and "policy" stands out, even though they were not included in the search strings. This network also allows the identification of trending topics for the area, as they represent interests in recent research, such as "retail fuel spreads", "pass-through", "fuel markets", "E85", or even "energy prices" and "meta-analysis".

Finally, Figure 12 was elaborated from a multiple correspondence analysis, an exploratory multivariate technique applied to the keywords and the articles that make up the sample. The conceptual structure map identifies clusters from articles that express interrelated concepts [27]. The results of this figure are to be interpreted based on the distribution of points and their positions along the dimensions. The closer the keywords are in the figure, the greater their similarities in distribution. The figure allows the identification of new latent variables from the formation of clusters in a set of categorical variables. In this way, we identify two distinct clusters. The first cluster (in red) seems to be more relevant due to its size and centrality in relation to the dimensions. The red cluster contains important keywords, such as "price dynamics", "commodity price", "gasoline prices", "blending", "taxation", and "subsidy system", which are terms associated with the price and market dynamics of biofuels in the U.S. In the second cluster (in blue), keywords such as "economics", "energy security", "public policy", and "gas emissions" are highlighted as terms associated with the development of public policies for the implementation of biofuels and their environmental impact. This split corresponds to the exploratory and introductory review we provide in the Introduction.
Predominant Cluster Structure
In order to answer RQ2 (what are the main article clusters identified in the evaluated literature?), we used content analysis together with mapping and clustering techniques, as these are frequently used in SLR studies [41,42].
Through the use of clustering techniques, it is possible to present a map that highlights areas corresponding to the identified clusters of nodes. Using the VOSviewer software, we calculated a bibliographic coupling network (for more, see [41]), whose graphical results are shown in Figure 13. In this analysis, the relationship between two studies was determined by the degree to which the articles cite the same references (a sketch of this computation follows the list below). Upon establishing the clusters, we analyzed the content of the articles, focusing on the title, abstract, introduction, and conclusion. This analysis aims to identify common interests and themes, from which the following predominant clusters were identified:

1. Impacts of biofuels on commodity prices and overall price dynamics;
2. Impacts of public policies on the implementation of ethanol and flexibility in the formulation of fuel blending;
3. Impact of biofuels on environmental aspects.
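As a minimal sketch of the coupling computation referenced above: two articles are linked with a strength equal to the number of cited references they share (Kessler's bibliographic coupling). The article identifiers and reference lists below are hypothetical.

```python
from itertools import combinations

# Hypothetical reference lists for three articles in the sample.
references = {
    "A": {"r1", "r2", "r3", "r4"},
    "B": {"r2", "r3", "r9"},
    "C": {"r7", "r8"},
}

# Coupling strength = size of the intersection of the two reference sets.
coupling = {
    (p, q): len(references[p] & references[q])
    for p, q in combinations(references, 2)
}

# Non-zero coupling strengths become the weighted edges clustered in Figure 13.
print({pair: w for pair, w in coupling.items() if w > 0})  # {('A', 'B'): 2}
```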
It is important to note that, as the clustering was based on coincidental (shared) references, articles located in the transition region between the main clusters may address themes inherent to more than one cluster.
Impacts of Biofuels on Commodity Prices and Overall Price Dynamics
Within the first cluster, the authors of [43] considered the North American scenario and evaluated how the increase in corn-ethanol production impacts natural gas prices. They presented a two-stage least squares structural model to project two scenarios: (i) one in which current policies, including tariffs, tax credits, and mandates, are disregarded; and (ii) one in which ethanol is produced only for use as a mandatory additive. The results indicate that the price of natural gas can increase by up to 0.25% and 0.5% in the first and second scenarios, respectively.
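The structural model in [43] is far richer, but the mechanics of two-stage least squares can be illustrated in a few lines: an instrument first predicts the endogenous regressor, and the fitted values replace it in the outcome regression. All variables and coefficients below are simulated assumptions, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Toy data: ethanol output is endogenous; a policy shifter serves as instrument.
instrument = rng.normal(size=n)                       # e.g., mandate intensity
confound = rng.normal(size=n)                         # unobserved demand shock
ethanol = 1.0 * instrument + confound + rng.normal(size=n)
gas_price = 0.5 * ethanol + confound + rng.normal(size=n)

def ols(X, y):
    # OLS with an intercept via least squares.
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress the endogenous variable on the instrument.
a0, a1 = ols(instrument, ethanol)
ethanol_hat = a0 + a1 * instrument

# Stage 2: regress the outcome on the stage-1 fitted values.
b0, b1 = ols(ethanol_hat, gas_price)

print(f"naive OLS slope: {ols(ethanol, gas_price)[1]:.2f}")  # biased upward
print(f"2SLS slope:      {b1:.2f}")                          # close to the true 0.5
```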
In another study, Whistance et al. [44] analyzed the effects of ethanol policy on natural gas prices and quantities, focusing especially on the impacts of the ethanol tariff, mandates, and tax credits. The results indicated an increase in corn production, which in turn tends to raise natural gas prices.
Zilberman et al. [12] investigated the relationship between food and fuel markets.According to the authors, the ethanol market provides a strong link between the corn and energy markets, and the price of ethanol increases as corn and gasoline prices increase.Finally, the study concludes that ethanol prices are positively related to sugar and oil prices.
Whistance and Thompson [45] also analyzed the price relationships between ethanol and gasoline and between corn and gasoline under mandatory and non-mandatory RFS scenarios. The authors found evidence that these price relationships are weaker when the RFS is mandatory.
Another example of a study in this cluster is that of [46], which assesses the impacts on fuel prices and the compliance costs associated with the RFS. The article proposes a regional market model to quantify the price impacts for several market variables. Among the results, Christensen and Siddiqui [46] identified that the RFS does not have a substantial impact on the retail prices of gasoline and diesel.
Impact of Public Policies for the Implementation of Ethanol and Flexibility in the Formulation of Fuel Blending
Based on the second cluster identified, Liu and Greene [47] argue that a good understanding of the factors affecting demand for E85 is needed in order to develop effective policies for promoting E85 and to build models that predict sales of this product in the U.S. To this end, the authors estimated the sensitivity of aggregate demand for E85 to the prices of E85 and gasoline and to the relative availability of E85 versus gasoline, concluding that the latest data allow a better estimation of demand and indicate that the price elasticity of E85 is substantially higher than previously estimated.
Lade and Bushnell [48] studied the pass-through of the E85 subsidy to U.S. retail fuel prices.The authors argued that the RFS relies on taxes and subsidies to be passed on to consumers to stimulate demand for biofuels and decrease demand for gasoline and diesel.They concluded that between 50% and 75% of the E85 subsidy was passed on to consumers and that the pass-through takes approximately 6 to 8 weeks, with retailers' market structure influencing both the speed and level of pass-through.
Ghoddusi [49], through a quantitative assessment, measured the risks of price changes for biofuel producers in a deregulated market, presenting a set of risk management strategies that are fully applicable to protecting the biofuels sector.
From a different perspective, Westbrook et al. [50] assessed whether the U.S. is able to meet the RFS targets without an enforcement mechanism.The authors proposed a parametric analysis of ethanol use for the domestic vehicle sector.The results indicate that the RFS program's goals to reduce fossil-fuel consumption, and consequently, GHG emissions, can be achieved by improving vehicle efficiency.
Impact of Biofuels on Environmental Aspects
Allocated to the third cluster, Sexton et al. [51] analyzed the impacts of increased biofuel production on food and fuel markets. They argue that the current production of biofuels generates a conflicting relationship between food and fuel, as it raises the cost of food while reducing the cost of gasoline. The study concludes that agriculture has to provide both food and fuel, generating a need for constant improvement in its productivity, and that biotechnology plays a fundamental role in achieving this improvement.
Acquaye et al. [52] used four scenarios to analyze the potential of biofuels to reduce UK emissions. The authors used a hybrid lifecycle assessment developed in a multi-regional input-output (MRIO) framework and concluded that, in order to achieve the emission reduction determined by the Low Carbon Transition Plan (LCTP), 23.8% of the transport fuel market would need to be served by biofuels by 2020.
Piroli et al. [53] applied a time-series analysis to the five main agricultural commodities, the cultivated area, and the price of crude oil in order to study the impacts of changes in land use caused by the production of biofuels in the U.S. The authors conclude that the markets for crude oil and cultivated agricultural land are interdependent. Moreover, they claim that the increase in biofuel production causes changes in land use, which subsequently causes food commodities to be replaced by crops intended for biofuel production.
More recently, Suh [54] examined the effects of replacing fossil fuels with biofuels on carbon dioxide emissions in the U.S. transportation sector.The author proposes that ethanol is a substitute for oil and a complement to natural gas, while natural gas is a substitute for oil.Furthermore, the author concludes that the price-induced substitution of fossil fuels for biofuels is a critical factor in predicting biofuel-related carbon-dioxide emissions.
Numerical Estimates
We now turn to our sample to analyze numerical estimates of the changes in gasoline prices caused by changes, or rather the absence of changes, to ethanol mandates. We extracted 20 articles that provide numerical results related to our research question. After the initial inspection, we noticed that many of the articles included in our sample are also included in the meta-analysis by [19]. Consequently, we decided to add four articles that were not part of our sample but were included in [19] to further our understanding of the numerical interpretation of the results. It is important to highlight that these four studies are relevant and recognized in the field but were not identified in our search because they are not present in the Scopus database.
First, we briefly discuss the approaches, methodologies, and models used in the aforementioned articles. Figure 14 shows the most frequently used models. The most popular are general and partial equilibrium models, the Biofuel and Environmental Policy Analysis Model (BEPAM), and supply-demand models. Regarding the policies that affect the price of gasoline, the articles mostly use the Volumetric Ethanol Excise Tax Credit (VEETC), created by the American Jobs Creation Act of 2004, and the Renewable Fuel Standard for corn ethanol, established in 2007, as the drivers of the change in the price of gasoline. Some studies, such as [55], inspected many possible outcomes based on different scenarios in which either no mandates are in place for the baseline price or the VEETC, the RFS, or their combination is introduced, changing the outcome by 1-2 percentage points. Other articles, such as [56], took into account only the RFS ethanol mandate and its impact on gasoline prices.
Overall, we identified 13 papers that provide exact numerical results answering our research question RQ3 (what was the numerical impact of the VEETC/RFS mandate on the price of gasoline, and what are the main methodologies used for its calculation in the literature?). Detailed information about the papers in our sample coming from the Scopus database is summarized in Table 1, and Table 2 presents the four papers not included in the Scopus database.
Table 1. This table summarizes publications providing numerical estimates of the impact of ethanol on fuel price. The first column references the publication and the second column the inspected time period. The third column reports the model used, and the Relation column indicates whether ethanol and gasoline are considered to be substitutes (Sub), complements (Comp), or perfect substitutes (pSub). The prevailing result is that the addition of ethanol cuts down the price of gasoline at the pump. However, there is no direct consensus on the size of the discount, not even in proportional terms: the estimates vary from no effect up to an almost 10% discount in the gasoline price, as shown in Figure 15.
Table 2. This table summarizes publications in the analysis of [12] concerned with the impact of ethanol on fuel price or welfare. The first column references the publication and the second column the inspected time period. The third column reports the model used. The Relation column indicates whether ethanol and gasoline are considered to be perfect or imperfect substitutes, and the Results column summarizes the respective study.
Research Agenda
To answer RQ4 (what are the main trends and research opportunities for this literature?), we propose a possible open research agenda based on the results of our SLR. We notice that the term bioethanol has been present in the analyzed sample since 2012 and remains so, especially in association with the terms "commerce" and "energy market", which shows that this type of study is still of interest to current research. Corroborating this statement, Figure 10 (thematic map) presented the driving themes of the studied area, which include, in addition to the terms "commerce" and "energy market" already mentioned, "costs", "energy policy", "price dynamics", and "renewable resource". On this basis, it is possible to point to research topics that have been little explored and have recently started to draw attention, standing out as hot topics for future research: advanced biofuels, biofuel supply chains, transportation biofuels, and issues of budget control and cost management, both in production and in the management of the biofuel supply chain. Additionally, an analysis of the thematic evolution reveals research opportunities involving the control of greenhouse gas emissions and other environmental and climatic aspects.
Still discussing research trends, Figure 11 (keyword co-occurrence map) corroborates the previous discussion and opens horizons for new research on retail fuel spreads and on the composition of E85.
Moreover, Figure 12 (conceptual structure map) points out opportunities for research on public policies related to climatic and environmental issues and energy security. Topics such as sustainable development, price dynamics, blending, demand analysis, and biofuel production have greater centrality; that is, they tend to remain study opportunities.
A clear research opportunity, filling a noticeable and prospective gap in the literature, is indicated by what is missing from the keywords discovered by our search: the issue of electro-mobility. The analysis of the interplay between biofuels and electric vehicles should belong to the "environmental" cluster (cluster 3 in Figure 13). As we already noted, this "environmental" cluster temporally precedes the other two clusters, which expresses the shifting emphasis from belief in a strong positive environmental impact of biofuels to a rather skeptical evaluation of this impact. Additionally, the missing connection between biofuels and electric cars stems from the fact that the focus on electric cars is a recent phenomenon that does not overlap in time with the early biofuel literature assembled in cluster 3 of Figure 13. However, the research question of possible synergies combining the advantages of renewable, agriculturally produced biofuels with those of electric cars definitely deserves attention.
Another interesting research opportunity indicated by missing connections in our bibliometric figures concerns bioethanol as a dominant technological fuel additive. While the technologically oriented literature clearly shows that ethanol is the dominant gasoline oxygenate, a potentially sizeable body of literature is still missing (i.e., not yet written) on the technological and economic lower bound on the share of ethanol in U.S. car fuels if ethanol were used mainly as an oxygenate.
Finally, Figure 16 shows the evolution of the representativeness of each cluster over time. We note that at the beginning of research on the subject, the most influential cluster was the one addressing the impact of biofuels on environmental aspects (cluster (iii)). This scenario has since changed: the figure shows that studies assessing the impacts of biofuels on commodity prices and overall price dynamics (cluster (i)) have attracted the greatest recent interest, followed by assessments of the impacts of public policies on the implementation of ethanol and flexibility in the formulation of fuel blending (cluster (ii)). Accordingly, the topics associated with clusters (i) and (ii) represent the greatest opportunities for future research.
Conclusions
This article presents a state-of-the-art review of the literature on the contributions of ethanol to retail gasoline price changes in the U.S. To this end, we conducted a systematic literature review following established guidelines. We extracted a sample of 109 articles and analyzed it using quantitative bibliometric techniques combined with qualitative content analysis. The novelty of this article is evident, since no previous systematic literature review evaluating the impact of ethanol on the retail price of gasoline was identified.
First, a characterization of the sample was presented through bibliometric techniques, allowing the identification of trends in the explored topic. Furthermore, thematic, conceptual, and co-occurrence maps were constructed and analyzed, in which topics such as energy policy, costs, price dynamics, commerce, and energy market stand out. Additionally, the most significant recent terms have been "retail fuel spreads", "fuel markets", "E85", and even "energy prices" and "meta-analysis".
Second, considering the selected sample and based on grouping techniques, the predominant cluster structures were identified and briefly analyzed, leading to three lines of research: (i) impacts of biofuels on commodity prices and overall price dynamics; (ii) impacts of public policies on the implementation of ethanol and flexibility in the formulation of fuel blending; and (iii) impact of biofuels on environmental aspects. The definitions of these clusters are not given a priori, neither in the specific literature nor through the use of software, demanding an in-depth analysis of the articles in the sample.
Third, general and partial equilibrium models stood out in the sample as the most frequently used to capture changes in gasoline prices caused by changes in ethanol mandates. There is no consensus on the impact of ethanol on the price of gasoline in the U.S. retail market; however, the most frequent results show that the addition of ethanol reduces the price of gasoline at the pump.
Fourth, we show that the impacts of biofuels on commodity prices and overall price dynamics currently constitute the most relevant and trending avenue of research suggested by the analysis of our sample of publications.
Finally, the limitations of the present study involve methodological choices, namely: (1) the use of only one database for extracting articles and (2) the definition of search strings that could exclude works relevant to the study. These limitations were minimized by the following strategies: (1) choosing one of the largest databases of academic works in the world (Scopus), and (2) iteratively adapting the search strings to capture the works most relevant to the studied topic. Another limitation relates to the inclusion and exclusion criteria applied to each article to form the final sample, which we sought to mitigate through the participation of four different researchers.
(a) Formulate research questions that can guide the study.(b) Identify the most relevant studies from the literature of interest.(c) Evaluate the quality and relevance of the articles.(d) Identify and summarize the scientific evidence.(e) Interpret the results found.
Figure 2. Summary of articles filtering after reading.
Figure 3. Annual distribution of publications from 1988 to September 2022.
Figure 4. Most frequent journals in the sample.
Figure 5. Main and most cited publications in the sample.
Figure 7. Distribution of citations over time for ten of the most cited articles in the sample.
Figure 8. Authors with the largest number of publications in the sample.
Figure 14. Count of models used in the literature.
Figure 16. Evolution of the number of publications by clusters.
Predictors of Psychological Distress and Coronavirus Fears in the First Recovery Phase of the Coronavirus Disease 2019 Pandemic in Germany
Objectives: While previous research has mainly focused on the impact of the first acute phase of the COVID-19 pandemic on mental health, little empirical knowledge exists about depression, anxiety, and somatic symptom levels and possible predictors of symptom levels in the pandemic's recovery phase. The present study aimed to analyze the mental burden of a convenience sample of the general German population during the first recovery phase of the pandemic and to identify significant predictors of symptom levels. Methods: Standardized measures of anxiety (GAD-2), depression (PHQ-2), somatic symptoms (PHQ-15), and health anxiety, as well as measures of COVID-19 fears and possible vulnerability factors, were administered through a national, cross-sectional online survey (n = 2160, mean age 42.7 years, 75% female), asking participants for their current symptom levels and their symptom levels prior to the COVID-19 pandemic. Results: Our findings show significantly elevated levels of depression, anxiety, somatic symptoms, and health anxiety in the recovery period compared to before the pandemic. The current prevalence rates based on self-reporting were 26.7% for depression, 24.5% for anxiety, and 29% for somatization. The strongest predictors of these symptom reports included domain-specific pre-existing symptom levels, neuroticism, biological COVID-19 risk factors, avoidance of illness information, and younger age. The most important predictors of COVID-19 fears were subjective COVID-19 risk perception, followed by pre-existing health anxiety, the number of biological COVID-19 risk factors, older age, neuroticism, avoidance of illness information, and female gender. Discussion: These findings indicate the need for specific psychological programs to help individuals with enhanced psychological and biological vulnerability to cope better with the mental distress experienced during all phases of the ongoing COVID-19 crisis.
INTRODUCTION
The novel coronavirus (SARS-CoV-2), identified in late 2019 in China (World Health Organization [WHO], 2020b), has rapidly spread worldwide from person to person, mainly by respiratory droplets and contact transmission. COVID-19 (coronavirus disease 2019) is the infectious disease caused by the novel coronavirus. In the first nine months of the COVID-19 pandemic (until September 27, 2020), more than 32.7 million COVID-19 cases and 991,000 deaths had been reported worldwide (World Health Organization [WHO], 2020a).
Our current knowledge allows us to divide the current course of the COVID-19 pandemic into three phases (see Figure 1): a preparation phase characterized by a rapid increase of new infections (phase one), the punctum maximum defined by the highest number of new cases (phase two), and a slow return to normality (phase three) (Fegert et al., 2020).
In response to the rapidly rising numbers of COVID-19 cases and deaths in Europe during February and March 2020, many countries implemented large-scale non-pharmaceutical interventions to slow the spread of the coronavirus (including closing preschools, schools, universities, stores, bars, restaurants, hotels, and cultural institutions; stay-at-home policies; border closures; and measures to isolate infected individuals and their contacts). In Germany, the first "lockdown" of public social life started on March 22 and was lifted on April 20. This "lockdown" was effective in reducing virus transmission (Flaxman et al., 2020) and protected the public health system, particularly intensive care units, from a possible breakdown.
An international systematic review and meta-analysis of 65 longitudinal cohort studies examining changes in mental health among the same group of participants before and during the pandemic found an overall increase in mental health symptoms that was most pronounced during March-April 2020 (standardized mean change, SMC = 0.102 [95% CI: 0.026 to 0.192], p = 0.03) before significantly declining over time (May-July SMC = 0.067 [95% CI: -0.022 to 0.157], p = 0.141) (Robinson et al., 2021). In addition, results indicate that increases in symptoms of depression and mood disorder tended to be larger (SMC = 0.22, p < 0.001) and reductions over time appeared less pronounced as compared with symptoms of anxiety (SMC = 0.13, p = 0.02) and general mental health (SMC = -0.03, p = 0.65). Studies carried out in Germany found mixed results. One study assessed changes in psychological distress among the general public during the first three months of the pandemic (from March to June 2020) and observed, on average, a weak decrease in psychological distress; however, a subgroup (at least 10% of the respondents) showed an increase in unspecific anxiety and depression symptoms over the time period. Another online survey examined the course of psychological distress in the German public from March to April 2020 and observed continuously elevated generalized anxiety scores over time (Hetkamp et al., 2020). Similar results were reported from a representative United Kingdom longitudinal study, with findings showing mental health problems increased from 24.3% before the COVID-19 outbreak to 37.8% in April 2020 and remained elevated in May (34.7%) and June (31.9%) 2020 (Daly et al., 2020).
Since the heterogeneity of the psychological distress associated with the COVID-19 pandemic seems to be considerable, it appears of paramount importance (e.g., for the prevention of psychological distress and allocation of support) to identify the most important correlates and risk factors. In this regard, previous studies suggested that high health anxiety (i.e., the fear of suffering from a severe or life-threatening illness), COVID-19-related media exposure, and neuroticism (i.e., emotional instability) are among the most important factors associated with particularly high levels of psychological distress during the COVID-19 pandemic (Sauer et al., 2020; Schmidt et al., 2020). Furthermore, additional personality traits according to the Big Five model have been found to be significantly associated with COVID-19 anxiety and related general mental distress (Nikčević et al., 2021; Zacher and Rudolph, 2021). Path analytic findings suggest that health anxiety and COVID-19 anxiety serve as significant mediators between personality traits and symptoms of general anxiety and depression, suggesting that both personality traits and health anxiety are important for identifying people who are particularly vulnerable to elevated psychological distress associated with the COVID-19 pandemic.
In this context, the purposes of the present study were to (i) investigate how health anxiety and levels of depression, anxiety, and somatic symptoms changed from the time before the rapid spread of COVID-19 (T0) to the first recovery phase (May 12 to September 29, 2020; T1) of the pandemic in Germany, and (ii) determine the predictive value of specific factors associated with symptom levels, health anxiety, and coronavirus fears. We expected enhanced symptom levels and health anxiety during the recovery phase of the pandemic relative to the time before the COVID-19 outbreak. We also expected that specific socio-demographic variables (younger age, female sex, lower education), specific personality traits (i.e., neuroticism), pre-existing levels of health anxiety and illness information avoidance, the number of risk factors linked to a more serious course of COVID-19 (i.e., age > 60 years, smoking, overweight, cardiovascular and other somatic diseases), and perceived risk of infection would predict higher levels of distress, health anxiety, and coronavirus fears during the recovery phase of the pandemic, even when controlling for pre-existing distress levels.
Design, Recruitment, Participants, and Procedures
A cross-sectional online survey was used to investigate the physical and psychological effects of the coronavirus pandemic in the general population in Germany. We collected data during the recovery phase from the first wave, from May 12 to September 29, 2020, a phase with low numbers of daily new infections and COVID-19-associated deaths. Participants were recruited primarily through press releases (print, online), social media platforms (Twitter, Facebook), the websites of the Central Institute of Mental Health (CIMH), including a COVID-19 mental health support page, and the universities of Mainz and Konstanz. The Ethics Committee of the University of Mainz approved the study (2020-JGU-psychEK-S010).
The inclusion criteria of the study were a minimum age of 16 years and written informed consent. The exclusion criteria included incomplete processing of the questionnaire and an unrealistically fast total survey completion time (DEG time < 100). In total, 2,224 people started the online survey and 2,160 participants completed it in a realistic processing time. Each participant was asked to report their gender, age, country of birth, highest level of education, employment status, and living situation. We also asked about their experiences with COVID-19 (current or past infection, COVID-19 symptoms, COVID-19-related risk factors, and fears of and perceived risk from COVID-19).
Measures
To assess somatic symptoms, psychological distress, and health anxiety both for the time before and after the beginning of the COVID-19 pandemic, participants were instructed to answer the same symptom measures twice: first for the current period (T1), then retrospectively for the period before the onset of the pandemic (defined as "the period between the end of February and beginning of March 2020"; T0; this comparatively brief time period was chosen for reasons of standardization and anchor point fixation between participants and in order to use a timeframe that is compatible with the PHQ-4 instruction regarding the previous two weeks).
Somatic Distress
Somatic distress was measured using the Patient Health Questionnaire-15 (PHQ-15; Kroenke et al., 2002; Löwe et al., 2002). The PHQ-15 is an excellent and widely used measure of somatic distress and a screening instrument for somatic symptom disorder according to the Diagnostic and Statistical Manual of Mental Disorders-5 (DSM-5). The 15 items of the PHQ-15 include the most prevalent somatoform symptoms. The response format consists of a three-point Likert scale ranging from 0 (not at all) to 2 (bothered a lot). The total score ranges from 0 to 30, and scores of ≥ 5, ≥ 10, and ≥ 15 represent mild, moderate, and severe levels of somatization, respectively. More importantly, a score ≥ 10 is the most commonly recommended cutoff point for clinically significant symptoms (Kroenke et al., 2010). Internal consistency of the PHQ-15 as assessed with Cronbach's α was 0.80 in the original validation study (Kroenke et al., 2002) and 0.82 in a large sample representative of Germany's general population (Kocalevent et al., 2013). Internal consistencies in our sample were α = 0.84 (T1) and α = 0.82 (T0).
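As a minimal sketch of the PHQ-15 scoring rules just described (illustrative only, not code used in the study):

```python
# PHQ-15 scoring: 15 items rated 0-2, summed to 0-30, with 5/10/15 marking
# mild/moderate/severe somatization and >= 10 as the clinical cutoff.
def score_phq15(items: list[int]) -> tuple[int, str, bool]:
    assert len(items) == 15 and all(0 <= i <= 2 for i in items)
    total = sum(items)
    if total >= 15:
        severity = "severe"
    elif total >= 10:
        severity = "moderate"
    elif total >= 5:
        severity = "mild"
    else:
        severity = "minimal"
    return total, severity, total >= 10  # last flag: clinically significant

print(score_phq15([1, 1, 0, 2, 1, 0, 1, 1, 0, 1, 2, 0, 0, 1, 0]))  # (11, 'moderate', True)
```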
Psychological Distress
Psychological distress was assessed using the Patient Health Questionnaire-4 (PHQ-4; Kroenke et al., 2009; Löwe et al., 2010). The PHQ-4 consists of two items assessing core criteria for depressive disorder (little interest or pleasure in doing things; feeling down, depressed, or hopeless) and two items measuring diagnostic criteria of generalized anxiety disorder (feeling nervous, anxious, or on edge; not being able to stop or control worrying). Participants were asked to indicate how often they had been bothered by these symptoms over the previous two weeks on a four-point Likert scale ranging from 0 (not at all) to 3 (nearly every day). Total scores of the PHQ-4 range from 0 to 12, and scores of ≥ 6 represent at least moderate levels of psychological distress. Internal consistency of the total PHQ-4 as assessed with Cronbach's α was 0.84 in the original validation study (Kroenke et al., 2009) and 0.82 in a German validation study. In the present study, the internal consistencies were α = 0.88 (T1) and α = 0.86 (T0).
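Since internal consistency is reported for every scale in this section, the following sketch shows the standard Cronbach's alpha computation on a respondents-by-items matrix; the response data are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical PHQ-4 responses (0-3) for five respondents.
responses = np.array([
    [0, 1, 0, 1],
    [2, 2, 3, 2],
    [1, 1, 1, 0],
    [3, 2, 3, 3],
    [0, 0, 1, 0],
])
print(round(cronbach_alpha(responses), 2))
```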
Health Anxiety and Illness Information Avoidance
Health anxiety was measured using a brief screening instrument specially composed for this study by our working group, the nine-item Health Anxiety Scale (HAS-9; see Supplementary Material 1). All items were taken from well-established health anxiety questionnaires (Barsky et al., 1990; Rief et al., 1998; Salkovskis et al., 2002) based on cognitive-behavioral models of health anxiety and hypochondriasis. The scale covers different facets of the health anxiety construct, such as bodily vigilance (e.g., I am often aware of various things happening within my body), illness-related thoughts and bodily misinterpretations (e.g., Bodily complaints were always a sign of disease for me), and health anxiety (e.g., I am often afraid that I have a serious illness). The statements were answered using a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The total score ranges from 9 to 45, where a higher score indicates higher health anxiety. In the present study, the internal consistencies were α = 0.87 (T1) and α = 0.91 (T0).
In addition, the tendency to avoid illness-related information was measured using three items taken from the avoidance scale of the Questionnaire for Assessing Safety Behavior (QSBH; Weck et al., 2013). All items chosen for the three-item Illness Information Avoidance Scale (IAS-3) refer to the avoidance of illness-related information (i.e., Do you avoid watching documentaries about illnesses? Do you avoid movies or series in which people suffer from a serious illness? Do you avoid reading articles or reports about illnesses?). All items were answered using a five-point Likert scale ranging from 1 (never) to 5 (almost always). The total score ranges from 3 to 15, where a higher score indicates higher avoidance behavior during the previous two weeks. In the present study, the internal consistencies were α = 0.92 (T1) and α = 0.95 (T0).
Personality Traits
Personality traits were assessed using the Big Five Inventory-10 (BFI-10; Rammstedt and John, 2007), a short form of the Big Five Inventory (BFI-44; John et al., 1991). The BFI-10 consists of 10 items, based on five factors assessing the big five personality domains: extraversion, neuroticism, openness to experience, conscientiousness, and agreeableness. Each participant indicated how well each statement described their personality on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Scores range from 2 to 10 for each of the five personality factors, with higher scores indicating higher levels of the specific personality domain. The BFI-10 has demonstrated good reliability and validity in many samples across different nations (e.g., Rammstedt, 2007; Rammstedt and John, 2007; Carciofo et al., 2016; Kunnel John et al., 2019). Retest-reliability scores at a six-week retest interval were adequate-to-good in the original validation study (Rammstedt and John, 2007).
Coronavirus Disease 2019-Related Measures
The survey included several questions regarding COVID-19-related fears, risk perception, and biological risk factors.
Coronavirus Disease 2019 Fears
Participants were asked to rate their levels of perceived COVID-19 fear on a visual analog scale ranging from 0 (no fear) to 100 (strong fear) for three different time points: current (item 1), prospectively in four weeks (item 2), and prospectively in eight weeks (item 3) (e.g., How strongly do you fear being infected with coronavirus as of today? How strong do you think your fear of an infection will be four weeks from now?). The mean score of the three items was used as an indicator of perceived coronavirus fear, ranging from 0 to 100. Higher scores indicate higher levels of perceived COVID-19 fears. Cronbach's α was 0.97.
Coronavirus Disease 2019 Risk Perception
Participants were also asked to rate what they thought the likelihood was of being infected with the virus (How likely do you think it is that you will get infected?) or infecting someone else (How likely do you think it is that you will infect someone else?) on a visual analog scale ranging from 0 (very unlikely) to 100 (very likely). The mean score of the two items was used as an indicator of perceived risk of infection, ranging from 0 to 100. Higher scores indicate higher levels of perceived risk of infection. Cronbach's α was 0.97.
Coronavirus Disease 2019 Risk Factors
Participants were asked if they had one or more of the following risk factors (yes, no) for a serious course of COVID-19: higher age (60 years or older), smoking, extreme overweight, cardiovascular diseases (e.g., coronary heart disease or high blood pressure), chronic respiratory disorder (e.g., COPD), chronic liver disease, diabetes mellitus, cancer, or a weakened immune system (e.g., due to an illness or regular use of medicines, such as cortisone, that lower the immune response). Positive answers were summed to a risk factor index ranging from 0 to 9, with higher scores indicating a higher biological risk for a serious course of COVID-19.
Current/Past Coronavirus Disease 2019 Infection
Participants were asked whether they were currently, or had been in the past, knowingly infected with the coronavirus. Responses were recorded as a binary variable (yes vs. no).
Days After the Peak of the First Coronavirus Disease 2019 Infection Wave
For each participant, the number of days after the peak of the first coronavirus infection wave was recorded to control for the temporal interval between T1 and T0. The peak of the first wave was set at April 2, 2020, the day with the highest daily incidence of new infections in Germany (6,550, according to the Robert Koch Institute).
Statistical Analysis
First, we conducted descriptive analyses to describe sample characteristics. Second, we investigated changes in psychological and somatic distress and health anxiety between T0 and T1 using dependent-sample t-tests for dimensional variables. Third, we conducted chi-square tests for categorical variables to investigate the effects of gender on the prevalence of depression, anxiety, and psychological distress. Fourth, we conducted Pearson correlations to explore associations between predictor and outcome variables. Finally, structural equation modeling was used to explore the independent relationships of predictor variables with outcome variables (current levels of distress as assessed with the PHQ-15 and PHQ-4, health anxiety, and COVID-19 fears). Two of the four outcome variables (PHQ-15 T1 and PHQ-4 T1) were modeled as latent variables. For the PHQ-15 T1 measurement model, a general-factor model with correlated error terms capturing the symptom-specific variance was applied. In the case of the PHQ-4, we used a general-factor model with correlated error terms between the two anxiety symptoms and the two depression symptoms, respectively. The following 16 predictor variables were entered simultaneously as manifest variables into a latent regression model: pre-existing symptom levels (somatic symptom, anxiety, and depression scores at T0), socio-demographic variables (age, gender, education), personality traits (extraversion, neuroticism, openness, conscientiousness, and agreeableness), pre-existing levels of health anxiety and avoidance behavior (illness information avoidance at T0), and COVID-19 variables (number of COVID-19 risk factors, COVID-19 risk perception, days after the peak of the first COVID-19 infection wave, and current/past COVID-19 infection). The analysis was conducted in MPlus Version 7.3 (Muthén and Muthén, 2010) using the robust mean- and variance-adjusted weighted least squares (WLSMV) procedure. Because the chi-square test is known for its sensitivity to sample size and model complexity, we additionally used common absolute (RMSEA) and comparative (CFI, TLI) fit indices for model fit evaluation. The remaining analyses were performed using SPSS Statistics Version 23 (IBM, Armonk, NY), and the level of significance was set at p ≤ 0.01 because even unimportant effects can be significant in large samples. Cohen's d was calculated as the effect size for t-tests (d ≥ 0.30 small effect, d ≥ 0.50 medium effect, d ≥ 0.80 large effect) (Cohen, 1988). A total of 2,160 participants completed the survey; however, 36 participants had missing values for the "education" variable and 10 participants reported a diverse sex and were excluded, leaving 2,114 participants in the following analyses. For the predictor analysis, the variables gender and education were dichotomized (gender: female vs. male; education: less than 12 years of schooling vs. more than 12 years of schooling).
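As an illustration of the T0-T1 comparison described above, the sketch below runs a dependent-sample t-test and one common paired-data variant of Cohen's d on simulated scores; the paper does not state which paired-d formula was used, so this choice is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical T0 (retrospective, pre-pandemic) and T1 (recovery phase)
# PHQ-4 scores for the same respondents.
t0 = rng.integers(0, 7, size=200).astype(float)
t1 = t0 + rng.normal(loc=1.0, scale=2.0, size=200)

# Dependent-sample t-test for the T0 -> T1 change.
t_stat, p_value = stats.ttest_rel(t1, t0)

# One common Cohen's d for paired data: mean change / SD of the change scores.
diff = t1 - t0
d = diff.mean() / diff.std(ddof=1)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {d:.2f}")
```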
Sample Characteristics
Of the 2,160 participants who completed the questionnaire, 74.8% were female, 24.7% were male, and 0.5% were of diverse sex. The average age was 42.75 years (range: 16-86). Concerning their living situations, 20.9% reported living alone, 71.9% reported living with a partner, family, or someone else, and 7.2% reported living in a shared apartment. With regard to education, 78.0% reported having at least 12 years of schooling, and 48.0% had a university degree. With regard to the coronavirus, 0.2% said they were currently infected, and 1.1% said they had been infected in the past. Thus, the mean infection rate with the coronavirus was 1.3% in our sample, compared to 0.24% in the German population (data reported by the Robert Koch Institute on July 21, the median of our survey period). 1.2% reported that a person close to them was currently infected with the coronavirus, and 12.2% reported having a close person who was infected in the past. At least 21.5% of the respondents reported having medium-to-severe fears about coronavirus infection, at least 22.8% expected their fears to remain moderate over the next four weeks, and 23.8% expected their fears to remain moderate to severe over the next eight weeks. At least 30% of the respondents estimated that they were at least 50% likely to become infected with the coronavirus in the future, 31.3% expected with a probability of at least 50% to become a carrier themselves, and 41.4% reported a medium-to-severe fear of becoming a carrier. 19.5% stated that they had been moderately to severely affected by the COVID-19 pandemic in their daily lives. With regard to COVID-19 risk factors, 12.9% were older than 60 years, 18.6% reported that they were smokers, 10.7% stated that they were extremely overweight, 12.7% had cardiovascular disease (e.g., coronary heart disease and hypertension), 5.6% had chronic lung disease [e.g., chronic obstructive pulmonary disease (COPD)], 1.5% had chronic liver disease, 3.0% had diabetes mellitus, 1.8% had cancer, and 7.0% said they had a weakened immune system (e.g., due to illness or regular medication).
Psychological Distress
On average, the current PHQ-4 symptom score (T1), covering anxiety and depression symptoms, was significantly higher than the score calculated for the period before the onset of the pandemic (T0).
The categorization of the participants using established cutoffs (indicating at least moderate expression of symptoms) showed that 10.7% of respondents reported clinically relevant symptoms of depression (PHQ-2 scores ≥ 3), 11.9% reported symptoms of anxiety (GAD-2 scores ≥ 3), 9.4% reported symptoms of psychological distress (PHQ-4 scores ≥ 6), and 12.3% reported symptoms of somatization (PHQ-15 scores ≥ 10) before the outbreak (T0). In contrast, a higher number of participants showed elevated symptom levels during the current phase of the pandemic (T1): the proportion was 26.7% for depression, 24.5% for anxiety, 22.5% for psychological distress, and 29.0% for somatization.
Correlations
In order to investigate predictors of psychological and somatic distress, health anxiety, and COVID-19 fears, first Pearson correlations between the predictor and the outcome variables were computed (see Supplementary Material 2). Inter-correlations of the predictor variables ranged from r = 0.00 to r = 0.61, with the highest correlation between PHQ-15 (T0) and PHQ-4 (T0). Inter-correlations of the outcome variables ranged from r = 0.15 to r = 0.64. Again, the highest correlation was found between PHQ-15 (T1) and PHQ-4 (T1), indicating a moderate overlap of these measures. Finally, the correlations between predictor and outcome variables varied between r = 0.00 and r = 0.82. The great majority of the predictor variables correlated significantly with every outcome variable, with the highest correlation between HAS-9 (T0) and HAS-9 (T1).
Results of the Structural Equation Model
The main results of a structural equation model that was used to identify the most relevant predictors of mental distress assessed during the first recovery phase of the pandemic are presented in Table 1. The model fit indices indicate a good model fit according to generally accepted standards (e.g., Schermelleh-Engel et al., 2003).
Psychological Distress (PHQ-4 T1)
Higher pre-existing levels of psychological distress (PHQ-4 T0), higher neuroticism, younger age, a higher number of biological COVID-19 risk factors, and more pronounced avoidance of illness information at T0 were significantly associated with higher levels of psychological distress at T1 (p ≤ 0.001 for all). The model accounted for 40.8% of the variance in the latent psychological distress score.
Somatic Symptom Distress (PHQ-15 T1)
A similar pattern of significant predictors emerged for current levels of somatic symptom distress. Higher pre-existing levels of somatic symptoms (PHQ-15 T0), higher neuroticism, a higher number of biological COVID-19 risk factors, younger age, and more avoidance of illness information at T0 predicted significantly higher levels of somatic distress at T1 (p ≤ 0.001 for all). The model accounted for 59.8% of the variance in the latent somatic symptom score.
Health Anxiety (T1)
Again, higher levels of pre-existing health anxiety (T0), higher neuroticism, being male, a higher number of biological COVID-19 risk factors, and more avoidance of illness information at T0 were significant predictors of higher levels of current health anxiety at T1 (p ≤ 0.001 for all). The model accounted for 69.3% of the variance in health anxiety scores.
Coronavirus Disease 2019 Fears (T1)
The model identified the highest number of significant associations between the predictors and the level of COVID-19 fears (p ≤ 0.001 for all). Coronavirus Disease 2019 risk perception, pre-existing health anxiety (T0), number of biological COVID-19 risk factors, being older, neuroticism, avoidance of illness information at T0, number of days after the peak of the first wave, and female gender were all significant predictors of higher COVID-19 fears (all ps ≤ 0.001), whereas higher extraversion predicted lower COVID-19 fears. The model accounted for 41.5% of the variance in COVID-19 fears score.
Finally, our results show that current or past coronavirus infection was related to lower COVID-19 fears (p = 0.045), but not to symptoms of anxiety and depression, somatization or general health anxiety (all ps > 0.50).
General Discussion of Our Findings
Current research has revealed clear evidence that the "first wave" of the COVID-19 pandemic and the subsequent restrictive measures adopted to slow the spread of the virus are related to increased levels of depression, anxiety, and general distress in different populations around the world. However, changes in these symptom measures over different phases of the pandemic, especially levels of psychological and somatic distress in the recovery period, are heterogeneous. Accordingly, the present study examined the extent of psychological and somatic distress in the recovery period between the first and second waves of the COVID-19 pandemic in Germany. To investigate possible changes in distress levels in the German population, participants answered the same symptom measures twice: first for the current recovery period (T1), then retrospectively for the period before the onset of the COVID-19 pandemic (T0). In our study, we observed elevated levels of psychological and somatic distress and health anxiety in the recovery period compared to before the pandemic. On average, participants rated their current symptoms of depression, anxiety, somatization, and health anxiety significantly higher than before the onset of the pandemic. Applying established cutoff scores for at least moderate levels of symptoms, approximately 25% of participants experienced psychological and somatic symptoms. Our prevalence rates before the outbreak (T0) are similar to those observed in representative German population samples in the years before the pandemic (Kocalevent et al., 2013; Hajek et al., 2020). In these nationally representative validation studies of the PHQ-4 and PHQ-15 screenings, the prevalence was 10.4% for depression, 9.8% for anxiety, and 9.3% for somatization syndromes. Interestingly, women had a higher risk of both anxiety and somatization in these representative samples, which has also been shown in the present study. Our results are based on a convenience sample recruited online, mostly women (75%), which may explain the slightly higher prevalence of anxiety and somatization at T0 compared to representative samples (Kocalevent et al., 2013; Hajek et al., 2020). Furthermore, our results agree with findings from a German longitudinal observational study with four stages of online data collection from March 27, the punctum maximum of the first wave (phase two), to June 15, 2020, the beginning of the recovery phase (phase three). The authors observed only a slight decrease in psychological distress (PHQ-4 scores) from March to June, with prevalence rates of 31.0% (T1), 25.9% (T2), 22.1% (T3), and 22.6% (T4). Their last assessment interval overlaps with the beginning of our study period and provided nearly identical prevalence rates to those assessed in the following months of the recovery phase in our study (from May to September 2020) using a comparable sample of the German general population. Another online survey that collected data over a 50-day period after the onset of the COVID-19 outbreak in Germany found a similar pattern of results: while COVID-19 fear decreased within six weeks to the level before the lockdown, generalized anxiety remained elevated over time (Hetkamp et al., 2020), indicating no return to the pre-pandemic level.
The study's second aim was to identify significant predictors of distress and COVID-19 fears during the first recovery phase of the pandemic. As expected, and in line with prior research on the impact of pre-existing mental conditions on current mental health status (e.g., Fiorillo et al., 2020; González-Sanguino et al., 2020; McCracken et al., 2020; Newby et al., 2020), the strongest predictor of current levels of psychological and somatic distress was the domain-specific pre-existing distress level, retrospectively assessed for the period before the onset of the pandemic. Similar strong domain-specific associations were found for current health anxiety, which was best predicted by past health anxiety, and for COVID-19 fears, which were best predicted by higher levels of perceived risk of infection.
Furthermore, younger age was associated with higher psychological and somatic distress. These findings are consistent with previous research on distress during the early phase of the pandemic (e.g., Balsamo and Carlucci, 2020; Bäuerle et al., 2020a; González-Sanguino et al., 2020; Jia et al., 2020; Newby et al., 2020; Nwachukwu et al., 2020; Pierce et al., 2020; Ran et al., 2020; Rossi et al., 2020; Shevlin et al., 2020; Solomou and Constantinidou, 2020). Possible reasons include generally higher social mobility among younger people, little experience with socioeconomic or major life events or pandemics, and a higher perceived threat to their academic, social, occupational, and economic prospects compared to people older than 25 years (Huang and Zhao, 2020; Wang et al., 2021). As expected, high levels of neuroticism were significantly associated with all outcome measures in this study. Higher neuroticism scores predicted higher levels of psychological distress, somatic distress, health anxiety, and COVID-19 fears, whereas higher extraversion scores were significantly associated with lower levels of COVID-19 fears. These findings are consistent with the vulnerability model, which postulates that neuroticism is an important vulnerability factor for the development of unspecific mental distress and common mental disorders, including anxiety and depression (Jeronimus et al., 2016). In addition, extraversion has shown a robust positive link with subjective psychological well-being in a recent meta-analysis of the links between personality traits and well-being (Anglim et al., 2020).
In line with previous findings from the first acute phase of the pandemic (Sauer et al., 2020), pre-existing health anxiety predicted current health anxiety and COVID-19 fears but not distress. Moreover, the tendency to avoid illness-related information (e.g., documentaries about illnesses), which is often used as avoidance behavior by health-anxious individuals (Weck et al., 2013), was significantly associated with current health anxiety and with higher levels of current psychological and somatic distress and COVID-19 fears. These findings are consistent with previous research, which has shown that health anxiety, neuroticism, and coronavirus anxiety were significant predictors of depression, generalized anxiety, and death anxiety experienced during the COVID-19 crisis in the United States (Lee et al., 2020). Approximately half (46%) of the current sample had one or more risk factors for a more severe course of COVID-19. Our results confirm the impact of self-reported biological COVID-19 risk factors (i.e., age > 60 years, smoking, overweight, cardiovascular and other somatic diseases) on psychological and somatic distress and COVID-19 fears. Furthermore, higher subjective risk perception, including the risk of contracting the virus or infecting someone else, was associated with higher COVID-19 fears, but not with higher psychological distress or health anxiety. The number of days after the peak of the first COVID-19 infection wave was also significantly associated with higher COVID-19 fears and elevated symptoms of depression and anxiety, which seems consistent with the renewed rise in infection numbers in Germany from September onward. Finally, a current or past coronavirus infection predicted lower COVID-19 anxiety, which seems plausible against the background of an expected temporary immunization against further coronavirus infection, as discussed in Germany at that time (Robert Koch Institute, 2021).
Strengths and Limitations
The major strength of this study relates to the examination of a variety of different variables that predict coronavirus-related psychological distress, somatic symptoms, health anxiety, and COVID-19 fears using structural equation modeling. In addition, to the best of our knowledge, this is one of the first studies to investigate the possible effects of COVID-19 on mental health during the complete recovery phase between the first and the second "waves" in Germany. Nonetheless, several limitations that may constrain the interpretation of our findings must be considered.
The first limitation is that the data at T0 were assessed retrospectively, which might have introduced a memory bias in terms of remembering disorder-specific symptoms, stress, and behaviors. There are studies that indicate that retrospective reports of change in mental health are prone to substantial bias during the corona pandemic (Hipp et al., 2020) and beyond [e.g., (Ben-Zeev et al., 2009; Van den Bergh and Walentynowicz, 2016)]. Despite this substantial limitation of the significance of our findings, the comparability of our retrospective results with cross-sectional prevalence data assessed in the years before the pandemic (Kocalevent et al., 2013) argues against the presence of such a bias. Furthermore, there is evidence that potential bias at the individual survey level is reduced by aggregating data (Jaspers et al., 2009). Additionally, various empirically recommended criteria for ensuring the highest possible reliability and validity of the study's statements despite retrospective questioning were followed in our study; these include the use of short, easy-to-understand questions, as well as the use of fixed anchor points (for recommendations on retrospective survey questions in COVID-19 studies, see Hipp et al., 2020). Furthermore, there are no studies to date that have investigated the validity of our measurement instruments in retrospective use. Due to the sudden spread of COVID-19, it was not possible to implement a longitudinal design with our participants before the outbreak; we decided to adopt an economical solution by applying these partly unvalidated self-report questionnaires. Future studies should incorporate newly published instruments to assess COVID-19-related fear (e.g., Ahorsu et al., 2020; Lee et al., 2020; Taylor et al., 2020).
Second, our sample was recruited as a convenience sample, mainly through social media and a COVID-19 mental health support page of the CIMH, which might have led to a sample bias. People who have easy access to or are familiar with social media might have been overrepresented in this study. Furthermore, as previous studies indicate, people who experience a relatively high level of COVID-19-related psychological distress and who are looking for support might be particularly attracted by social media and might have been more likely to participate in our study. Additionally, the number of men was lower than that of women. This selection bias may partly overestimate the symptom severity and impact of COVID-19, especially given that past studies have shown a worse impact of pandemics on those with pre-existing mental illness, those of younger age, and women (Balsamo and Carlucci, 2020; Bäuerle et al., 2020a; González-Sanguino et al., 2020; Hajek et al., 2020; Jia et al., 2020; Newby et al., 2020; Nwachukwu et al., 2020; Pierce et al., 2020; Ran et al., 2020; Rossi et al., 2020; Shevlin et al., 2020; Solomou and Constantinidou, 2020).
Third, there are several differences between the demographics of our sample and the general population in Germany. Our sample consisted of 74.8% female participants. Since previous studies on the 12-month prevalence of mental disorders in Germany indicate a higher prevalence in women (33% versus 22% in men), the rate and intensity of depressive, anxiety, somatization, or health anxiety symptoms might be biased (Jacobi et al., 2014). Regarding sociodemographic variables relevant to corona risk, the proportion of participants 60 years and older was lower in our sample than in the general German population [12.7% in our sample versus 29% in the German population; (Statistisches Bundesamt, 2020)]. In contrast, a larger proportion of people than in the general German population were affected by conditions that, according to the Robert Koch Institute, increase the likelihood of corona infection or a severe course [e.g., coronary heart disease; 12.7% in our sample versus 9.3% in the German population; (Gößwald et al., 2013)]. Since our results suggest that younger people, but at the same time those with more corona risk factors, might be affected by anxiety, depression, health anxiety, or somatization symptoms, these differences between our sample and the general German population limit the generalizability of the findings of this study. In addition, the prevalence rates of corona risk factors were based solely on self-report (e.g., when asked about a weak immune system), which may be subject to social desirability, self-report errors, and poor recall and must be considered when interpreting the results. Furthermore, the proportion of people currently or previously infected with the coronavirus was about five times higher among our study participants than in the German population. Given the structural equation model (SEM) finding that current or past coronavirus infection was associated with lower COVID-19 fears, our results may represent lower levels of COVID-19 fears compared to the general population, further limiting the generalizability of the findings.
Finally, although we selected potential predictor variables based on previous studies examining psychological distress under COVID-19, additional important predictors might exist that should be examined in future research.
CONCLUSION
In sum, our findings suggest that levels of mental distress were still elevated in this sample of the general German population during this first recovery phase of the pandemic compared to the period before the onset of the pandemic. Women, younger people, those with higher pre-existing levels of distress, higher health anxiety, higher neuroticism, and those with one or more of the known biological COVID-19 risk factors were at higher risk of increased mental distress. Despite the retrospective data assessment and the non-representative sample, our findings provide additional empirical evidence pointing to the need for specific and low-threshold psychological programs to support individuals with enhanced psychological and biological vulnerability to cope with coronavirus-related mental distress during all phases of the ongoing COVID-19 pandemic (Galea et al., 2020;Liu et al., 2020;Vonderlin et al., 2021).
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, under limited conditions.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of the University of Mainz (2020-JGU-psychEK-S010). The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
JB, DM, and MW contributed to the conception and design of the study. JB, MB, and MW organized the database, the acquisition, and performed the data analysis. RV, DM, and MW contributed to interpretation of data for the work. JB and MB wrote the first draft of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version. | 2021-12-06T14:17:06.755Z | 2021-12-06T00:00:00.000 | {
"year": 2021,
"sha1": "81f906d1748b8e675e43adbd658fe8021f64a2a4",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2021.678860/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "81f906d1748b8e675e43adbd658fe8021f64a2a4",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251123091 | pes2o/s2orc | v3-fos-license | A checklist of fish and shellfishes of the Poonthura estuary, southwestern coast of India
A systematic checklist of fish and shellfishes of the Poonthura estuary, Kerala, India is provided, including notes on their conservation status. This checklist includes 66 finfish and five shellfish, belonging to 17 orders, 35 families, and 60 genera. Carangiformes is the richest order (11 species, eight genera, and three families), representing 15.4% of the total fish diversity. Carangidae is the most diverse family, with nine representatives contributing 12.6% of the total fish diversity. Following the IUCN Red List Categories, of the total 69 species (excluding both exotic and transplanted fish species), 59 belong to the ‘Least Concern’ category, while one species, Pampus argenteus, is listed as ‘Vulnerable’, four are ‘Data Deficient’ (Megalops cyprinoides, Arius maculatus, Cynoglossus semifasciatus, and Epinephelus tauvina), and five are ‘Not Evaluated’ (Nuchequula blochii, Channa pseudomarulius, Penaeus indicus, P. monodon, and Scylla serrata). Around 94% of the recorded fish fauna have commercial value and contribute to subsistence fisheries throughout the year. The taxonomy and diversity of fish fauna of least-studied or isolated estuarine ecosystems should be updated with proper documentation of their conservation status, in order to design and implement pragmatic management and conservation programs.
INTRODUCTION
Estuaries are transitional zones between sea and freshwater that are inhabited by both inland and marine species, including their juvenile stages (McLusky & Elliott 2006;Elliott et al. 2007;Franco et al. 2008;Potter et al. 2010;Sreekanth et al. 2018). Compared to marine or freshwater systems, estuaries are variable, complicated, and stressful habitats (Selleslagh & Amara 2008;Human et al. 2016;Kiranya et al. 2022). Many commercially important fish species benefit from the highly productive nature of estuaries as their nursery area (Harrison & Kelly 2013). Therefore, much emphasis is required to protect estuarine environments so as to ensure the growth and survival of commercially important fish and shellfish species (Elliott et al. 2007).
The estuaries, backwaters, coastal creeks, and large brackishwater systems contribute a significant part of fish production in India (Nair et al. 1983; Tudu et al. 2018). A peculiarity of Indian estuaries is that they are characterized by high species diversity with low numerical abundance (Sreekanth et al. 2019). The Poonthura Estuary, situated in the Thiruvananthapuram district of Kerala, is comparatively small and shallow, and is closed off by a sand bar formed near the estuarine mouth (Kiranya et al. 2018). Previous authors who worked on this estuary have reported its ecological degradation, mainly due to indiscriminate fishing and pollution from point and non-point sources (Kiranya et al. 2018).
In Kerala, a considerable number of studies have dealt with taxonomic entities within estuarine systems, i.e., species composition, species distribution and abundance, and spatial and temporal variations in fish diversity (Bijukumar & Sushama 2000; Harikrishnan et al. 2011; Regi & Bijukumar 2012; Kiranya et al. 2018; Roshni et al. 2021; Kiranya et al. 2022), with many such studies concentrated on a single estuary, the Vembanad Lake (Kurup & Samuel 1987; Menon et al. 2000; Harikrishnan et al. 2011; Roshni et al. 2021). There is a considerable knowledge gap on fish diversity and distribution patterns in many estuaries of Kerala, notably in the case of smaller systems such as the Poonthura estuary, because of their isolated nature (Kiranya et al. 2018, 2022). Considering this lacuna, the present study focuses on presenting a comprehensive checklist of fish and shellfish species of the Poonthura estuary, along with their systematic position and conservation status (according to the IUCN Red List). The increasing availability of data on estuarine fish and shellfish fauna will facilitate their use in greater detail to design and implement pragmatic strategies and programs for estuarine fisheries management and conservation.
Study area
The Poonthura Estuary (0.9 km long and 0.1 km wide) is one of the most ecologically significant, and at the same time polluted, estuaries in Thiruvananthapuram, Kerala (Kiranya et al. 2022). The estuary is micro-tidal and partially mixed, with an average tidal range of 1.5 m, and is separated from the Lakshadweep Sea by a sand bar at Poonthura. The sand bar opens during the monsoon due to heavy discharge of water from the River Karamana. During heavy river discharge and land drainage in the monsoon, the sand bar between sea and estuary is either naturally or manually opened. Artificial breaching of the estuary is also a frequent practice in this area to avoid flooding of nearby human settlements (Kiranya et al. 2018). The Poonthura estuary has been undergoing severe ecological degradation, its bottom being muddy with a pungent smell due to the unmanaged disposal of municipal sewage, land drainage, and industrial effluents (Kiranya et al. 2018). Full-time, part-time, and migrant fishers from 200 families of the adjoining areas, belonging to the traditional sector, depend on this estuary both directly and indirectly for subsistence almost throughout the year (Kiranya et al. 2018).
Sampling and analysis
The present study was carried out in multiple phases from June 2016 to October 2020. Three sampling stations were fixed based on fishing activity, tidal influx, and drainage from rivers/land. Monthly samples of fish and shellfish were collected from the selected stations (Image 1). Sampling was performed during the early morning using 110 m surface and bottom-set gillnets (mesh size 30 mm) and a 4.5 m cast net (mesh size 8 mm) (one sampling each with the bottom-set gillnet, the surface gillnet, and the cast net at each sampling station), operated from a small plank-built canoe (3 m LOA). Identification of fish and shellfishes was done at the species level using published keys (Jayaram 1981; Fischer & Bianchi 1984). Identification of Channa pseudomarulius followed Britz et al. (2017). Taxonomic status and systematic position of fishes follow the Catalog of Fishes (Fricke et al. 2021) and the World Register of Marine Species database (WoRMS 2021). Vernacular and local names of fish and shellfish species were collected from the traditional fishers.
Species such as Etroplus suratensis, Oreochromis mossambicus, Gerres filamentosus, Chelon parsia, Mugil cephalus, Arius arius, and Caranx ignobilis represented the most common species of the estuarine system, with Etroplus suratensis and Oreochromis mossambicus being recorded throughout the year during the study period. The present study also revealed the occurrence of two fish species having ornamental value, the filament barb, Dawkinsia filamentosa, and the silver moony, Monodactylus argenteus.
Of the four species of shrimps/prawns recorded from the estuary, Penaeus indicus was the dominant species followed by P. monodon and Macrobrachium rosenbergii. The mud crab Scylla serrata was the only representative of crabs that was observed in the local catches.
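As a simple arithmetic cross-check, the tallies reported in the abstract can be reproduced as follows; the small rounding differences in the percentages are expected.

```python
# Sanity-checking the reported species tallies and percentages.
iucn = {"Least Concern": 59, "Vulnerable": 1, "Data Deficient": 4, "Not Evaluated": 5}
print(sum(iucn.values()))                 # 69 assessed species (71 minus 2 exotics)

total = 66 + 5                            # 66 finfish + 5 shellfish = 71
print(f"Carangiformes: {11 / total:.1%}") # ~15.5%, reported as 15.4%
print(f"Carangidae:    {9 / total:.1%}")  # ~12.7%, reported as 12.6%
```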
Several authors have studied the estuarine fish diversity of west-flowing river systems in Kerala, most of them pointing to the predominance of finfish species. Bijukumar & Sushama (2000) presented an overview of the ichthyofauna of the Ponnani estuary, representing 112 finfish species belonging to 14 orders, 53 families, and 80 genera. Kurup & Samuel (1987) recorded 150 species of fishes from Vembanad lake, while a recent study by Roshni et al. (2021) reported 90 species of fish belonging to 17 orders and 40 families, suggesting a 40% reduction in fish fauna since the 1980s. Raj et al. (2014) reported 68 species of finfishes, five species of crabs, and nine species of prawns from the Ashtamudi estuary, and stated that pearlspot and mullets supported good local fisheries. From Chettuva estuary, Johny et al. (2016) recorded 68 species of fish belonging to 45 genera, while the diversity of the nearby Azhikode estuary was known to comprise 30 finfishes (Harikrishnan et al. 2011). Fifty species under 40 genera of finfish were recorded from the Akathumuri backwaters (Satheesan et al. 2014). Regi & Bijukumar (2012) also reported the occurrence of two non-native/exotic species (Oreochromis mossambicus and Clarias gariepinus) from the Veli-Akkulam lake. According to the above authors, O. mossambicus has dominated the native fish species in many Indian water bodies due to its prolific breeding, voracious feeding habits, and hardy nature. | 2022-07-28T15:14:31.544Z | 2022-07-26T00:00:00.000 | {
"year": 2022,
"sha1": "b5978ccb66513342ae3b954dcdc4cfd823263d5c",
"oa_license": "CCBY",
"oa_url": "https://threatenedtaxa.org/JoTT/article/download/7683/8649",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4dbe5ec5154dcf4a52dc80b63b0fa90fc32bc2f7",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
236370902 | pes2o/s2orc | v3-fos-license | The influence of comorbidities on the treatment outcome in symptomatic lumbar spinal stenosis: A systematic review and meta-analysis
Background Lumbar spinal stenosis (LSS) affects mainly elderly patients. To this day, it is unclear whether comorbidities influence treatment success. The aim of this systematic review and meta-analysis was to assess the impact of comorbidities on treatment effectiveness in symptomatic LSS. Methods We conducted a systematic review and meta-analysis of prospective or retrospective studies from Medline, Embase, Cochrane Library, and CINAHL from inception to May 2020, including adult patients with LSS undergoing surgical or conservative treatment. Main outcomes were satisfaction, functional and symptoms improvement, and adverse events (AE). Proportions of outcomes within two subgroups of a comorbidity were compared with the risk ratio (RR) as summary measure. Availability of ≥3 studies for the same subgroup and outcome was required for meta-analysis. Results Across 72 publications based on 51 studies, mostly assessing surgery, there was no evidence that patients with comorbidities were less satisfied than patients without comorbidities (RR 1.06, 95% confidence interval (CI) 0.77 to 1.45, I² = 94%), but they had an increased risk for AE (RR 1.46, 95% CI 1.06 to 2.01, I² = 72%). A limited number of studies found no influence of comorbidities on functional and symptoms improvement. Older age did not affect satisfaction, symptoms and functional improvement, or AE (age >80 years: RR 1.22, 95% CI 0.98 to 1.52, I² = 60%). Diabetes was associated with more AE (RR 1.72, 95% CI 1.19 to 2.47, I² = 58%). Conclusion In patients with LSS and comorbidities (in particular diabetes), a higher risk for AE should be considered in the treatment decision. Older age alone was not associated with an increased risk for AE, less functional and symptoms improvement, or less treatment satisfaction.
To date, the evidence of the influence of comorbidities on the treatment outcome in patients with LSS undergoing surgical or non-surgical treatments has not been systematically reviewed. Therefore, the aim was to summarize the evidence of the influence of comorbidities on the treatment outcome of patients undergoing treatment for LSS.
Study design
Systematic review and meta-analysis. We followed the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [30]. The study protocol has been described previously [7].
Literature search
We systematically searched Medline (Ovid), Embase, the Cochrane Library, and CINAHL on May 2, 2020. All references from the inception of each database until the search date were considered. Search terms included MeSH terms (Medical Subject Headings) and keywords related to "lumbar spinal stenosis" and "comorbidities" (Appendix 1). We also searched bibliographies of studies, guidelines, and review articles and contacted authors of studies with insufficient details.
Eligibility criteria
Eligible were prospective or retrospective studies of adult patients with degenerative LSS undergoing surgical or conservative treatment. As subgroup analyses require a sufficient sample size to be robust, we included only studies with at least 100 patients. All studies in languages in which we had sufficient proficiency (i.e., English, French, German, Spanish, and Italian) were considered. Excluded were studies in patients aged <18 years, studies with fewer than 100 patients, and cross-sectional and case-control studies.
Study selection and data extraction
Two reviewers (AS, AB) independently screened all titles and abstracts, and reviewed all potentially relevant references in full text. Disagreement between the reviewers was discussed and resolved in consensus or by third party arbitration (MW). If there were several publications for the same study, we included publication(s) reporting findings relevant for the research question.
Data collection and data item
One reviewer (AS) extracted information, using a predefined and piloted extraction form. A second reviewer (AB) confirmed the accuracy of extracted data. All data included in the meta-analysis were confirmed by the third reviewer (MW). We extracted information on study characteristics, patients' characteristics, comorbidities and comorbidity measures, treatments, and outcomes.
Outcomes of interest
The main outcomes of interest were treatment satisfaction, functional and symptoms improvement, and adverse events. Additional outcomes included mortality. All outcome variables were extracted as reported in the original studies and operationalized.
Study quality
Two reviewers (AB, AS) independently assessed study quality using Scottish Intercollegiate Guidelines Network (SIGN) checklists for randomized controlled trials (RCTs) and cohort studies [31]. For each study, internal validity was assessed (yes/no/can't say/doesn't apply) and a global quality rating assigned according to pre-defined criteria as high, acceptable, or low (Appendix 3). Disagreements were discussed and resolved by consensus or third-party arbitration (MW).
Data synthesis and statistical analysis
We provided a descriptive synthesis of evidence by categorizing findings into strong, weak, or conflicting evidence for or against an influence of a comorbidity. We summarized continuous and categorical variables with number/percentage, mean/standard deviation or median/interquartile range. We reported regression factors with coefficients, 95% confidence intervals (CI) and p-values.
In the meta-analysis, associations of comorbidities with treatment outcomes were analyzed by restricting to subsets with the same treatment outcome for surgical or non-surgical treatment. The proportions of the two subgroups were compared with the risk ratio (RR) as summary measure. We explored potential publication bias using funnel plots. Funnel plots were exploratory, as a study could have multiple study arms, so the study dots in the funnel plot were not independent. We performed meta-analyses only in subsets with the same treatment and with specific comorbidity subgroups if at least three studies were available. We used random-effects models for pooling RRs due to expected large heterogeneity.
Studies were weighted by the standard error of their estimates, i.e., by sample size. The heterogeneity measures τ² and I² were quantified. Results in RRs were visualized in forest plots including the study-specific estimates and their 95% confidence intervals (CI). The statistical analysis was performed in the R programming language [32] using base and analysis-specific packages: Amelia, biostatUZH, dplyr, ggpubr, meta, metaviz, readxl, tableone, xtable.
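The authors performed the pooling in R with the meta package; as an illustration of the underlying computation, the following is a minimal DerSimonian-Laird random-effects sketch in Python with hypothetical study inputs.

```python
# Minimal DerSimonian-Laird random-effects pooling of risk ratios,
# analogous to the approach described above; input values are hypothetical.
import numpy as np
from scipy import stats

log_rr = np.log(np.array([1.30, 1.72, 1.15, 1.60]))   # study log risk ratios
se     = np.array([0.20, 0.18, 0.25, 0.22])           # their standard errors

w_fixed = 1 / se**2
q = np.sum(w_fixed * (log_rr - np.average(log_rr, weights=w_fixed))**2)
df = len(log_rr) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)                         # between-study variance
i2 = max(0.0, (q - df) / q) * 100                     # I^2 heterogeneity (%)

w = 1 / (se**2 + tau2)                                # random-effects weights
pooled = np.average(log_rr, weights=w)
se_pooled = np.sqrt(1 / np.sum(w))
ci = np.exp(pooled + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_pooled)
print(f"RR {np.exp(pooled):.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, "
      f"tau^2 {tau2:.3f}, I^2 {i2:.0f}%")
```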
Study selection
We screened titles and abstracts of 3244 references and read 157 potentially relevant full texts (Figure 1). In total, 72 publications based on 51 studies (the Spine Patient Outcomes Research Trial (SPORT) study was counted as two studies, with a randomized and an observational study arm) were included and analyzed. Main reasons for exclusion were insufficient sample size (n = 47), other study population/research question (n = 27), study protocol/conference proceedings (n = 7), and no language proficiency (n = 4; Chinese, Japanese, and Czech).
The quality was high in two RCTs (100%) and acceptable in the other studies (Appendix 3). No study was excluded due to a high risk of bias. The funnel plot (Appendix 4) was symmetrical on visual inspection.
Predictors for satisfaction
Older age (>80 years and >75 years) did not influence satisfaction in five studies, whereas one study showed an association of younger age with more satisfaction. Diabetes was associated with lower satisfaction in one study [39], but not in another study [40].

There was a non-significant trend toward less satisfaction after surgery in obese patients (RR 0.90, 95% CI 0.74 to 1.11), which was comparable for non-surgical treatments in one study [41]. Smoking was associated with less satisfaction in all three studies, with an overall RR of 0.86 (95% CI 0.81 to 0.90). Whereas heterogeneity was very high in studies using comorbidity measures (I² = 93.7%) and BMI (I² = 82.4%), heterogeneity was 0% for smoking.

One study assessed the influence of previous lumbar surgery and found higher satisfaction in patients without a previous lumbar operation (odds ratio (OR) 3.65, 95% CI 1.13 to 11.79) [40]. In a registry study, patients with neurologic disease and cancer were less satisfied with surgery [42]. Depression was associated with a lower satisfaction rate in one study [43] but not in another study [38]. Findings from individual studies are summarized in Appendix 7.
Predictors for functional and symptoms improvement
Only a limited number of studies assessed clinically relevant functional improvement and provided sufficient information to perform subgroup analyses (Figure 3). Patients with comorbidities seemed to have functional improvement comparable to that of patients without comorbidities (Figure 3, Table 2). Findings for symptoms improvement showed a weak association of comorbidities with less improvement (Table 2). Most studies were performed using data from the Eurospine registry [23, 33, 34]. Higher ASA scores were associated with lower improvement rates in the Core Outcome Measures Index (COMI) sum-score [23, 33] and in global outcome [34].

Older age and obesity were, in most studies, not associated with worse symptoms and functional improvement. Based on one study [39], diabetes was associated with less clinically meaningful improvement in symptoms (RR 0.76, 95% CI 0.61 to 0.96). Smoking was not associated with functional improvement (RR 0.84, 95% CI 0.62 to 1.13, I² = 0%). One study reported that patients who smoked needed more additional pain medication [44].
Less cardiovascular comorbidity was associated only with fewer symptoms at two years [27]. Other factors with conflicting findings on functional improvement, based on a few studies, were symptom duration, obesity, and rheumatologic disease (see Table 2).

Whereas patients with depression had less functional improvement in four studies of moderate quality and small sample size [27, 48-50], this contrasted with three other studies that found no evidence of depression influencing function [45, 46, 51]. In particular, the high-quality Lumbar Epidural Steroid Injections for Spinal Stenosis (LESS) trial, including 400 patients, found no evidence that baseline depression scores influenced improvement in the Roland Morris Questionnaire (RMQ) at six weeks [51]. Baseline depression scores seemed to be associated with less symptoms improvement in most studies [46, 48, 50, 52]. Although baseline fear avoidance beliefs (FAB) were not associated with functional improvement [51, 53], persistent FAB was associated with less symptoms improvement [53].
Predictors for adverse events (AE)
Overall, 13 studies reported AE (Figure 4). Comorbidities were associated with an increased risk for postoperative complications (RR 1.46, 95% CI 1.06 to 2.01, I² = 72%). Patients with comorbidities showed higher rates of overall complications, wound complications, and hospital readmissions.
There was a non-significant trend for older age to be associated with an increased risk for complications (age >80 years: RR 1.22, 95% CI 0.98 to 1.52). Diabetes was associated with an increased risk for AE (RR 1.72, 95% CI 1.19 to 2.47), mainly due to increased postoperative and in-hospital complication rates, but not with postoperative wound infections (Appendix 7). Obesity was associated with an increased risk for surgical site infections [54] and with in-hospital complications in one study [41], but not in two other studies [55, 56]. Smoking did not influence the risk for AE. Congestive heart failure was associated with increased in-hospital complications [57] and 90-day readmission rate [54]. Ischemic heart disease was associated with an increased risk for in-hospital perioperative complications [57] and surgical site infection [54]. Evidence for the influence of previous spine surgery was conflicting.
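For reference, a subgroup risk ratio with its 95% CI, the summary measure used throughout these comparisons, can be computed from 2×2 event counts with the standard log method; the counts below are hypothetical.

```python
# Risk ratio with a 95% CI from event counts in two subgroups (log method).
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# hypothetical counts: AE in diabetic vs non-diabetic patients
print(risk_ratio(30, 120, 25, 170))
```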
Discussion
This synthesis of 51 studies revealed an increased risk for adverse events (AE) in patients with comorbidities or higher comorbidity burden compared to patients without comorbidities. Comorbidities did not influence satisfaction, and improvement in function and pain after surgery. Older age alone did not affect satisfaction, symptoms and functional improvement, or the risk of AE. Diabetes was associated with a higher risk for AE and less symptoms improvement with conflicting influence on satisfaction. Other factors that may be associated with less satisfaction were smoking, previous spine surgery, neurological disease, and active cancer disease. There is some indication that patients with depressive symptoms may experience less symptoms improvement.
Discussion in context of the literature
Current disease-specific treatment guidelines, such as the North American Spine Society (NASS) guideline [58], offer only limited guidance on how comorbidities should be considered in the treatment decision. In addition to one study [39] included in the NASS guideline [58], four additional studies identified in this review confirmed an increased risk for AE in patients with diabetes compared to non-diabetic patients [57, 59-61].
In the SPORT trial, patients with diabetes had an increased rate of postoperative complications [59]. In patients undergoing surgery, diabetes did not influence functional and symptoms improvement, or satisfaction [59]. In the current systematic review, we observed less symptoms improvement in diabetic patients. One reason for this finding may be that lower extremity symptoms due to LSS may sometimes be difficult to distinguish from diabetic peripheral neuropathy. However, the overall prevalence of diabetes in the studies was low, ranging from 4 to 37%, and two studies excluded diabetic patients. Therefore, the full extent of long-term diabetes and diabetic peripheral neuropathy on symptoms improvement may be underestimated.
Further, symptoms due to undiagnosed peripheral arterial disease in patients with diabetes may also reduce the efficacy of surgery for LSS. The prevalence of diagnosed peripheral arterial disease in the studies included in the systematic review was very low (2-11%), and three studies excluded patients with the diagnosis.
We observed conflicting findings for previous spine surgery. Whereas in three studies previous spine surgery did not influence the improvement of function [24, 40, 45], three other studies observed less functional improvement [22, 46, 47]. One explanation may be that the proportion of postoperative perineural fibrosis and/or arachnoiditis varies among different study populations [62].
Guidelines for other spine surgeries (e.g., disc herniation [63]) discuss preoperative depression, older age, and longer symptom duration as risk factors for poorer outcomes. A systematic review published in 2006 [28] assessed preoperative predictors and found cardiovascular disease, depression, and higher comorbidity burden to be negative predictors for treatment outcomes after LSS surgery [28]. The conclusion was mainly based on one study [27], which was also included in our review. Despite the frequency of the disease, we identified only one additional study, which found decreased symptoms improvement in patients with coronary artery disease or heart failure [46]. Further, three studies reported an increased rate of AE in patients with cardiovascular disease [54, 57, 61]. Therefore, cardiovascular disease may be an important factor to consider in the treatment decision.
A systematic review assessed the influence of preoperative depression on treatment outcome in LSS and found a negative influence [64]. For the current review, ten additional studies with a sample size of more than 100 patients were available. Although there was some indication that depression may have a negative impact on symptoms improvement [46, 48, 50, 52], it remains a matter of debate whether preoperative depression is causal or a result of the functional limitation. Two studies observed that depressive symptoms improve with global improvement after spine surgery [65, 66], which may indicate that preoperative assessment of depression alone may not be sufficient to fully assess the influence of depression on treatment outcome.
Strengths and limitations
Although we used rigorous and standardized methods to identify all relevant studies, there are several limitations that need to be discussed. Despite the considerable number of studies available for the analysis, only data from 37 studies could be used for the meta-analysis. The findings of the meta-analysis are therefore of an exploratory nature, and additional studies should provide high-quality evidence to support or refute the findings.

Further, the reporting of comorbidities and outcomes was very heterogeneous and not comparable between studies. Although we aimed to analyze the influence of comorbidities on non-surgical and surgical treatments, only a limited number of studies on non-surgical treatments were available. Therefore, the influence of comorbidities on non-surgical treatments remains unclear.
Finally, comorbidities may influence treatment outcome depending on the surgical technique used. Due to the limited number of studies that assessed comorbidities and the limited information on the surgical techniques (e.g. open surgery vs. minimal-invasive surgery) that were used, we were unable to address this aspect.
Implications for research
Future studies should report comorbidities of patients in a standardized fashion. In addition, the influence of diabetic peripheral neuropathy and peripheral arterial disease on the treatment outcome in patients undergoing surgery for symptomatic LSS should be assessed. Further, the influence of comorbidities should be assessed for different surgical techniques (e.g. obesity may influence open surgery but not minimally invasive approaches). To assess the impact of comorbidities on treatment outcome, studies need to have sufficient power to assess the treatment effect in subgroups. Further, study outcome assessments should be standardized and comparable. Future studies should assess whether systematic management or improvement of comorbidities preoperatively may influence potential negative factors.
Implications for clinical practice
There was no evidence that age alone influences surgical outcomes for symptomatic LSS. In clinical practice, modifiable prognostic factors that may result in worse treatment outcomes when untreated should be identified and considered. Relevant and potentially modifiable factors identified in this systematic review include diabetes, cardiovascular disease, and smoking. Further, depression and psychological factors may, if they persist, negatively influence treatment outcome [53] .
Conclusion
In patients with LSS and comorbidities (particularly diabetes), a higher risk for AE should be considered in the treatment decision. Older age alone did not expose patients to an increased risk for AE. Elderly patients undergoing surgery for LSS were equally likely to experience functional and symptoms improvement and to be satisfied.
FDA device/drug status
Not applicable.
Funding
None.
Author's disclosures
AB: Nothing to disclose. AS: Nothing to disclose. MW: Nothing to disclose. UH: Nothing to disclose. LH: Nothing to disclose. ERB: Nothing to disclose. JS: Nothing to disclose. FB: Nothing to disclose.
Affirmation of authorship
All authors had access to the data and a role in writing this manuscript. MW, AB, AS designed the study. AS and AB performed the independent literature screening, data extraction and quality assessment. MW, AB, AS, LH, UH analyzed the data. The first draft of the article was written by MW, AB, AS and revised by ERB, FB, JS, LH, and UH. All authors approved the final version of the article.
Declarations of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Appendix 3. Quality Assessed with the Scottish Intercollegiate Guidelines Network (SIGN) Methodology Checklist
Assessment criteria: High quality (++): yes in ≥50% of items and <1 item rated "no". Acceptable quality (+): yes in <50% of items and ≤50% of items rated "no". Retrospective and single cohort studies were assigned acceptable (+) quality due to their weaker study design. Low quality (-): no in >50% of items, or reviewer concerns about a high risk of bias (Tables A3 and A4).

Selected SIGN checklist items:
- The only difference between groups is the treatment under investigation.
- 1.7 All relevant outcomes are measured in a standard, valid and reliable way.
- 1.8 What percentage of the individuals or clusters recruited into each treatment arm of the study dropped out before the study was completed?
- 1.9 All the subjects are analyzed in the groups to which they were randomly allocated (often referred to as intention-to-treat analysis).
- 1.10 Where the study is carried out at more than one site, results are comparable for all sites.
- 2.1 How well was the study done to minimize bias?
- 2.2 Taking into account clinical considerations, your evaluation of the methodology used, and the statistical power of the study, are you certain that the overall effect is due to the study intervention?
- 2.3 Are the results of this study directly applicable to the patient group targeted in this guideline?
Appendix 7. Predictors of outcomes
Tables A5-A8.
- Logistic regression (log.), higher American Society of Anesthesiologists (ASA, range 1-6) class and number of comorbidities: no association with less satisfaction (very/somewhat dissatisfied, 4-point scale) at a mean of 3.5 years. 44.1 [99]

Previous spine surgery
- Any previous lumbar surgery associated with less functional improvement at 1 year: regression analysis log OR -0.99 (95% CI -1.95 to -0.02), SSM function. 20.2 [46]
- Log. regression, any previous back surgery: no association with less improvement in disability (ODI) at 1 year. 23 [24]
- Any previous surgery associated with more disability (ODI) at 1 year: adj. beta 6.41 (95% CI 5.32 to 7.61). 32 [47]
- Univariate, any previous lumbar surgery: no association with less improvement in disability (ODI) at a mean of 5.1 years. 37 [45]
- No previous surgery associated with good outcome (ODI < 40) at a mean of 4.3 years: log. regression OR 2.4 (95% CI not reported). 22 [22]
- Multivariate, no previous lumbar operation: no association with good improvement in ODI (>30% improvement) at 2 years. 21.1 [40]

Diabetes
- Patients with diabetes: more disability compared to patients without diabetes in the non-surgical group at 4 years: diabetes mean -2.6 (SD 3.5), no diabetes mean -10.2 (SD 1.4), p = 0.044. Diabetes vs. no diabetes, surgical group: no association with less improvement in disability (ODI) at 4 years. 2.3 [59]
- No association between diabetes on insulin and less improvement in disability (RMQ) at 6 weeks. 1.1 [51]
- Univariate: no association between diabetes and improvement in disability (ODI) at a mean of 5.1 years. 37 [45]
- Univariate: no association between diabetes and good improvement in disability (>30% ODI improvement) at 2 years. [40]

Smoking
- Log. regression: no association with less improvement in disability (ODI) at 1 year. 23 [24]
- No association with less functional improvement (SSM function) at 1 year. 20.2 [46]
- No association with less improvement in disability (RMQ) at 6 weeks.

Cardiovascular disease
- Less cardiovascular comorbidity associated with better walking capacity at 2 years: adj. beta 2.7 (p = 0.008) for "able to walk 1 mile". 4.1 [27]
- Coronary heart disease or heart insufficiency: no association with less functional improvement (SSM function). [46]
- Univariate, less musculoskeletal comorbidity: no association with greater walking capacity ("able to walk 1 mile") at 2 years. 4.1 [27]

# Bad functional score, based on global/lumbar/radicular pain and signs of radicular ischemia (range 0-100, 0 = very bad function, 100 = very good function). § Elixhauser comorbidity score (0-30), a method for measuring patient comorbidity based on diagnosis codes in administrative data (includes mental disorders, drug and alcohol abuse, obesity, coagulopathy).

Age
- Univariate: no association with less pain (VAS) improvement at a mean of 5.1 years. 37 [45]
- Univariate: no association with improvement of pain (VAS) at a mean of 2.5 years. 27 [90]
- Univariate: no association with less symptom severity ("no severe pain") at 2 years. 4.1 [27]
- Multivariate, age (per 1 year higher): no association with less EQ-5D improvement (any or ≥0.1 points improvement). 7 [52]
- No association with poor outcome for back/leg pain (improvement in VAS (0-100) < 25%). 5 [75]
- Log. regression: no association with leg pain/numbness and gait disturbance (Japanese Orthopedic Association (JOA, 0-17) score 0-1) at 2 years. 41 [96]
- Log. regression, older age: no association with less EQ-5D total score/EQ-VAS improvement at 1 year. 23 [24]
- In all age groups, significant back pain improvement (graphic rating scale (GRS) ≥2 points) at a mean of 1.
- Log. regression: no association between any back surgery and less EQ-5D total score/EQ-VAS improvement at 1 year. [24]

Symptom duration
- Symptom duration ≥6 months: no association with less improvement in symptom severity (SSM symptoms) at 1 year. 20.2 [46]
- No association of symptom duration with poor outcome for back and leg pain (VAS (0-100) improvement ≤25%) at a mean of 3.5 years. 5 [75]
- Log. regression, longer duration of back and leg pain: no association with less EQ-5D total score/EQ-VAS improvement at 1 year. 23 [24]
- Multivariate, duration of pain <1 year (reference) vs. >1 year: not associated with less improvement in VAS back/leg pain at 2 years. | 2021-07-27T00:04:54.104Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "7c5653764dcd595bbb58eedaab50955d5e101996",
"oa_license": "CCBYNCND",
"oa_url": "http://www.nassopenaccess.org/article/S266654842100024X/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7642836c190de8090c4c82dc4069574a0e4f76a2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
201041209 | pes2o/s2orc | v3-fos-license | Cerebral atrophy as outcome measure in short-term phase 2 clinical trials in multiple sclerosis
Introduction Cerebral atrophy is a compound measure of the neurodegenerative component of multiple sclerosis (MS) and a conceivable outcome measure for clinical trials monitoring the effect of neuroprotective agents. In this study, we evaluate the rate of cerebral atrophy over a 6-month period, investigate the predictive and explanatory value of other magnetic resonance imaging (MRI) measures in relation to cerebral atrophy, and determine sample sizes for future short-term clinical trials using cerebral atrophy as primary outcome measure. Methods One hundred thirty-five relapsing–remitting multiple sclerosis patients underwent six monthly MRI scans, from which the percentage brain volume change (PBVC) and the number and volume of gadolinium (Gd)-enhancing lesions, T2 lesions, and persistent black holes (PBH) were determined. By means of multiple linear regression analysis, the relationship between focal MRI variables and PBVC was assessed. Sample size calculations were performed for all patients and for subgroups selected for enhancement or a high T2 lesion load at baseline. Results Significant atrophy occurred over 6 months (PBVC = −0.33%, SE = 0.061, p < 0.0001). The number of baseline T2 lesions (p = 0.024), the on-study Gd-enhancing lesion volume (p = 0.044), and the number of on-study PBHs (p = 0.003) were associated with an increased rate of atrophy. For a 50% decrease in the rate of atrophy, the sample size calculations showed that approximately 283 patients per arm are required in an unselected population and 185 patients per arm in a selected population. Conclusion Within a 6-month period, significant atrophy can be detected, and the on-study association of PBVC with PBHs emphasizes axonal loss as a driving mechanism. Application as primary outcome measure in short-term clinical trials with feasible sample sizes requires a potent drug to obtain sufficient power.
Introduction
Brain tissue loss is a prominent feature in the pathology of multiple sclerosis (MS) and occurs at a significantly higher rate in patients with MS compared to the normal aging brain [1,2]. In greater part, this phenomenon has been attributed to neuronal and axonal loss in both lesions [3] and normalappearing brain tissue [4,5] and, therefore, cerebral atrophy is recognized as a global marker of the neurodegenerative components of MS [6]. Magnetic resonance imaging (MRI) detects the rate of cerebral atrophy in vivo in a sensitive and reproducible manner which, together with the substantial correlation with later clinical disability [7], makes cerebral atrophy a conceivable outcome measure for clinical trials measuring the efficacy of neuroprotective agents.
Previous studies in untreated patients with relapsing-remitting multiple sclerosis (RRMS) revealed fairly stable annualized brain volume decreases of approximately 0.6% to 1.35%, determined over moderate (1 year) to long (3 years) periods of follow-up [6]. Fewer studies, however, assessed the rate of cerebral atrophy over shorter periods of follow-up, and did so with various measures of atrophy. In a 3-month study of 138 RRMS patients, significant atrophy was detected using the brain parenchymal fraction as atrophy measure [8], whereas in a similar study of 30 RRMS patients, no significant decrease was found [9]. In another study, patients in the placebo arm of a clinical trial with a follow-up duration of 9 months showed a significant decrease in brain volume measured by assessment of seven contiguous brain slices [10].
Although these studies indicate that cerebral atrophy is detectable in the shorter term, little is known about the statistical power and required sample size for detecting significant treatment effects in short-term clinical trials in MS using cerebral atrophy as primary outcome measure. In a recent paper, sample sizes for various MRI brain atrophy measures in RRMS patients were estimated for longer periods of follow-up (1-3 years) and showed that the so-called SIENA technique, an automated MRI atrophy measurement, yielded the most promising results [11].
Following these findings, in the present study, we aim to assess the feasibility of using SIENA-based cerebral atrophy as primary outcome measure in short-term phase 2 clinical trials in MS. First, we evaluate the rate of atrophy with SIENA in a large cohort of RRMS patients without effective treatment over a 6-month period. Then, the predictive and explanatory value of MRI outcome measures in relation to cerebral atrophy is assessed and, lastly, power calculations based on the detected rate of atrophy are performed to determine the number of patients required in short-term placebo-controlled clinical trials using the rate of cerebral atrophy as primary outcome measure.
Patients
Our analyses were performed with data derived from the oral interferon beta-1a (IFNB-1a) study [12]. In this study, 173 patients with active RRMS received various doses of IFNB-1a or placebo orally every other day for 6 months. No clinical effect of any dose could be observed (the median expanded disability status scale [EDSS] score was 2.0 in all treatment groups at screening and at the end of the study, and approximately two thirds of patients in each group remained relapse-free), nor any MRI effect (median cumulative numbers of newly active lesions over 6 months were 4.0 in the placebo and 0.6 MIU groups, compared with 7.5 and 9.0 in the 0.06 and 6 MIU groups; no significant differences). Together with low neopterin levels in a subgroup of 21 patients and the absence of neutralizing antibodies in a subgroup of 24 patients, oral IFNB-1a was assumed to be biologically inactive, and the cohort is regarded as representative of the placebo arm of a randomized trial.
MRI acquisition and analysis
MRI of the brain was performed at baseline and at six subsequent monthly scans. Each scan included a dual-echo, T2-weighted, spin-echo or turbo/fast spin-echo sequence (TR/TE of 2,000-3,000/20-40 and 60-100 ms) and a T1-weighted spin-echo sequence (TR/TE of 400-700/5-25 ms), both after administration of 0.1 mmol/kg gadolinium (Gd)-DTPA intravenously, with a field of view of 25 cm and a 256×256 matrix resulting in a pixel size of roughly 1×1 mm. Images were acquired in 2×23 interleaved sections with a 3-mm thickness and a 3-mm gap, in accordance with published guidelines for the use of MRI in clinical trials [13]. In addition to conventional T1- and T2-weighted MRI measures, the baseline normalized brain volume (NBV) and the percentage brain volume change (PBVC) over 6 months were assessed using the automated segmentation-based techniques SIENAX and SIENA, respectively [14].
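As an illustration only: SIENA and SIENAX are distributed with FSL and are typically invoked on the command line. The sketch below wraps such calls in Python, but the exact option set depends on the installed FSL version and should be checked against its documentation.

```python
# Hedged sketch of invoking FSL's SIENAX (cross-sectional NBV) and SIENA
# (longitudinal PBVC); option names may differ across FSL versions.
import subprocess

subprocess.run(["sienax", "t1_baseline.nii.gz", "-o", "sienax_out"], check=True)
subprocess.run(["siena", "t1_baseline.nii.gz", "t1_month6.nii.gz",
                "-o", "siena_out"], check=True)
# SIENA writes its report (including the estimated PBVC) into the output
# directory, e.g. siena_out/report.siena in recent FSL releases.
```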
Statistical analysis
The primary outcome measure was the atrophy rate over 6 months, which was normally distributed. Comparisons of demographic and MRI characteristics between included and excluded patients were assessed by independent-samples t tests and the Mann-Whitney U test for continuous variables and the nonparametric binomial test for proportions. Correlations between variables were assessed with Pearson's R. By means of multiple linear regression analysis (SPSS version 13.0; SPSS Inc., Chicago, IL, USA), the predictive and explanatory value of baseline and on-study clinical and MRI variables for the PBVC over 6 months was determined. Independent variables included baseline NBV, presence of a Gd-enhancing lesion at the baseline scan, baseline number and volume of Gd-enhancing lesions, baseline number and volume of T2 lesions, on-study number and volume of Gd-enhancing lesions, on-study number and volume of T2 lesions, and on-study number of persistent black holes (PBH). A PBH was defined as a new enhancing lesion or new T2 lesion (hyperintense on PD/T2, non-enhancing on T1) that appeared at month 1, 2, or 3 and became a black hole at month 4, 5, or 6, respectively [15]. Linearity in relation to PBVC was checked for all variables, and a natural log transformation was applied if a nonlinear relationship was found (to account for zero lesions, 1 was added prior to transformation). All effects were corrected for age, disease duration, and sex, and statistical testing was performed with a two-sided test level of 5% with an additional Bonferroni correction for multiple testing.
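A minimal sketch of such a regression in Python with statsmodels, assuming hypothetical column names; the original analysis was run in SPSS.

```python
# Illustrative regression of PBVC on a log-transformed lesion count with
# covariate adjustment, mirroring the analysis described above
# (column names and data file are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mri_data.csv")
df["log_pbh"] = np.log1p(df["n_pbh"])  # log(x + 1) to handle zero lesions

fit = smf.ols("pbvc ~ log_pbh + age + sex + disease_duration", data=df).fit()
print(fit.summary())                   # coefficients, 95% CIs, p-values
```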
Sample size calculations
Sample size estimates were based on the standard formula, assuming the rate of cerebral atrophy to be normally distributed:

$$ n = \frac{2\sigma^{2}\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}}{\left(\mu_{1} - \mu_{2}\right)^{2}} $$

with σ as the standard deviation and μ1 and μ2 as the mean brain volume atrophy in the placebo and treatment groups, respectively. Sample sizes were determined for a trial duration of 6 months to detect treatment effects of a 50% to 90% reduction in atrophy rate at 80% and 90% power, with and without taking into account a 5% dropout rate, and with a two-tailed significance level of 5%. Since atrophy rates of healthy controls were unavailable, we assumed a 100% treatment effect to correspond to zero brain volume loss. Treatment effects were assumed to be immediate and constant. To assess the impact of patient selection at baseline on the required sample size, subgroup analyses were performed for patients selected for the presence of an enhancing lesion at baseline and for patients selected for a high T2 lesion load (greater than the median) at baseline.
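A direct implementation of this formula is shown below. With the observed σ = 0.70 and a 50% reduction of the 6-month atrophy rate of −0.33% (i.e., Δ = 0.165), it yields approximately 283 patients per arm at 80% power, which appears to correspond to the unselected-population figure quoted in the abstract (before the 5% dropout adjustment).

```python
# Per-arm sample size for a normally distributed outcome (two-sided alpha).
from math import ceil
from scipy.stats import norm

def n_per_arm(sigma, delta, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * sigma**2 * z**2 / delta**2)

print(n_per_arm(sigma=0.70, delta=0.50 * 0.33))  # -> 283
```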
Results
Baseline demographics and MRI characteristics of the included and excluded patients are given in Table 1, and atrophy results for the whole group and for the baseline-selected subgroups are given in Table 2. For the 6-month follow-up period, a significant loss of brain volume was detected when comparing the baseline scan with the month 6 scan (PBVC = −0.33%, SD = 0.70, p < 0.0001). Patients selected for the presence of an enhancing lesion at baseline or for a high T2 lesion load at baseline tended to have more pronounced atrophy (PBVC = −0.39%, SD = 0.68 and PBVC = −0.42%, SD = 0.73, respectively) compared with patients without an enhancing lesion at baseline or with a low T2 lesion load at baseline (PBVC = −0.25%, SD = 0.75 and PBVC = −0.25%, SD = 0.69, respectively).
The estimated sample sizes are shown in Table 4. Compared to the estimates based on patients unselected at
Discussion
As a compound measure of the overall destruction, preservation, and repair of brain tissue in MS patients, cerebral atrophy encompasses both neuroaxonal loss as well as the processes of demyelination, remyelination, gliosis, and edema. The present study shows that the rate of cerebral atrophy can be detected within a 6-month period and, when applied as primary outcome measure in shortterm clinical trials with feasible sample size, requires a potent drug to obtain sufficient power. Compared to sample size estimates for trials using atrophy measured with SIENA over longer periods of follow-up, the present sample sizes prove to be larger. To detect a 50% decrease in atrophy rate, trials for RRMS patients at 90% power showed approximately 69 patients per arm to be required over a 1-year follow-up period and 40 patients per arm over a 3-year follow-up period [11], whereas for trials for secondary progressive multiple sclerosis with 1-year follow-up, 56 patients per arm are required [16]. The lower sample sizes in these studies are explained by the larger atrophy rates due to larger trial durations and the accompanying larger detectable effect sizes. Also, detection of atrophy over short intervals may be prone to increased measurement errors leading to a greater variability of the atrophy measure, thereby requiring larger subject numbers than would be expected in a longerinterval study. Since SIENA proved to accomplish larger statistical power due to greater measurement precision compared to other measures of atrophy [11], the current sample size estimates are likely the optimal achievable numbers for trials of short duration.
When interpreting the current sample sizes, some considerations should be taken into account. First, the calculations assumed a 100% treatment effect to resemble zero loss of brain volume, whereas healthy controls are known to experience a small amount of brain volume loss. For comparison, a previous study showed a PBVC of 0.11% (0.30) within healthy controls for a follow-up duration of 1 year [11]. When taken into account, a larger sample size will be required to detect a similar treatment effect. Second, the treatment effects are assumed to be effective from onset and constant over time. When this assumption is not met and a compound takes time to become maximally effective, the detected effect sizes will decrease and subsequently increase the required sample size. Wallerian degeneration initiated by axonal damage prior to treatment, for example, might result in a delay of the true effect caused by the already ongoing atrophic processes at initiation of the drug. Another important consideration is the confounding effect of other factors influencing brain volume such as demyelination, remyelination, gliosis, and inflammation. In particular, the resolution of edema and inflammation induced by anti-inflammatory agents, a process known as "pseudoatrophy" [6], can cause loss of brain volume and cloud the measurement of true tissue loss. In order to measure a true neuroprotective effect, especially within a shorter period of time, a future trial might assess the neuroprotective effect of an experimental treatment as an add-on therapy to immunomodulatory-treated patients in both arms of the trial. Such a design, however, will likely result in higher sample sizes because of decreased rates of cerebral atrophy in the groups compared.
To partly overcome the aforementioned limitations, a more effective trial design would be to perform a run-in period of, e.g., 3 months in which the neuroprotective compound is administered and subsequently perform the short-interval atrophy assessment, thereby providing the opportunity for the applied compound to become maximally effective and confounding processes initiated prior to the trial to wane off.
An advantage of our study is that the present calculations are applicable in multicenter trials since the underlying data were obtained in multiple centers, with the accompanying variability caused by varying scanners and analyses. Also, we found a moderate but highly significant PBVC of −0.33% in a group of untreated RRMS patients within a period of 6 months. When annualized, this rate is well within the range of previous results (0.6-1.35%/year [6]) which enhances the generalizability of the sample size estimations since the calculations have not been biased by an atrophy rate at the higher end of the range.
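The annualization mentioned above amounts to compounding two half-year periods; at this magnitude, simple doubling gives essentially the same figure.

```python
# Annualizing the observed 6-month PBVC of -0.33% by compounding.
pbvc_6mo = -0.33 / 100
annualized = (1 + pbvc_6mo) ** 2 - 1
print(f"{annualized:.2%}")  # about -0.66% per year, within the 0.6-1.35%/year range
```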
A possible gain in statistical power for shorter-termed trials using cerebral atrophy as primary outcome measure might be achieved by adding multiple scanning time points to a trial. A recent study showed that, by placing additional scans towards the start and end of the trial, reductions in total variance and hence reductions in trial size of 41% could be achieved in patients with Alzheimer disease, using the brain boundary shift integral method for determining the rate of cerebral atrophy [17]. In particular, due to the within-subject variance contributing more to the overall variance at short intervals, acquiring multiple scans has more impact in shorter studies. Although relatively smaller gains in power can be expected from adding time points for more precise measures such as SIENA, the effect on the required sample size should definitely be explored in future multi-time point MS atrophy data.
The on-study association of the rate of atrophy and the number of PBHs suggest axonal loss to be one of the driving mechanisms of brain volume loss in MS patients. Previous studies showed the on-study volume of black holes to be closely related to supratentorial brain volume [18] and the baseline volume of black holes to be significantly correlated with subsequent development of atrophy [10,19]. In contrast, the present study was not able to show a significant relationship between both Gd enhancement activity and T2 lesion load and changes in brain volume as shown in previous studies [8,10,19,20]. These findings, together with the current results, show that the associations between focal tissue changes and atrophy are moderate at best over a relatively long period of followup and weaken within shorter amounts of time, most probably due to the noise introduced by the increased variability in the measurements and that focal tissue changes in MS only partly explain atrophy development. This also emphasizes the added value of cerebral atrophy as a measure of the overall destruction of neuronal tissue, encompassing not only measurable focal destruction but also unaccounted diffuse destruction.
The detectability of atrophy in the short term has been attributed to the degree of disease activity of the patients within a cohort. Hardmeier et al. [8], who found a significant atrophy rate within 3 months, stated that their finding most likely reflected the natural history of a very active group of RRMS patients with well-established disease, which could explain the absence of atrophy in comparably short-termed studies [9,21]. The current study population was selected partly on MRI-based criteria at baseline and can be regarded as an active cohort of MS patients, as shown by the increase in T2 lesion load and Gd-enhancing lesion number and volume over the study period and by the high proportion of active patients. The influence of disease activity on the rate of atrophy is also reflected when the subgroups based on the applied baseline selection criteria are compared (Table 2). Although a more active cohort might reduce generalizability compared to a more randomly sampled cohort of patients, and selection criteria make it more difficult to recruit subjects, the trade-off is the larger sample size required when using unselected patients.
In conclusion, our findings suggest that the rate of cerebral atrophy is a detectable outcome measure in short-term clinical trials in RRMS and provides adequate study power when a potent drug is applied. | 2014-10-01T00:00:00.000Z | 2010-01-05T00:00:00.000 | {
"year": 2010,
"sha1": "4a2bd9fc43091a1f4cfcab3a5ffe50375378c2b6",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00234-009-0645-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4a2bd9fc43091a1f4cfcab3a5ffe50375378c2b6",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260184374 | pes2o/s2orc | v3-fos-license | Leveraging PSO algorithms to achieve optimal stand-alone microgrid performance with a focus on battery lifetime
ABSTRACT
INTRODUCTION
Microgrids are systems that utilize renewable energy sources such as photovoltaic (PV) panels [1], wind turbines [2], and diesel generators (DG) [3], along with batteries for energy storage. The batteries in microgrids serve as backup power sources when renewable energy sources are unable to meet the energy demand [4], [5]. Ensuring the efficiency [6]-[14], stability [15], safety [16], and reliability [17] of the energy storage system is vital in microgrids.
Optimizing battery performance can be challenging due to the limited lifespan and high cost of these systems [18]-[20]. An energy management system can be used to control energy optimization in microgrids [21]-[23]. This research uses a modified IEEE 30-bus system as the model for optimization, taking into consideration battery lifespan cost, maintenance cost, and fuel cost in order to determine optimal operating parameters. The aim of this study is to compare battery lifespans under an energy management system in microgrids that accounts for these costs.
In the past, there have been difficulties in optimizing battery performance in microgrid systems. Conventional battery management methods, such as maximizing the state of charge (SOC) or controlling charging and discharging patterns, fail to consider the limited lifespan of batteries and often result in reduced battery lifetime. The power output of a PV system can be calculated using the output power rate under standard test conditions, data from the system's datasheet, and information on temperature and irradiance, as shown in (1).
The power output of a PV system is calculated from the maximum module output power under standard test conditions (PSTC); the actual irradiance received by the system, indicated by the variable GC; the reference irradiance of 1000 W/m², indicated by the variable GSTC; a temperature coefficient (k) for the module; the cell temperature in degrees Celsius; and a standard-test-condition temperature (TSTC) of 25 degrees Celsius [24]-[26]. The output of a wind turbine can be modeled using an equation or set of equations that take into account various factors such as the wind speed, the size and orientation of the turbine blades, and the efficiency of the turbine itself. By inputting these variables, it is possible to predict the amount of electricity that a wind turbine will be able to generate under certain conditions, as shown in (2).
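Equation (1) itself was lost in extraction; before turning to the wind model, the following minimal sketch implements the standard PV output formula that matches the variables just described (a reconstruction, not necessarily the authors' exact formulation).

```python
# Standard PV output model consistent with the described variables; the example
# module rating and temperature coefficient are illustrative assumptions.
def pv_output(p_stc, g_c, t_c, k, g_stc=1000.0, t_stc=25.0):
    """PV output power (W) from irradiance g_c (W/m^2) and cell temperature t_c (C)."""
    return p_stc * (g_c / g_stc) * (1.0 + k * (t_c - t_stc))

# Example: a 300 W module with k = -0.004 /C at 800 W/m^2 and 40 C:
print(pv_output(300.0, 800.0, 40.0, -0.004))   # -> 225.6 W
```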
The output of the wind turbine, in watts, is calculated from the cut-in speed, cut-out speed, rated speed, and actual wind speed [27]-[29]. The output power of the DG can be modeled linearly based on its actual output power, as shown in (3). To predict the performance of a generator, we can use a cost function that is derived from a test heat run and the values of a, b, and c from the generator's datasheet. By inputting these variables, we can estimate how much the generator will cost to operate under different conditions.
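Since equations (2) and (3) were also lost in extraction, the sketch below gives one common piecewise wind-power model and the standard quadratic fuel-cost form implied by the coefficients a, b, and c; both are assumptions about conventional formulations, not the authors' exact equations.

```python
# Piecewise wind-turbine power curve and quadratic DG fuel-cost curve (both are
# conventional forms used for illustration; parameter values are assumptions).
def wind_output(v, v_ci, v_r, v_co, p_rated):
    if v < v_ci or v >= v_co:
        return 0.0                                   # below cut-in or above cut-out
    if v < v_r:
        return p_rated * (v - v_ci) / (v_r - v_ci)   # ramp up to rated power
    return p_rated                                   # rated power until cut-out

def dg_cost(p, a, b, c):
    """Hourly fuel cost for a DG producing p kW, with datasheet coefficients."""
    return a + b * p + c * p ** 2

print(wind_output(9.0, v_ci=3.0, v_r=12.0, v_co=25.0, p_rated=2000.0))  # ~1333 W
print(dg_cost(100.0, a=50.0, b=2.0, c=0.01))                            # 350.0
```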
The lifetime of a battery or group of batteries (also known as a battery bank) can be extended through a charge-management strategy based on the SOC. This is typically achieved by following a specific equation or set of guidelines, as shown in (4)-(6).
The SOC at a given time, SOC(t + Δt), can be determined from the value of the SOC at the previous time, SOC(t), and the battery power during the time interval Δt. The minimum, average, and maximum values of the SOC, represented by SOCmin, SOCmean, and SOCmax, respectively, are used as indicators for the discharge of the battery and as limits on the battery's charge [30], [31]. The power output of the batteries should be balanced with the load in order to maintain a stable microgrid system [32], [33]. The demand for load must also be balanced in accordance with the rules of the microgrid system, as expressed in (7).
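As a minimal sketch of the SOC bookkeeping behind (4)-(6), assuming a sign convention where positive battery power means discharge and an illustrative battery capacity:

```python
# One SOC time step with the min/max limits enforced; capacity and sign
# convention are assumptions for illustration.
def soc_step(soc, p_batt_w, dt_h, e_cap_wh, soc_min=0.2, soc_max=1.0):
    soc_next = soc - p_batt_w * dt_h / e_cap_wh   # discharge lowers the SOC
    return min(max(soc_next, soc_min), soc_max)   # enforce the SOC limits

print(soc_step(soc=0.8, p_batt_w=500.0, dt_h=1.0, e_cap_wh=10_000.0))  # -> 0.75
```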
RESULTS AND DISCUSSION
The particle swarm optimization (PSO) algorithm is a computational method used to find the best solution to a problem with multiple objectives. In this algorithm, each "particle" represents a candidate solution, and the particles interact with each other to find the optimum. The PSO algorithm optimizes the fitness value of a problem, which represents the best solution or position. This method can be applied to economic dispatch problems, as shown in (8).
In the PSO algorithm, the power output of generator i for particle j is one of the decision variables. To solve the optimization problem, random values for the generator outputs must be drawn within the constraints of the problem. The relative importance of different factors may vary, so random weights are assigned to each aspect of the problem. The velocity of the particles, which determines their movement within the optimization process, is then calculated using the formula in (9). The power output from this random initialization is used to begin a power flow calculation. After the power flow calculation is completed, the power generation at the designated reference bus (also known as the slack bus) is checked to ensure that it falls within certain constraints. If the generation is outside these constraints, it is adjusted by adding or subtracting generation until it falls within limits. The weighted sum method is used to evaluate the overall performance of the system, taking into account various objective functions such as speed calculation, generating cost, battery-life-loss cost, and the fitness value of the multi-objective problem. In the weighted sum equation, shown in (10), each objective function is given a weight.
The results obtained from this process are used to determine the best combination of generating power and the global best fitness value. The speed and position of the particles are updated based on the best particle. During each update of the speed and particles, the load flow is run, and the slack bus is checked to ensure that it remains within the constraints. The fitness value, best particle, and global best are updated and the update step is repeated until the maximum iteration value is reached. In this simulation, the initial parameters used include a particle count of 30, a weight of 0.4, a maximum iteration value of 100, and values of c1 and c2 equal to 2.
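The update loop just described can be sketched as follows, using the stated parameters (30 particles, inertia weight 0.4, c1 = c2 = 2, 100 iterations). The objective here is a placeholder sphere function; the paper's actual fitness is the weighted sum of generation cost and battery-life-loss cost evaluated after a power-flow run, which is not reproduced.

```python
# Generic PSO loop with the paper's stated parameters; the objective and the
# box constraints are stand-ins for the real power-flow-based fitness.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):                      # placeholder objective (sphere function)
    return np.sum(x ** 2, axis=1)

n_particles, dim, iters = 30, 4, 100
w, c1, c2 = 0.4, 2.0, 2.0
lo, hi = -10.0, 10.0                 # assumed limits on generator outputs

x = rng.uniform(lo, hi, (n_particles, dim))     # positions (candidate dispatches)
v = np.zeros_like(x)                            # velocities
pbest, pbest_f = x.copy(), fitness(x)           # per-particle bests
gbest = pbest[np.argmin(pbest_f)]               # global best

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)                  # keep outputs within limits
    f = fitness(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print(gbest, pbest_f.min())
```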
In this study, the optimal operation of a standalone microgrid is evaluated through three case studies. In the first case, the optimal operation occurs when the battery is fully charged (SOC = 1). In the second case, the optimal operation occurs when the battery is fully discharged (SOC = 0). In all three cases, the battery serves as both a storage device and a backup power source when the renewable energy generator is unable to meet the load demand. In the first two cases, the generator acts as a backup energy source. Figure 2 shows the load and renewable energy in the system, Figure 3 shows the optimization of the islanded microgrid using renewable energy, a battery, and a diesel engine when the battery is fully charged, and Figure 4 shows the microgrid system when the battery is empty and the diesel engine is used to charge the battery in order to maximize its performance and lifespan. Table 2 compares the simulation results obtained from the multi-objective function in the two case studies. The results show a significant difference between the two cases, with a higher generation cost when attempting to minimize battery losses starting from an empty battery, because the diesel engine is used more frequently in this scenario. The comparison shows that it is more cost-effective to optimize the generation cost, as the difference in generation cost is significant and the battery can still be kept fully charged.
CONCLUSION
In this research, the PSO algorithm was used to optimize the operation of an islanded microgrid system that employs renewable energy sources, batteries, and diesel generators. The optimization aimed to find the optimal solution for the multi-objective problem that considers both battery life loss cost and generation cost. The simulation results showed that prioritizing the objective of minimizing generation cost resulted in a 0.58% decrease in battery lifetime and a generation cost of Rp 5,271,523 ($338.64 in USD). On the other hand, optimizing for battery lifetime resulted in a 0.42% decrease in battery lifetime and a generation cost of Rp 13,064,979 ($839.30 in USD).
A potential direction for future work on this integrated PV, battery, and DG energy system optimized with the PSO method in an off-grid setup is to carry out further optimization studies. These can compare the results obtained from the PSO method with other optimization methods, such as the genetic algorithm (GA) or ant colony optimization (ACO). This would provide a deeper understanding of the potential of different optimization methods for the integrated energy system, leading to improved system efficiency and reliability. The results of these studies can also be used to guide future implementations of integrated energy systems. | 2023-07-27T15:21:02.099Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "9c058bbcd0211b019a68657f28a06f7906080f76",
"oa_license": "CCBYSA",
"oa_url": "https://ijape.iaescore.com/index.php/IJAPE/article/download/20568/13084",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "9ce3c7dd48e91829eeac5969a5684bf20ad18074",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
59540706 | pes2o/s2orc | v3-fos-license | Cell therapy for ARDS: efficacy of endobronchial versus intravenous administration and biodistribution of MAPCs in a large animal model
Introduction Bone marrow-derived multipotent adult progenitor cells (MAPCs) are adult allogeneic adherent stem cells currently investigated clinically for use in acute respiratory distress syndrome (ARDS). To date, there is no agreement on the best method of stem cell delivery in ARDS. Here, we compared the efficacy and biodistribution of two different methods of MAPC administration for the treatment of ARDS in a sheep model. Methods MAPCs were labelled with [18F] fluoro-2'-deoxy-D-glucose and delivered by the endobronchial (EB) or intravenous route 1 hour after lipopolysaccharide infusion in mechanically ventilated sheep. PET/CT images were acquired to determine the biodistribution and retention of the cells at 1 and 5 hours after administration. Results The distribution and retention of the MAPCs depended on the method of cell administration. By the EB route, PET images showed that MAPCs remained at the site of administration and no changes were observed after 5 hours, whereas with the intravenous route the cells had broad biodistribution to different organs, the lung being the main organ of retention at 1 and 5 hours. MAPCs demonstrated an equal effect on arterial oxygenation recovery by either route of administration. Conclusion The EB and intravenous routes of administration of MAPCs are both effective for the treatment of ARDS in an acute sheep model, and the effect of MAPC therapy is not dependent on parenchymal integration or systemic biodistribution.
Introduction
Acute respiratory distress syndrome (ARDS) is a devastating lung condition. Currently, it is the leading cause of death and disability in critically ill patients, with a mortality rate that ranges between 26% and 50%. [1][2][3] ARDS is an inflammatory state, characterised by infiltration of mixed inflammatory cells, diffuse destruction of the alveolar-capillary barrier, severe oedema, consequent hypoxaemia and an increase in lung density. 4 5 To date, ARDS is only managed with supportive care such as lung-protective ventilation, prone positioning and conservative fluid management; no definitive treatment is available. 4 6-9 Cell therapy with multipotent adult progenitor cells (MAPCs) and mesenchymal stem cells (MSCs), two predominant adult stem cell types of the bone marrow stroma, is a promising therapeutic option for patients with ARDS. [10][11][12][13][14][15][16] These cells hold potent immunomodulatory and repair effects. Unlike most adult somatic stem cells, MAPCs appear to proliferate without senescence, have pluripotent differentiation ability in vitro and in vivo and do not express major histocompatibility complex (MHC) class II antigens. 17 18 Previous experimental studies published by our group and others have demonstrated that bone marrow-derived stem cells administered by either the intratracheal or the intravenous route are able to decrease lung inflammation, enhance lung repair, restore alveolar fluid clearance and improve arterial oxygenation. 12 15 19 20 Although the efficacy of cell therapy in ARDS has been demonstrated in animal models, including our sheep lipopolysaccharide (LPS)-induced lung injury model, it is unknown how different routes of administration affect homing and retention of those cells in ARDS. Positron emission tomography (PET) has been successfully used in several studies as a quantitative method for cell tracking and localisation in vivo. [21][22][23] [18F] fluoro-2'-deoxy-D-glucose ([18F] FDG), a glucose analogue, is a positron-emitting radiotracer that is metabolically trapped inside cells after phosphorylation by hexokinase, and it has been successfully used to label and track stem cells. [21][22][23][24]
Objective
Although the endobronchial (EB) and intravenous routes of administration have been recommended for cell therapy in ARDS, the differences in homing and retention of the cells after delivery are currently unknown. Therefore, we aimed to compare the biodistribution of MAPCs delivered by EB or intravenous administration and their therapeutic effect in a sheep model of LPS-induced lung injury.
Material and Methods
Animal model
All animals received humane care in compliance with the 'Principles of Laboratory Animal Care' formulated by the National Society for Medical Research and the 'Guide for the Care and Use of Laboratory Animals' prepared by the Institute of Laboratory Animal Resources and published by the National Institutes of Health (NIH) (NIH no. 86-23). Eleven adult Dorsett Cross sheep weighing 30-40 kg were used in the present study. Females and males were included in a 1:1 ratio. Each sheep was fasted overnight and pre-medicated with intramuscular atropine (0.05 mg/kg); after anaesthesia induction with intravenous ketamine, general anaesthesia was maintained with isoflurane (1.5%-2%) for 6 hours. Animals were placed in the prone position on the PET scanner table and mechanically ventilated with a PEEP of 5 cmH2O, FiO2 of 0.9, I:E ratio of 1:2, tidal volume of 8-10 mL/kg and a respiratory rate adjusted to maintain the arterial carbon dioxide tension (PaCO2) between 35 and 45 mm Hg. The right carotid artery and a peripheral vein were cannulated and haemodynamic parameters were continuously measured. To induce acute lung injury, the sheep received 5 µg/kg intravenous LPS from Escherichia coli 055:B5 (Sigma, St. Louis, Missouri, USA) in normal saline (Baxter, Deerfield, Illinois, USA) over 21 min at a rate of 1 mL/min. Blood samples and arterial blood gases (ABGs) were collected at baseline and at 1, 2 and 6 hours after LPS or saline infusion. A CT was taken at baseline, and PET/CT scans were acquired 1 and 5 hours after cell or free-tracer delivery (figure 1A). The degree of lung damage was evaluated by oxygenation and by the variation of lung density in CT scans measured in Hounsfield units (HU).
Cell lines
MAPCs were isolated from a human donor through bone marrow aspiration. Cell isolation was performed according to previously described methods. 25 Briefly, MAPCs were cultured in fibronectin-coated plastic tissue culture flasks. Cell cultures were maintained under low oxygen tension in a humidified atmosphere of 5% CO2. Cells were cultured in a medium containing low-glucose DMEM (Life Technologies, Grand Island, New York, USA) supplemented with fetal bovine serum (Atlas, Fort Collins, Colorado, USA), ITS liquid media supplement (Sigma), MCDB (Sigma), platelet-derived growth factor (R&D Systems, Minneapolis, Minnesota, USA), epidermal growth factor (R&D Systems), dexamethasone, penicillin/streptomycin (Life Technologies), 2-phospho-L-ascorbic acid and linoleic acid-albumin (Sigma). Cells were passaged every 3-4 days and harvested using trypsin/EDTA (Life Technologies). The cells were positive for CD49c and CD90 and negative for MHC class II and CD45 (all antibodies (Abs) were from BD Biosciences, San Jose, California, USA). Cells were cryopreserved in media with 5% dimethyl sulfoxide. Before administration, the MAPCs were counted with trypan blue exclusion, and the final concentration was adjusted according to the percentage of live cells.
[18F] FDG MAPC labelling
MAPCs were labelled following a protocol previously described in detail. 26 The initial mixing and incubation steps were conducted in a Class 100 laminar airflow hood. In brief, cells were resuspended in [18F] FDG (provided by Zevacor) in a total volume of <1.0 mL. The cells were gently mixed and then incubated in a warm water bath for 1 hour, with gentle agitation every 5 min. The cell labelling reaction was then centrifuged at 2000 RPM (750 × g) for 7 min to pellet the cells. The cells were rinsed to remove any residual [18F] FDG by removing the supernatant and resuspending the cell fraction in 2.0 mL of 0.9% saline for injection. The rinse procedure was repeated two additional times. Following the final rinse, the cells were resuspended in 0.9% saline for injection in the desired volume for administration. The decay-corrected cell labelling yield averaged 68%±19% (n=8). We also evaluated the stability of the labelled MAPCs. In a single experiment, we incubated the MAPCs in 0.9% saline for injection at 37°C±1°C for 1 hour following the labelling procedure. At the end of the hour, the rinse procedure described above was performed (in triplicate) and the amount of radioactivity in the supernatant was determined. The activity in the supernatant accounted for 35% of the total radioactivity in the labelled cell fraction.
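The reported decay-corrected yield rests on back-correcting measured activities for 18F decay; the following is a minimal sketch in which the F-18 half-life is a physical constant (~109.77 min), but the activities and timing are illustrative assumptions, not values from the experiment.

```python
# Decay correction of a measured activity back to time zero for F-18.
T_HALF_F18_MIN = 109.77

def decay_correct(activity, elapsed_min):
    """Back-correct a measured activity to its value at time zero."""
    return activity * 2.0 ** (elapsed_min / T_HALF_F18_MIN)

cell_activity = decay_correct(activity=45.0, elapsed_min=75.0)  # MBq, hypothetical
yield_fraction = cell_activity / 100.0                          # vs. 100 MBq added
print(f"decay-corrected labelling yield: {yield_fraction:.0%}") # ~72%
```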
PET/CT scan
A baseline CT including the whole thorax was acquired at breath-hold. In the case of LPS administration, a second breath-hold CT was acquired 1 hour post insult. Either [18F] FDG or FDG-labelled cells were administered by the intravenous or EB route. At 1 and 5 hours after radioisotope administration, a whole-body PET/CT scan was obtained using a Biograph mCT Flow PET/CT scanner (Siemens Medical Solutions USA, Malvern, Pennsylvania, USA). A 'step-and-shoot' PET acquisition was employed using multiple 3 min bed positions to cover the head through the pelvis. The corresponding CT scans were acquired at intermediate breath-hold to match the average position during respiration as far as possible, though some degree of mismatch at the location of the diaphragm was noted in the resulting images. Respiratory gating was not employed. PET data were reconstructed using the supplied ultraHD-PET iterative algorithm with all quantitative corrections applied, and the CT images were used for attenuation correction. To assess biodistribution quantitatively, numerical PET results were presented as standard uptake values (SUVs) normalised to injected dose and animal weight. Two regions of interest (ROIs) were defined for lung field analysis using the HU scale. The ROIs used for analysis were selected on the baseline CT scans to sample large representative regions of well-aerated (−900 to −501 HU) and moderately aerated (−500 to −101 HU) lung. Careful inspection in choosing suitable regions was required, as these were ruminant animals with considerable evidence of prior lung infections, and because atelectasis could also be present due to the use of ventilation. The ROIs were trivially transferred to the later time-point images, as the animal was continually ventilated and did not move between the scans. Evaluations were made by a board-certified radiologist blinded to experimental subgroup assignment.
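A minimal sketch of the SUV normalisation described above (tissue activity divided by injected dose per unit body weight), assuming a tissue density of about 1 g/mL so that kBq/mL approximates kBq/g; the example values are hypothetical.

```python
# SUV = tissue activity concentration / (injected dose / body weight).
def suv(tissue_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0
    return tissue_kbq_per_ml / (dose_kbq / weight_g)

print(suv(tissue_kbq_per_ml=5.0, injected_dose_mbq=100.0, body_weight_kg=35.0))  # 1.75
```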
Control groups
After receiving the same dose of LPS, two groups received free [18F] FDG: group 3 (intravenous free tracer, n=1) and group 4 (EB free tracer, n=1). Additionally, the biodistribution of EB and intravenous labelled cells in a non-injured animal (saline solution) was also assessed using the same number of cells for each route: group 5 (intravenous cells, n=1) and group 6 (EB cells, n=1). Finally, the intravenous free tracer was also evaluated in a non-injured animal (group 7 (IV free tracer), n=1). The number of cells used during the experiment was calculated in accordance with results from previous preliminary data. Cells delivered by the EB route were administered into the lung lobe with the highest degree of injury as assessed by CT scan.
Statistical analysis
The biodistribution data were analysed using SPSS V.16.0. Comparisons between groups were made using Student's t-test and two-way analysis of variance, as appropriate, using GraphPad Prism V.7 (GraphPad Software, San Diego, California, USA). Statistical significance was defined as p<0.05. We calculated a sample size of at least one subject per group in order to describe the changes after each experiment.
Biodistribution by intravenous route
The PET/CT images acquired 1 and 5 hours after intravenous injection of labelled MAPCs showed a systemic biodistribution of the cells to many organs with special homing and retention in the lung after 5 hours, in both groups, LPS/intravenous cells (n=3) and naive/intravenous cells (n=1) (figure 2A,B). The intensity of labelled cell uptake was heterogeneous in both lungs, with a predominant preference for inferior lobules. Additionally, a decrease of such intensity was seen after 5 hours.
The quantitative analysis of labelled cell uptake showed that at the first hour the majority of the cells were retained in the lungs (table 1). However, the amount of cells detected was reduced after 5 hours, decreasing by 42.3% in the naive/intravenous cells group (figure 3A) and by only 20% in the LPS/intravenous cells group (figure 3B). Analysis of other organs (liver, kidney and brain) revealed minor cell retention compared with the lungs in the LPS group, with no differences among them at the two time points. Interestingly, in naive sheep, a considerable amount of the cells was retained in the brain after 1 hour, with a very low percentage of clearance (16%) after 5 hours.
Due to the heterogeneous uptake of cells in the lung parenchyma, two ROIs were defined: normally aerated lung tissue (−900 to −501 HU) and poorly aerated lung tissue (−500 to −101 HU; figure 4). The retention of labelled cells was higher in poorly aerated regions, especially in the LPS group (p<0.005), and the cells remained highly concentrated in these regions after 5 hours.
To determine whether the biodistribution of the labelled cells traced in PET images could correspond to radiotracer efflux, we tracked free [18F] FDG in a naive sheep and a sheep with LPS-induced acute lung injury at 1 and 5 hours after intravenous injection of the tracer. Contrary to the labelled cells, the intravenous free tracer did not show a retention preference for any organ after 1 and 5 hours of administration in either group (figures 2C and 3D). The lung was the organ with the lowest uptake of the tracer in the control groups, both LPS and naive. The kidney and brain were the organs with the highest free tracer uptake over time in both groups.
Biodistribution by EB route
The EB-labelled cells or free tracer was delivered into the right lower lobe, with the exception of one sheep in which the cells were administered into the left lower lobe. The PET/CT images in the EB groups showed a small focal area of intense uptake in the lung parenchyma, corresponding to the site where the cells were delivered. The labelled cells did not spread within the lung parenchyma or to other organs at 1 and 5 hours after administration in the LPS (figure 5A) and naive (figure 5B) groups. On image inspection, no differences were found between the two groups. The distribution of the free tracer, with no cells, was also evaluated in an LPS sheep. In a similar pattern, the free tracer remained focally located in the area where it was administered, and no signal of the tracer was detected in any other organ (figure 5C). No EB free tracer was evaluated in a non-injured animal after the intraoperative loss of the subjects assigned to this experiment.
[Figure 7 caption] Effect of lipopolysaccharide (LPS) and multipotent adult progenitor cells (MAPCs) on the PaO2/FiO2 ratio. After the administration of MAPCs, the PaO2/FiO2 ratio recovered and remained in normal ranges until the study was completed. No differences were observed between both routes of administration. The LPS group that did not receive MAPCs (groups 3 and 4) significantly worsened the PaO2/FiO2 ratio after LPS infusion. EB, endobronchial.
CT images and quantitative analysis
Using CT imaging, the lung tissue density was evaluated. The LPS group presented higher density in the ROIs after 1 hour of endotoxin infusion compared with the naive groups (figure 6). The LPS group that received intravenous cells showed a significant decrease in the radiological attenuation at 1 hour post administration. However, there were no differences in density between groups after 5 hours.
ARDS and cell therapy
We have previously reported that a single dose of LPS (3.5 mg/kg) was able to cause progressive hypoxaemia within the first hours after infusion in sheep in right lateral recumbency. 12 In this study, we compared the effect of MAPCs administered by the EB or intravenous route on the arterial oxygenation (PaO2/FiO2) levels. After LPS infusion, a reduction in PaO2/FiO2 values was observed in both groups, reaching levels of hypoxaemia. MAPCs were administered 1 hour after LPS, and the PaO2/FiO2 levels recovered to normal values in both groups within the first hour of cell delivery, remaining constant throughout the study. There were no significant differences in arterial oxygenation between the two routes of administration (figure 7). However, the groups of sheep that did not receive MAPCs after LPS (free-tracer groups 3 and 4) had a significantly worsened PaO2/FiO2 ratio at the end of the experiments, consistent with an ARDS model.
Haemodynamic and metabolic variables
In the LPS groups, the heart rate remained stable throughout the study and there were no changes after either EB or intravenous MAPC administration. As expected in an endotoxin model of ARDS, 12 27 28 MAP decreased in both groups, with lower readings in the LPS/EB cells group after cell administration. Serum glucose, BUN, creatinine, alkaline phosphatase and gamma-glutamyl transferase were measured before and after cell administration, revealing no differences between the two groups. Alanine aminotransferase concentration decreased in both groups but remained within the normal range after cell administration (table 2).
Discussion
Our study compares the efficacy of two methods of administration of MAPC for the treatment of ARDS in a sheep model. We evaluated the early distribution and retention of MAPCs delivered by EB and intravenous route, using PET/CT to track the cells in vivo. Despite showing different cell biodistribution, the therapeutic benefit of MAPC was similar by either EB or intravenous route of administration.
This study evaluates for the first time the feasibility of using endobronchially delivered [18F] FDG-labelled cells as a way to assess MAPC biodistribution. By this route of administration, we observed trapping of the labelled cells inside the lung with no systemic distribution. In ARDS, the equilibrium between the lung interstitial tissue and the capillary vessels is disrupted in the early stages of the disease, 29 so the possible migration of the cells would hypothetically occur early. The permanence of the cells inside the lower airways after EB delivery observed in this study may be due to multiple factors: the integrity of the alveolar-capillary membrane, the absence of signalling factors from other organs to promote cell migration and the lack of mechanisms of cell diffusion from inside the alveoli to the systemic circulation. It is possible that the biodistribution of MAPCs could be more extensive in advanced stages. The intravenous route, as we expected, resulted in a systemic distribution of the cells, with predominant retention in the lung. This finding is consistent with the results presented in other studies using PET in models other than ARDS, where the stem cells were trapped in the lung after intravenous injection. 21 24 30 Although there was a slight clearance of MAPCs after 5 hours, the majority of cells remained trapped in the lungs, predominantly in injured areas of lungs that received LPS. These findings suggest that the injury could be a mechanism of cell retention, in agreement with previous studies that recommend the intravenous route for cellular therapy in ARDS. 16 31-34 Cell retention in the brain, liver and kidney was lower in the LPS group.
Independently of cell biodistribution, we observed similar effects on arterial oxygenation with both the EB and intravenous routes. This finding suggests that the beneficial effect of MAPCs is not dependent on parenchymal integration or systemic biodistribution of the cells. Instead, a strong paracrine capacity might be the principal mechanism contributing to immunomodulation and tissue repair. Both routes of administration have potential limitations. Intravenous administration requires large quantities of cells to guarantee the delivery of an effective therapeutic cell number to the target organ, because cells are trapped in the lung capillaries as well as in the spleen, liver and kidney. 35 The trapping effect is the major determinant of vascular obstruction and the complications arising from it. 35 36 Lung and cerebral microembolism are other important adverse events described with intravenous cell delivery, and they might be a matter of concern when high cell doses are considered because of the risk of clot formation. 37 However, different doses have been tested in the past for efficacy and toxicity by both routes of administration. No adverse effects have been reported with a dose of 10×10⁶ cells/kg, and a higher efficacy was observed than with a 5×10⁶ cells/kg dose. 38 In the present study, we did not observe any of the abovementioned adverse events; no animal died during or after intravenous cell delivery, and no haemodynamic changes such as tachycardia or severe hypotension were recorded.
On the other hand, EB administration of the cells requires flexible bronchoscopy, a procedure that is usually safe when performed by a trained specialist. 39 However, for critically ill patients with acute hypoxaemic respiratory failure, this procedure may lead to deterioration of their condition. Complications such as severe hypoxaemia, cardiac arrhythmia, hypercapnia, haemorrhage, pneumothorax, laryngospasm and bronchospasm have been reported in these types of patients. 40 Notably, as previously shown, we required a 10-fold lower number of stem cells for the EB route compared with the intravenous route, supporting the safety and efficacy of cells administered intrabronchially. 12 The use of non-invasive imaging modalities provides real-time information on the cells in vivo. Previous studies in small animals demonstrated that labelling MAPCs or MSCs with [18F] FDG does not affect their biological properties or their cell proliferative activity. 23 26 The PET/CT image fusion allows anatomical references to be obtained for regions with increased [18F] FDG uptake. 41 The SUV is the parameter most widely used for molecular image quantification. However, image inspection by the SUV parameter for the EB route is limited by the fact that there is no distribution of cells over time.
In summary, labelled MAPCs showed different biodistribution patterns and lung retention after EB and intravenous administration in our ARDS sheep model. Despite these findings and the lower dose used by the EB route, we observed similar therapeutic benefits of MAPCs delivered by both routes of administration. We also found that this method of cell labelling is safe and applicable in a large animal model. Therefore, this combined technique could also be useful for the conduct of human clinical trials. | 2019-02-07T00:27:23.459Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "6807e8ded17e6a1b862f415ce92f7570d9a6ecc5",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopenrespres.bmj.com/content/bmjresp/6/1/e000308.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1b9b15bd1d611ecbc6716f1f69b628d47624ef0a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119225538 | pes2o/s2orc | v3-fos-license | Ground-based measurements of solar diameter
Does the solar diameter change or not? Whatever the answer will be, the methods used for its measurement are increasingly challenging and face new astrophysical and optical problems, since the required spatial resolution is of astrometric quality. A quick overview of the different methods is presented here, together with the problem of the solar limb definition that emerges from the flash spectrum observed during eclipses.
Introduction
The measurements of the solar diameter have been made systematically since the 19th century. At the end of that century Auwers [1] published a value for the solar radius of 959.63 arcseconds. This value is now adopted as the standard value by the IAU, the International Astronomical Union. Fraunhofer invented the heliometer at the beginning of the 19th century. It is the prototype of the direct measurement of the angular diameter of the Sun. This instrument was so accurate that it allowed F. W. Bessel to measure in 1838 the first parallax of a star: 61 Cygni, 0.3 arcsec, selected for its significant proper motion of 5.2 arcsec/yr, already discovered in 1812 by G. Piazzi. The heliometer in Goettingen [2] (1895) was a conceptual advancement of the heliometer's design; its space version is the Solar Disk Sextant (SDS) and its educational one is the double-pinhole solar monitor [3,4]. The measurements of the solar diameter by meridian transit were monitored on a daily basis since 1851 at Greenwich Observatory, and at the Campidoglio (Capitol) Observatory in Rome from 1877 to 1937 [5]. Afterwards, solar astrolabes (the Danjon type and DORAYSOL [6], Definition et Observation du RAYon SOLaire) have obtained similar results with lower scatter. These methods have in common the use of a fixed telescope and the observation of the drift of the solar image through a meridian or a given almucantar. Eclipses and planetary transits exploit the timing of the orbital motion of the Earth, Moon and planets; their angular velocity is much slower than the daily motion of the Sun (geocentric view) and they can allow very accurate timing determinations. Black-drop and seeing effects can be overcome for planetary transits by fitting the chord drawn by the planet's disk over the solar limb with an analytical function [7]. With fast video recording, either eclipses or drift-scan transits can also achieve interesting timing resolution. Eclipse data on the solar diameter still show a random scatter of 0.5 arcseconds [8] around the standard value. Planetary transits and eclipses are classified as space measurements in the table of fig. 1, even if these observations are ground-based, because the influences of seeing are limited, e.g. to the dis/appearance of a bead, and this is an on/off signal on which seeing acts only through scintillation [9]. RHESSI [10] measurements of the solar oblateness (of general relativistic interest [11,12]) are better than previous ground-based measurements [13]. In recent publications of the SOHO satellite group [14,15], the data are interpreted as if the Sun has a rock-steady diameter.
The eclipses and the Baily's beads
[Figure caption] The values of the solar radius measured at Campidoglio and at Greenwich, compared (adapted from Gething, 1955). The difference between the two measurements is due to atmospheric effects, as well as to the personal equations of observers with different sensitivities to wavelength and timing. The measurements, all made with naked-eye observations, were expected to show small error bars thanks to statistical averaging. Each point is the annual average of the measurements, and the statistical convergence is surprisingly missing. The results of this method seem to be affected by systematic effects that differ randomly from one year to another: a contradictory situation which shows the troubles that have affected all drift-scan methods up to the DORAYSOL experiment (1999-2008).
The debate on the long-term variability of the solar diameter started in 1978 [16], when ancient eclipse data were used to demonstrate a variation of the solar diameter over the past 300 years. The duration of totality is a rapid function (ΔT ∼ √d) of the distance d between the observer and the limit of the umbra;
consequently, near the borders of the totality path, where the eclipse is nearly grazing (the North or the South pole of the Moon moves nearly tangent to the photosphere) [17], it is possible to know the distance of the observer from the actual limit of the umbra. Thanks to the rapid variation of the duration ΔT of the umbral phase, an observation of a few seconds (one or two) of totality and the unambiguous identification of the position of the observer are sufficient to know this distance precisely, to ±10 m, and a great relative precision can be achieved with respect to the whole extension of the umbra on the Earth's surface, ∼300 km. This is the same relative accuracy achieved on the solar diameter determination with central eclipses. If there are observers located at both shadow limits, North and South, the uncertainty of the ephemerides adopted in the data analysis can also be bypassed. The last uncertainties arise from the adopted lunar profiles, such as Watts [18] or Kaguya [19], and from the lunoid corrections, such as those of Morrison and Appleby [20] or Soma [21]. From the total eclipse observed by Halley in 1715, two observers located near the two borders were identified, and a hypothesis on the past value of the solar radius was made, namely 0.48 arcseconds larger than its standard value of 959.63 arcseconds. This extra radius should have been shrinking over the following 200 years, up to 1925, when a total eclipse was studied at Yale University under Prof. Brown's guidance. The southern limit was on Manhattan Island (New York City), while the northern limit was identified near Ladd Observatory (Providence, Rhode Island), where a flash spectrum² was observed with an objective prism [22]. In 1979, after 3 whole Saros cycles, another eclipse cast its shadow on the USA and the analysis confirmed that the radius was similar to that of 1925. David Dunham proposed to observe the Baily's beads, produced by the light from the photosphere shining through the lunar valleys. In grazing eclipses their number N can be high, providing N determinations of the photosphere's circle. But it is not their positions that are directly measured, since it would not be possible to do that to better than 1 arcsec: it is the timing of the appearance or disappearance of the Baily's beads. The actual radius of the Sun is the value that minimizes the scatter between the calculated (e.g. with the Occult 4 freeware program of D. Herald) and the observed phenomena, once the opposite shadow border is also considered.
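As a rough sketch of how the near-graze scaling ΔT ∼ √d can be inverted to locate an observer relative to the umbral limit: the proportionality constant k below is a hypothetical placeholder that in practice comes from the eclipse ephemeris and local geometry, so the numbers are purely illustrative.

```python
# Invert dT = k * sqrt(d) to get the distance d from the umbral limit, and
# propagate the timing uncertainty to a distance uncertainty (first order).
def distance_from_duration(dt_s, k=0.5):
    """Distance d (m) from the umbral limit for a totality lasting dt_s seconds,
    assuming dT = k * sqrt(d) with k in s/sqrt(m) (hypothetical value)."""
    return (dt_s / k) ** 2

def distance_error(dt_s, sigma_t_s, k=0.5):
    d = distance_from_duration(dt_s, k)
    return 2.0 * d * sigma_t_s / dt_s

print(distance_from_duration(1.5), distance_error(1.5, 0.1))  # 9.0 m, +/-1.2 m
```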
To reduce the effects of the uncertainties in the lunar profile and the lunoid corrections, the eclipse data were preferably analyzed after one Saros (18 years and 11 days), which is a multiple of the libration cycle, in order to observe the same Baily's beads produced by the same lunar valleys. Another solution proposed by D. Dunham was to select only the polar beads, since the difference in latitude libration between two eclipses is rather small and the same valleys produce the same beads at each eclipse [23]. The presence of the Kiselevka valley, discovered during the total eclipse of 2008 in a place where the Watts atlas of lunar profiles did not show any valley, was published in 2009 [24]. This is an example of a structure never identified in a previous eclipse, nor in the Watts profiles; therefore the method of polar beads still has some uncertainties. The Kaguya profile (published in Nov. 2009), named after the recent Japanese lunar mission, is expected to be more precise than the Watts profile, even if the angular sampling is limited to one point every ∼16 arcsec; its accuracy in height is δh = ±1 m. A new era in the eclipse methods has started after Kaguya, but the solar limb definition in eclipse videos is a new open problem, as we can see in the following paragraph. Carles Schnabel [25] and other observers have, since 2005, claimed the visibility of the chromosphere during annular eclipses, and after other observations of a thin region above the photosphere with telescopes ranging from 4.5" to 8" with neutral density filters (transmittance ∼10⁻⁴), the definition of a bead disappearance or appearance seems to be in need of revision. Another example is the two observations of the 2008 total eclipse made by Richard Nugent and Chuck Herold: the latter was farther inside the umbral limit but observed with a 5" telescope, while the former used a 3" and did not see the light of the last thin layer above the photosphere.
[Footnote 2] The flash spectrum is an array of emission lines detectable from the limb of the Sun during the flash periods of a few seconds just after the beginning of totality during a solar eclipse or just before the instant of its termination. When the solar photosphere is occulted by the Moon, the layers of the Sun's atmosphere flash into prominence, and the spectrum briefly shows the bright lines at all wavelengths produced by tenuous hot luminous gas. Except during eclipses, this part of the spectrum is masked by the glare of the Sun's disk. Study of the flash spectrum gives information about the physical state of the solar chromosphere. The flash spectrum was first observed by the American astronomer Charles Augustus Young during the eclipse of Dec. 22, 1870.
Flash spectrum and limb definition
During an eclipse, the flash spectrum is the spectrum captured at the instants of the beginning and end of totality. While directing the Lick Observatory, W. Campbell led several eclipse observing campaigns: the Crocker eclipses, named after the person who financed them. One of his experiments consisted in photographing, over a moving plate, the spectrum of the Sun through a slit perpendicular to the direction of motion of the Moon over the Sun. This experiment produced magnificent spectra, called flash spectra. The exposure of such images started 10 s before and ended 10 s after the start of totality. An image of this flash spectrum is published in the Lick Observatory studies of 1931 [26] and is here reprinted; see also [28,27].
Solar limb definition for drift-scan transits
The Sun is a self-gravitating gaseous structure and its limit is not sharply defined; nevertheless, the variation of the density with height is exponential, and at the wavelengths of visible light the surface of unitary optical depth τ=1 can be considered sharp with respect to the dark sky of the background. The solar limb darkening function (LDF), moreover, describes a decrease of the luminosity down to 16% of the value attained at the center of the disk. The combination of the LDF with the Point Spread Function (PSF) of the telescope pours photons out of the geometrical limb. The most suitable definition of the solar limb has been chosen as the maximum of the derivative of the luminosity along a radius. This maximum can be detected by differentiating the Fourier anti-transform of the observational data; this method has been considered stable with respect to seeing effects [29]. Nowadays the influence of seeing on limb detection is being considered below the arcsecond level [30,31]. How can the Flash Spectrum Region and the atmospheric halos affect this definition? It seems that the FSR and halo effects are negligible when the photosphere is visible, while during the very last phases of total eclipses the FSR becomes important.
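A minimal sketch of the limb definition just described: low-pass filter the radial intensity profile in Fourier space, take the anti-transform, and locate the extremum of its derivative. The cut-off choice and the synthetic profile are illustrative assumptions.

```python
# Limb position as the steepest point of a Fourier-smoothed radial profile.
import numpy as np

def limb_position(radius, intensity, keep_modes=20):
    spec = np.fft.rfft(intensity)
    spec[keep_modes:] = 0.0                        # suppress seeing-induced noise
    smooth = np.fft.irfft(spec, n=len(intensity))  # Fourier anti-transform
    grad = np.gradient(smooth, radius)
    return radius[np.argmax(np.abs(grad))]         # steepest point = limb

r = np.linspace(0.0, 1.2, 512)                     # radius in solar radii
profile = 1.0 / (1.0 + np.exp((r - 1.0) / 0.005))  # toy limb-darkening edge
print(limb_position(r, profile))                   # ~1.0
```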
Flash Spectrum Region and Baily's beads
Figure 7 represents a scheme of the last solar features visible during a total eclipse. The Baily's bead is a small sector of photosphere, already darkened to 16% of its central luminosity [32], and it is surrounded by two much fainter layers: the Flash Spectrum Region (FSR) and the chromosphere. The width of this FSR is within an arcsecond, while the chromosphere extends up to three arcseconds. The intensity of the FSR integrated over its area can be larger than the residual luminosity of the bead, and this happens frequently with big telescopes observing grazing eclipses (the examples of C. Schnabel and C. Herold). This phenomenon could have been decisive in the hybrid (annular-total) eclipse observed by Clavius in 1567 [33], explaining the observed annularity in contrast with the calculated totality by more than 4 arcseconds. Another problem for the dis/appearing bead can be the scintillation [9].
Conclusions
The next total eclipses will provide us with important data on the FSR, and the dataset of Baily's beads [24] will be revisited, in order to better understand and treat ancient eclipses and to recover the past behaviour of the solar diameter. The PICARD satellite mission, expected to start in June 2010 [34], should provide a milliarcsecond precision on the solar diameter measurements.
[Fig. 7 caption] The intensity of the chromosphere, coloured by particular emission lines, is 10⁻⁴ times the intensity of the photosphere (central value). The Flash Spectrum Region (FSR) is 10⁻³ times the intensity of the photosphere. There is a confusion limit at which the intensity of the bead (photosphere) equals the light from all of the visible FSR. It is necessary to take this fact into account when the dis/appearance timing of a bead is studied with a few arcseconds of spatial resolution imaging. Appropriate models and scale heights have to be set up. | 2011-06-13T19:52:59.000Z | 2011-06-13T00:00:00.000 | {
"year": 2011,
"sha1": "b338dbbeb0b1032324e19f41618b17bada770038",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b338dbbeb0b1032324e19f41618b17bada770038",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
244266900 | pes2o/s2orc | v3-fos-license | A Low-Latency, Low-Power FPGA Implementation of ECG Signal Characterization Using Hermite Polynomials
Automatic ECG signal characterization is of critical importance in patient monitoring and diagnosis. This process is computationally intensive, and low-power, online (real-time) solutions to this problem are of great interest. In this paper, we present a novel, dedicated hardware implementation of the ECG signal processing chain based on Hermite functions, aiming for real-time processing. Starting from 12-bit ADC samples of the ECG signal, the hardware implements filtering, peak and QRS detection, and least-squares Hermite polynomial fit on heartbeats. This hardware module can be used to compress ECG data or to perform beat classification. The hardware implementation has been validated on a Field Programmable Gate Array (FPGA). The implementation is generated using an algorithm-to-hardware compiler tool-chain and the resulting hardware is characterized using a low-cost off-the-shelf FPGA card. The single-beat best-fit computation latency when using six Hermite basis polynomials is under 1 s with a throughput of 3 beats/s and with an average power dissipation around 28 mW, demonstrating true real-time applicability.
Introduction
Cardiovascular disease is the number one cause of death worldwide [1]. An electrocardiogram (ECG) registers the electrical activity of a heart, and it stands as a valuable diagnostic tool. However, in clinical routines, ECG analysis is performed as a visual inspection by a cardiologist, which is a tedious task, further aggravated in the case of long-term ECG. For instance, 24 h of Holter recordings contains around 100,000 heartbeats. Figure 1 depicts the main components of the ECG, with the most important for diagnosis being the waves P, Q, R, S and T. The Q, R and S waves are normally studied together as the QRS complex. The P wave represents the moment when the auricles contract to send blood to the ventricles, and at the end of the PR segment, the ventricle is full. During the QRS complex, the ventricle expels their contents and are fully emptied at the end of the ST segment. The T wave indicates that the heart is at rest.
Developing efficient techniques to automate ECG analysis is instrumental in helping a cardiologist with their diagnosis. The detection of arrhythmias is of special interest [2]. The QRS complexes of heartbeats can be successfully used to identify most arrhythmia types [3][4][5]. The T wave does not contribute to the identification process [6] and the P wave, even though it provides relevant information about arrhythmias, possesses a low signal-to-noise ratio (SNR), so it is not reliable [7,8].
ECG analysis starts with the detection and characterization of the beats [9]. The detection of the QRS complex is carried out with a high accuracy; a 99.7% detection accuracy was reported in [10]. As for the characterization of the beat, among the different methods [6,11,12], the use of a function space based on Hermite polynomials has many advantages [3,10,13]: dimensionality reduction, low noise sensitivity, etc. The ECG samples are fitted with a linear combination of basis functions, and the coefficients of this linear combination are used as features for representing heartbeats. As an example of the resulting dimension reduction, the 144-sample QRS complex obtained at a rate of 360 sps can be reasonably characterized with 6 or 7 parameters [14]. Regarding the average classification error, values as small as 0.34% are reported in [15], thus supporting the development of new classifiers based on Hermite functions as well as hardware implementations able to provide high-quality real-time heartbeat analysis.
One disadvantage of the Hermite representation is that it is computationally demanding. There are some approaches addressing this problem. In [16], graphics processing units (GPU) are used to accelerate the offline processing of Hermite fitting of heartbeats. The use of Field-Programmable Gate Array (FPGA) devices is supported in [17]; in this paper, the results of an FPGA-based implementation aiming at wearable systems are presented. Reconfigurable devices (i.e., FPGA) allow for developing a custom architecture that can be adjusted to the different levels of computation performance and energy efficiency. Moreover, they can be used to prototype a system before being implemented as an application-specific integrated circuit (i.e., ASIC), which can achieve even better computation and electrical consumption performance. The developing times required for both FPGA and ASIC is quite long and complex in comparison with the traditional software approach (i.e., microprocessor-based or GPU-based), and high-level synthesis (HLS) tools have thrived in the last few years [18,19]. In this work, the HLS tool AHIR [20][21][22] has been used. AHIR is an open-source alternative to proprietary products that allows us to generate RTL descriptions from C language with reduced development times.
The central contribution of this paper is the design and implementation of a novel hardware module able to characterize heartbeats in real time by means of Hermite functions. This module can be used as the input to systems to compress the ECG data as well as to classifiers. Despite the interest in producing hardware systems for real-time processing of ECG signals [23][24][25][26], to the best of our knowledge, this is the first time that Hermite function fitting with a complete preprocessing chain is implemented in hardware for ECG processing. The main contributions of this paper are as follows: • Novel hardware implementation of full processing chain for real-time ECG characterization based on Hermite functions; • Introduction to the AHIR HLS tool; • Implementation of the system in a low-cost FPGA-based board; and • On-board power consumption measurements.
The paper is organized as follows: Section 2 elaborates on the Hermite fitting of heartbeats; in Section 3, the AHIR tool is presented; Section 4 describes the system implemented on an FPGA device; the implementation results are in Section 5, and they are analysed in Section 6; and, finally, the conclusions are drawn in Section 7.
Estimation of the QRS Complex with Hermite Polynomials
As mentioned in Section 1, QRS complexes are employed for arrhythmia detection, and the use of Hermite functions allows us to reduce the number of dimensions involved in ECG classification without sacrificing accuracy [3,10], while also enabling the transmission of compressed ECG data [27]. Moreover, Hermite fitting representations are robust in the presence of noise.
The MIT-BIH arrhythmia database [28] is used as a benchmark in this work. It contains 48 2-channel ECG recordings, sampled at a frequency of 360 Hz and with a duration of approximately 2000 beats (half an hour). Each beat has been manually annotated by at least two cardiologists, so it can be used to check the outcome of ECG automatic classification. The database includes an extended set of arrhythmias, and it has been extensively used in automatic arrhythmia classification [4,10,29].
Prior to QRS characterization, the ECG signal must be processed to remove the baseline drift and high-frequency noise [30]. The QRS complexes have a length of 70-100 ms; therefore, extracting a window of 200 ms around the peak (i.e., R wave) of the beat ensures that we acquire the complete complex while leaving the T and P waves out. The QRS window is expanded up to 400 ms by means of zero padding the extremes of the 200-ms window, since the Hermite functions converge to zero at ±∞. Thus, the QRS beat data used as input to the Hermite polynomial approximation consists of a 144-sample vector x = {x(t)} that can be estimated with a linear combination of N Hermite basis functions φ_n by means of coefficients c_n (Equation (1)). In this work, we use N = 6, which provides a good compromise between having a compact representation and having a good accuracy in the representation of the beat [14].
The aim of the Hermite fitting is to find the approximation to the QRS complex {x(t)} with the best minimum-mean-square-error (MMSE). The approximation of x(t) is expressed as

$$\hat{x}(t) = \sum_{n=0}^{N-1} c_n(\sigma)\,\varphi_n(t,\sigma), \qquad (1)$$

with

$$\varphi_n(t,\sigma) = \frac{1}{\sqrt{\sigma\,2^n\,n!\,\sqrt{\pi}}}\; e^{-t^2/2\sigma^2}\, H_n(t/\sigma),$$

where H_n(t/σ) is the n-th Hermite polynomial. The Hermite polynomials can be computed recursively as

$$H_0(x) = 1,\quad H_1(x) = 2x,\quad H_n(x) = 2x\,H_{n-1}(x) - 2(n-1)\,H_{n-2}(x).$$

The parameter σ is a time-scaling factor in the polynomials that adjusts the width of the Hermite functions to that of the actual QRS complexes. The maximum value of σ for a given order n is studied in [3].
Given σ, the orthonormality of the Hermite basis functions allows us to find the optimal coefficients—those that minimize the square error—as

$$c_n(\sigma) = \sum_t x(t)\,\varphi_n(t,\sigma).$$

In order to find the best fit, the MMSE approximation for each σ is obtained, and the one with the smallest error is kept. As a result, each heartbeat is represented by a set composed of the best σ and the corresponding fit coefficients c = {c_n(σ)} (n ∈ [0, N − 1]), and it is possible to use only these parameters to perform the morphological classification of the heartbeats [3,29]. Figure 2 depicts the effect of increasing the number of Hermite functions in the beat estimation. Figure 2a shows the original beat (in black) and the estimations with N ∈ {6, 12, 24}. It can be seen that, as N increases, the estimation captures the variations of the heartbeat in more detail. Figure 2b shows the decreasing trend of the minimum square error (MSE) for each estimation.
From Algorithm-to-Hardware Using AHIRV2, a C-2-VHDL Compiler
The AHIRV2 compiler tool-chain [20][21][22] provides a pathway from a C-program to actual synthesizable hardware. The tool-chain takes a description of an algorithm (described in C) and produces a VHDL logic circuit description that is equivalent to the algorithm.
The AHIRV2 compiler starts with a C program and produces VHDL. The clang 2.8 compiler (www.clang.org; accessed on 1 September 2021) acts as the C front-end and is used to emit LLVM byte-code (www.llvm.org), which is then converted to VHDL using the following transformations:
1. The LLVM byte-code is translated into an internal intermediate format, which is itself a static-single-assignment-centric control-flow language (named Aa) that allows for the description of parallelism using fork-join structures as well as arbitrary branching;
2. The Aa description is translated to a virtual circuit (the model is described in the next subsection). During this translation, the following major optimizations are performed: declared storage objects are partitioned into disjoint memory spaces using pointer reference analysis, and dependency analysis is used to generate appropriate sequencing of operations in order to maximize the parallelism. Inner loops in the Aa code are pipelined so that multiple iterations of a loop can be executed concurrently;
3. The virtual circuit is then translated to VHDL. At this point, decisions about operator sharing are taken. Concurrency analysis is used to determine if a shared hardware unit needs arbitration. Optimizations related to clock-frequency maximization are also carried out here. The generated VHDL uses a pre-designed library of useful operators ranging from multiplexors and arbiters to pipelined floating-point arithmetic units (arbitrary precision arithmetic is supported, and in particular, there is full support for IEEE-754 single-precision and double-precision add/multiply with all rounding modes).
An Illustration of the Virtual Circuit Generated by the AHIRV2 Compiler
The virtual circuit generated by the AHIRV2 compiler consists of three cooperating components: the control path, the data path and the storage system [21,22].
To illustrate the model, we consider a simple example.
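The original listing is not reproduced in this text; the following minimal C fragment (our own illustration, not the authors' listing) is chosen so that it exercises the elements visible in Figure 3: loads from two declared arrays a[] and b[] that end up in disjoint memory spaces, a decision operation, and loop-carried values that map to multiplexors.

/* Hypothetical example program. The two declared arrays are placed in
   disjoint memory spaces by pointer reference analysis, so the two
   loads can proceed in parallel; the if-condition maps to the decision
   operation, and the loop-carried variables (i, acc) map to
   multiplexors in the virtual circuit. */
#define N 16
int a[N];
int b[N];

int sum_if_positive(void)
{
    int acc = 0;
    for (int i = 0; i < N; i++) {
        int x = a[i];      /* load from the memory space of a[] */
        int y = b[i];      /* load from the memory space of b[] */
        if (x + y > 0)     /* decision operation */
            acc += x + y;
    }
    return acc;
}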
The AHIRV2 tool-chain transforms this program to produce a virtual circuit, which is depicted in Figure 3. The virtual circuit is then translated to synthesizable VHDL. It consists of three independent components, namely the data path, the storage subsystem and the control path.
The Data Path
The data path is a directed hyper-graph with nodes being operations and arcs being nets (shown as ovals). Each net has at most one operation that drives it. Furthermore, most operations have a split-protocol handshake with the control path: two pairs of request/acknowledge associations (*sr/*sa for sampling the inputs and *cr/*ca for updating the outputs). The operation samples its inputs upon receiving the sr request symbol and acknowledges the completion of this action by emitting the sa acknowledge symbol. After receiving the cr symbol, the operation updates its output net using the newly computed value. The sequencing is as follows:
sr -> sa -> cr -> ca
Note that an operation can be re-triggered while an earlier instance of the operation is still in progress (this is important if the operation is implemented using a pipelined operator).
Some data-path operations (such as the multiplexor shown at the top and the decision operation shown at the bottom left in Figure 3) follow a simpler protocol. The multiplexor has a pair of requests and a single acknowledge, with the condition that at most one of the requests can be received at any time instant. The input corresponding to the request is then sampled and stored in the output net of the multiplexor. The decision operation has a single request and two acknowledges. Upon receipt of the request symbol, the decision operation checks its input net and emits one of the two acknowledges depending on whether the input is zero or nonzero. Figure 3 instantiates several data-path operations of these kinds. Note that the data path only shows the operations and their interconnection. When the data path is implemented as hardware, multiple operations may be mapped to a single operator depending on cost/performance trade-offs. In this case, multiplexing logic is introduced in the hardware. These decisions and manipulations are performed in the compiler stage, which is responsible for transforming the virtual circuit to VHDL.
Storage Subsystem
The load and store operations in the data path are associated with memory subsystems. In general, there can be multiple disjoint memory subsystems inferred by the compiler. In this particular case, the arrays a[] and b[] are mapped to disjoint memories, due to which the two loads are allowed to proceed in parallel (the relaxed consistency model is enforced). In order to maintain the relaxed consistency model, the memory subsystems are designed to use a time-stamping scheme, which guarantees first-come-first-served access to the same memory location.
Control Path
The control path in the virtual circuit encodes all of the sequencing that is necessary for correct operation of the assembly. The control path (shown on the left in Figure 3) is modelled as a Petri-net with a unique entry point and a unique exit point. The Petri-net is constructed using a set of production rules, which guarantee liveness and safeness [21]. Transitions in the Petri-net are associated with output symbols to the data path (these can be described by the regular expressions *sr and *cr) and input symbols from the data path (these are of the form *sa and *ca). The *sr symbols instruct an element in the data path to sample its inputs, and the *cr symbols instruct an element in the data path to update its outputs (all outputs of data-path elements are registered). The *sa and *ca symbols are acknowledgements from the data path, which indicate that the corresponding requests have been served.
The following classes of dependencies are encoded in the control Petri-net (a toy fragment illustrating the first rule follows this list):
• Write-after-read ordering: if operation A reads a net c that operation B subsequently updates (a WAR dependency through c), then the cr request to B can be issued only after the sa acknowledge from A has been received;
• Load-Store ordering: if P, Q are load/store operations to the same memory subsystem, at least one of P, Q is a store, and P is supposed to happen before Q, then the sr request to Q must be emitted only after the sa acknowledge from P has been received. The memory subsystem itself guarantees that requests finish in the same order in which they were initiated. This takes care of WAR, RAW and WAW memory dependencies.
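As a hypothetical illustration of the write-after-read rule (our own fragment, not from the paper):

/* WAR dependency through c: operation A (the addition) must sample c
   before operation B (the assignment) updates it, so B's cr request
   waits for A's sa acknowledge in the control Petri-net. */
int war_example(int c)
{
    int a = c + 1;   /* A: reads c */
    c = 5;           /* B: writes c */
    return a + c;
}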
The control path in Figure 3 shows the sequencing generated by these rules. When pipelining an inner loop, the execution of an operation in a particular iteration is enabled as soon as its dependencies on results from previous iterations are satisfied.
Implementation of the System
The analysis of an ECG signal received from a sensor goes through the following steps:
1. Initial signal filtering to remove noise and drift;
2. ECG beat recognition and identification of the QRS complex;
3. ECG beat feature extraction: this can be performed in various ways; here we look at the use of Hermite polynomials;
4. ECG classification: based on the beat features, classify the beat as normal or anomalous. This last step is not part of the current work.
We have implemented a signal chain that integrates the first three steps in the list above. Our main contribution is that we have built a custom hardware implementation of the entire signal flow up to the Hermite fit, and demonstrated that sophisticated low-power, real-time ECG analysis is possible in hardware and that high-level algorithm-to-hardware design techniques offer a practical pathway to such realizations.
The incoming ECG signal is assumed to be generated by an 11-bit ADC with a sampling rate of 360 Hz. For all experiments described in this report, we used 11-bit sampled data from the MIT arrhythmia reference database [28]. The initial signal processing such as the band-pass filter characteristics and the algorithm for QRS detection have been well studied in the literature [30]. The use of Hermite polynomials to extract features from the ECG signal has also been studied extensively [3,10,29].
The entire signal chain is illustrated in Figure 4. In our implementation, the signal chain is divided into two stages. The first stage (the front-end) is responsible for the signal filtering and the QRS peak detection. The second stage takes the identified beats and calculates the best Hermite-polynomial fit for each identified beat. We illustrate this division in Figure 5. All the elements of the signal chain are explained in Sections 4.1 and 4.2. Section 4.3 elaborates on the final system architecture, including the signal chain as well as the control block and communication interfaces.
Algorithmic Description of the First Stage
The first stage is responsible for the filtering and QRS peak detection, and the sequence followed is shown in Listing 1.
The Band-Pass Filter
The bandpass filter used is a 99-tap FIR filter with 16-bit taps. The pass-band is set between 6 Hz and 28 Hz. The stop-band attenuation is chosen to be −40 dB. We acknowledge the use of an online filter design tool (http://t-filter.engineerjs.com) [31].
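As a concrete illustration only (the production filter runs in the generated hardware, and its coefficients are not reproduced here), a per-sample C routine for such a filter could look as follows; the Q15 fixed-point scaling is our assumption.

#include <stdint.h>

#define NTAPS 99

/* Illustrative 99-tap FIR step. The 16-bit coefficients would be the
   ones produced by the online design tool (pass-band 6-28 Hz at
   360 sps, stop-band attenuation -40 dB); Q15 scaling is assumed. */
static int16_t taps[NTAPS];   /* filter coefficients */
static int16_t delay[NTAPS];  /* delay line of past input samples */

int16_t fir_step(int16_t sample)
{
    int64_t acc = 0;
    /* shift the delay line and insert the newest sample */
    for (int i = NTAPS - 1; i > 0; i--)
        delay[i] = delay[i - 1];
    delay[0] = sample;
    /* multiply-accumulate over all taps (64-bit to avoid overflow) */
    for (int i = 0; i < NTAPS; i++)
        acc += (int64_t)taps[i] * delay[i];
    return (int16_t)(acc >> 15); /* rescale from Q15 */
}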
The QRS Detection Algorithm
The QRS detection algorithm is implemented in three stages (a C sketch of the first two stages is given below):
1. The band-pass filter outputs are sent through a derivative filter. This acts as a high-pass filter that identifies the regions of rapid change (including the QRS complex);
2. The output of the derivative filter is rectified and integrated using a moving average filter with 32 taps. The strong peaks of the sequence generated by this moving average filter are expected to be in correspondence with the peaks of the QRS complex;
3. The output of the moving average filter is analysed by a threshold-crossing state machine that attempts to identify the center peaks of the QRS complex.
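The following C sketch shows how stages 1 and 2 could be realized per sample (a first difference as the derivative filter and a running-sum moving average); the exact filters used in the hardware may differ from this assumption.

#include <stdint.h>

#define MA_TAPS 32

/* Stage 1: first difference, acting as a crude high-pass (derivative). */
static int32_t derivative_step(int16_t x)
{
    static int16_t prev = 0;
    int32_t d = (int32_t)x - prev;
    prev = x;
    return d;
}

/* Stage 2: 32-tap moving average, maintained as a running sum over a
   circular buffer. */
static int32_t mov_avg_step(int32_t x)
{
    static int32_t win[MA_TAPS];
    static int32_t sum = 0;
    static int idx = 0;
    sum += x - win[idx];
    win[idx] = x;
    idx = (idx + 1) % MA_TAPS;
    return sum / MA_TAPS;
}

int32_t qrs_energy_step(int16_t bp_sample)
{
    int32_t d = derivative_step(bp_sample);
    int32_t r = (d < 0) ? -d : d;   /* rectification */
    return mov_avg_step(r);
}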
The threshold-crossing state machine is illustrated in Figure 6. For the sake of brevity, we do not present the entire C code of the finite state machine; a summary of the C code is shown in Listing 3. The algorithm gives the position of the QRS peak, and the heartbeat passed on for further analysis consists of 144 samples centered at this peak.
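Since Listing 3 is only summarized, the following minimal sketch shows one plausible shape of such a state machine (the real implementation likely adds adaptive thresholds and refractory handling):

/* Minimal threshold-crossing FSM (illustrative, not the authors'
   Listing 3). States: BELOW the threshold; ABOVE while tracking the
   running maximum. Reports the sample index of the detected peak. */
typedef enum { BELOW, ABOVE } state_t;

int fsm_step(int32_t v, int32_t threshold, int sample_idx, int *peak_idx)
{
    static state_t st = BELOW;
    static int32_t max_v;
    static int max_idx;

    switch (st) {
    case BELOW:
        if (v > threshold) { st = ABOVE; max_v = v; max_idx = sample_idx; }
        break;
    case ABOVE:
        if (v > max_v) { max_v = v; max_idx = sample_idx; }
        else if (v < threshold) { st = BELOW; *peak_idx = max_idx; return 1; }
        break;
    }
    return 0; /* no peak reported at this sample */
}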
The Second Stage: Calculation of Hermite Polynomial Fits
The first stage in the signal chain provides a QRS peak and a detected heartbeat (post band-pass filtering). Suppose x = {x(k)}, k = 0, …, 143, is the detected beat. The Hermite polynomial basis set consists of the first six Hermite polynomials and a scale factor σ. The value of σ ranges between a minimum value of 1/120 and a maximum of 1/90 and is discretized into 10 values. Denote the Hermite polynomial with order N and scale factor σ as h_N(σ) = {h_N(σ, k)}, k = 0, …, 143. We calculate the dot products

$$c_{\sigma,N} = \sum_{k=0}^{143} x(k)\, h_N(\sigma, k)$$

as N varies from 1 to 6 and σ_u varies as described above. The dot products are computed using single-precision IEEE floating-point arithmetic. The Hermite polynomial values are precomputed and stored in the hardware as tables.
The best fit is determined by the value of the scale factor σ_u that minimizes the mean square error

$$\mathrm{MSE}(\sigma_u) = \sum_{k=0}^{143} \Big( x(k) - \sum_{j} c_{\sigma_u,j}\, h_j(\sigma_u, k) \Big)^2 .$$

This value of σ and the corresponding coefficients c_{σ,j} are the features of the beat extracted by the Hermite fit. These values are used for further characterization of the beat as normal or anomalous [10,29].
The algorithm used for the second stage is shown in Listing 4; it ends by reporting the best fit through a call to sendBestFitToOutput().
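Only the closing fragment of Listing 4 survives in this text, so the sketch below reconstructs the structure of the second stage from the stated parameters (6 orders, 10 discretized σ values, precomputed tables). All names except sendBestFitToOutput, and its signature, are our assumptions.

#define NORD  6     /* number of Hermite orders */
#define NSIG  10    /* number of discretized sigma values */
#define NSAMP 144   /* samples per beat */

/* Precomputed Hermite function samples, downloaded over the UART. */
extern float h[NSIG][NORD][NSAMP];
/* Signature assumed for illustration. */
extern void sendBestFitToOutput(int best_sigma_index,
                                const float coeffs[NORD]);

void hermite_fit(const float beat[NSAMP])
{
    float best_mse = 1e30f;
    int   best_s   = 0;
    float best_c[NORD];

    for (int s = 0; s < NSIG; s++) {
        float c[NORD];
        /* dot products give the fit coefficients (orthonormal basis) */
        for (int n = 0; n < NORD; n++) {
            float acc = 0.0f;
            for (int k = 0; k < NSAMP; k++)
                acc += beat[k] * h[s][n][k];
            c[n] = acc;
        }
        /* square error of the reconstruction for this sigma */
        float mse = 0.0f;
        for (int k = 0; k < NSAMP; k++) {
            float est = 0.0f;
            for (int n = 0; n < NORD; n++)
                est += c[n] * h[s][n][k];
            float e = beat[k] - est;
            mse += e * e;
        }
        if (mse < best_mse) {
            best_mse = mse;
            best_s = s;
            for (int n = 0; n < NORD; n++) best_c[n] = c[n];
        }
    }
    /* report the best fit */
    sendBestFitToOutput(best_s, best_c);
}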
System Architecture
The system architecture follows the two stage approach described at the beginning of the section. The architecture is depicted in Figure 7.
A UART is used to configure the system by downloading the pre-calculated Hermite polynomials, the filter coefficients, and some configuration parameters. In this case, there are sixty distinct Hermite polynomials, each with 144 samples, with each sample being coded in single precision IEEE floating point format (4 bytes per sample).
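For reference, the amount of table data downloaded at configuration time follows directly from these figures:

$$60 \times 144 \times 4~\text{bytes} = 34{,}560~\text{bytes} \approx 33.75~\text{KiB}.$$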
After the initial configuration, ECG samples are streamed to the hardware, and fit coefficients are extracted for every detected beat. The peak throughput and total latency in the signal chain are characterized.
Results
The Xilinx Artix 7 series FPGA xc7a35tcpg236 (Xilinx, San Jose, CA, USA) was used as the platform for the hardware implementation. In particular, we used the BASYS-3 FPGA board from Digilent (Pullman, WA, USA) [32]. For synthesis, we used the Xilinx Vivado 2019.4 tools. The block diagram of the test setup is shown in Figure 8. In this setup, the host computer first uses the UART to download the Hermite polynomial tables and the filter coefficients to the system. After this is performed, ADC samples are streamed to the FPGA over the UART at a baud rate of 115,200. The resulting Hermite fits and QRS peak locations are monitored by an application on the host computer. It must be stressed that AHIR allows for simulation of the system using benchmarks written in C. During the simulation, it is possible to select whether the simulation uses the compiled C files or the hardware functions simulated by means of an HDL simulator (i.e., GHDL). In both cases, the input vectors are read from files and the output vectors are also stored in files, so it is possible to check the correctness of the hardware implementation. For the overall system, the summary of resource utilization is shown in Table 1. For this particular FPGA device, the limiting factor is the look-up tables (LUTs). Thus, devices with more logic resources are required if the order of the polynomial is to be increased. To measure the latency of the entire signal chain, we timed the difference between the entry of the first byte of an ECG sample and the exit of the last byte of the Hermite characterization for the corresponding beat. For the throughput, we observed the maximum rate at which beat data could be supplied to the system. For a clock of 50 MHz, the latency and throughput obtained were 0.82 s and 3 beats/s.
To characterize the power consumption, we observed the difference between the idle current drawn by the FPGA when it was quiescent (unprogrammed) and the current drawn by the FPGA during full speed (maximum throughput) operation. We use the power measurement setup presented in Figure 9.
The Basys 3 board features a jumper, JP2, that acts as the power-source select and is located at the entrance of the power supply. It selects whether power comes from the USB cable or from an external power supply. In this work, we use the 5 V USB power supply. We add a shunt resistor across this jumper and use a differential probe to measure the voltage over the shunt. Since the resistor is in series with the power supply, we are able to obtain the current that flows into the board from the supply. Knowing the input voltage and input current, we obtain the power consumed by the board. The resistance value is chosen to ensure the correct functionality of the power supply regulators located on the Basys 3 board, as explained next. Voltage regulator circuits create the required 3.3 V, 1.8 V and 1 V from the main power supply [32]. The 1 V supply is used for the FPGA core; 1.8 V is used for the auxiliary FPGA supply and RAM memory; and 3.3 V is used for IO pins, the USB connection, clocks, Flash, etc. Based on the typical and maximum current values for each of these supplies, listed in [32], we compute an approximate value for the shunt resistance. According to our estimates, the peak current for the design should not exceed 80 mA on the 1 V supply, and the current demand on the other two supplies should not be extreme either. As a result, when the maximum typical current values for the 1.8 V and 3.3 V supplies (150 mA and 1.5 A, respectively) and 80 mA for the 1 V supply are assumed, the approximate value for the resistance is 0.52 Ω. We use 0.47 Ω for our measurements as a standard value close to the estimated one.
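In other words, the measurement reduces to Ohm's law on the shunt:

$$I_{\text{board}} = \frac{V_{\text{shunt}}}{0.47\,\Omega}, \qquad P_{\text{board}} = 5\,\text{V} \times I_{\text{board}}.$$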
Since we are interested in the current consumed by the design only, we first measure the current when the FPGA is programmed and the application is running, i.e., data are being sent and received. The measured current is 170.96 mA on average. Then, we subtract the current measured when the FPGA is programmed but without any data traffic, which is 165.36 mA. By subtracting this current, we eliminate the current consumed by other parts of the board as well as the FPGA static current. Consequently, the proposed design consumes 5.6 mA on average. When this current is multiplied by the 5 V input voltage, it results in approximately 28 mW of FPGA dynamic power.
The results are summarized in Table 2. The obtained latency and throughput fit the real-time requirements, and the power consumption is low.
Discussion
The hardware implementation of automatic ECG analysis systems is essential for ambulatory monitoring of patients, and there are several examples in the literature of both ASIC [23,24] and FPGA [25,26] implementations. However, to the best of our knowledge, there are no hardware implementations of ECG signal processors that apply the Hermite fit for beat compression or classification. For example, the work in [26] describes the implementation of another technique, called Empirical Mode Decomposition, applied to ECG signals in a Spartan 3E FPGA, but does not report power, performance and area metrics. As for the detection performance, the overall accuracy reported there is 94.8%, while with Hermite functions it is possible to achieve 96.66%. The work in [25] is a HW/SW co-design where the QRS complex extraction is implemented in an FPGA and is based on geometrical properties of a two-dimensional phase-space portrait of the ECG signal, while the beat classification is performed by Open Source ECG analysis software. There, the data are read from and written to the on-board DDR memory, whereas in the present work the data are sent and received by UART, corresponding to a more realistic case, since the UART could easily be replaced by an ADC interface. Additionally, the pre-processing and pre-partitioning are performed in software in [25], so a fair comparison with this work would be difficult to achieve. The authors reported a premature ventricular contraction detection rate of 92.36%, while with Hermite functions it is possible to reach 96.86%.
Preliminary results of the proposed design were presented in [17]. Only the Hermite fit process was tackled in our previous work, so the pre-processing chain was neglected. A peak power consumption of 3 W was reported there, in contrast with the average power of 28 mW achieved in the current design. This new version of the circuit can be used to feed a hardware block performing data compression or classification in real time with low power consumption.
The reported performance metrics are promising. The latency is close to a second, which is suitable given that heart rates are commonly between 1 and 2 beats/s; thus, the results of the first beat characterization appear after 1 or 2 beats. The throughput is around 3 beats/s, which covers heart rates up to 180 beats/min, an extreme situation for a person. Finally, the power consumption is around 30 mW, which is a low value for an FPGA.
In summary, the results show that the system is capable of real-time, low-power processing.
Conclusions
In this paper, we presented the design of an FPGA-based system able to perform real-time ECG characterization through Hermite polynomials. The AHIR HLS tool was used to perform the development and testing. The system was successfully implemented on a low-cost board with a latency of less than 1 s, a throughput of 3 beats/s and a power consumption of around 28 mW. Hence, we demonstrated that complex, low-power, real-time ECG analysis is possible through high-level synthesis.
The current design can be easily modified and extended due to the flexibility provided by the AHIR set of tools. On the one hand, the number of polynomials used in the estimation (i.e., N) can be increased to improve the accuracy of the estimations. Moreover, a clustering block to help in the classification process can be added [10]. In either case, a larger FPGA device would be necessary. Additionally, the throughput can be increased to cover higher heart rates, which involves increasing parallelism and, therefore, the resource demand. All of these extensions can be easily designed and tested with the HLS approach provided by AHIR.
Author Contributions: M.P.D., G.C., D.G.M. and A.O., design of the signal processing algorithms. M.P.D., G.C. and R.J., conceptualization, implementation and testing of the research. All authors developed the methodology. M.P.D., G.C. and R.J. discussed the basic structure of the manuscript, drafted its main parts, and reviewed and edited the draft. All authors have read and agreed to the published version of the manuscript.
Funding: This research has been partially funded by the Spanish Ministry of Science, Innovation and Universities through project RTI2018-095324-B-I00.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-10-19T16:01:51.338Z | 2021-09-22T00:00:00.000 | {
"year": 2021,
"sha1": "f668e6e96508e7b7d7ad7d9a1b37f9c68aed8f08",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/10/19/2324/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "dc35fdbcb04834526c26ac9d16c13db797cb2268",
"s2fieldsofstudy": [
"Engineering",
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
118845395 | pes2o/s2orc | v3-fos-license | Generating scale-invariant tensor perturbations in the non-inflationary universe
It is believed that the recent detection of large tensor perturbations strongly favors the inflation scenario in the early universe. This conventional wisdom depends on the assumption that Einstein's general relativity is valid in the early universe. In this paper we show that nearly scale-invariant primordial tensor perturbations can be generated during a contracting phase before the radiation-dominated epoch if the theory of gravity is modified to a scalar-tensor theory at that time. The scale-invariance protects the tensor perturbations from being suppressed at large scales, so they may have significant amplitudes, fitting BICEP2's result. We construct a model to achieve this purpose and show that the universe can bounce to the hot big bang after a long period of contraction, and that at almost the same time the theory of gravity approaches general relativity through the stabilization of the scalar field. Theoretically, such models are dual to inflation models if we change to the frame in which the theory of gravity is general relativity. Dual models are related by conformal transformations. With this study we reinforce the point that only conformally invariant quantities, such as the scalar and tensor perturbations, are physical. How the background evolved before the radiation era depends on the frame and has no physical meaning. It is impossible to distinguish different pictures by later-time cosmological probes.
PACS number(s): 98.80.Cq, 98.80.Bp

Cosmic inflation [1] is a hypothesis about the early universe. It states that at early times, before the radiation-dominated epoch (we call this period the pre-big bang in this paper), our universe experienced a period of nearly exponential expansion. This paradigm is very successful. It provides not only solutions to the horizon and flatness puzzles that existed in the hot big bang cosmology but also mechanisms to generate the initial density perturbations for structure formation. The single-field slow-roll inflation models generically predict adiabatic, nearly scale-invariant and Gaussian primordial density perturbations, which were confirmed with strong confidence by observations of the cosmic microwave background radiation (CMB) [2]. Besides the scalar (density) perturbations, the single-field inflation models also predict nearly scale-invariant and sizeable tensor perturbations, i.e., primordial gravitational waves, with an amplitude proportional to the energy scale at which inflation takes place. The tensor perturbations leave a unique imprint on observations by producing the curl-like B-mode polarizations on the CMB sky. Recently the BICEP2 collaboration announced the detection of the B-mode polarization of the CMB at large angular scales [3]. This suggested a large tensor-to-scalar ratio, r = 0.20^{+0.07}_{-0.05} at 68% CL, if all of these B-mode polarizations are originated from the primordial tensor modes. It is commonly believed that this result, if confirmed, strongly favors the single-field inflation models. Some alternative models, such as the Ekpyrotic/Cyclic universe [4], the bouncing universe [5] and so on, usually predict non-detectable tensor modes and are disfavored by BICEP2's result.
This can be seen from the following general arguments. The single-field model in the context of Einstein's gravity has the action $S = (1/2)\int d^4x \sqrt{g}\,[R + \partial_\mu\phi\,\partial^\mu\phi - 2V(\phi)]$, where we have used units in which $M_p = 1/\sqrt{8\pi G} = 1$. The tensor perturbation from $ds^2 = a^2 d\eta^2 - a^2(\delta_{ij} + \gamma_{ij})dx^i dx^j$, which is traceless and transverse, $\gamma_{ii} = \partial_i\gamma_{ij} = 0$, has the following quadratic action:

$$S^{(2)}_T = \frac{1}{8}\int d\eta\, d^3x\; a^2\left[\gamma_{ij}'^2 - (\partial_k\gamma_{ij})^2\right]. \qquad (1)$$

This is a massless spin-2 field coupled to the background through the cosmic scale factor a. The prime denotes the derivative with respect to the conformal time η. The tensor perturbation has only two components, denoted by λ = +, ×. Its re-scaled amplitude $v_k$, which relates to $\gamma_{ij}$ in Fourier space as

$$\gamma_{ij} = \sum_\lambda \int \frac{d^3k}{(2\pi)^{3/2}}\; e^{i\vec{k}\cdot\vec{x}}\left[v_k(\eta)\,\hat{a}(\vec{k},\lambda) + v_k^*(\eta)\,\hat{a}^\dagger(-\vec{k},\lambda)\right] e_{ij}(\vec{k},\lambda)/a,$$

here $e_{ij}$ is the polarization tensor and $\hat a$ and $\hat a^\dagger$ are the annihilation and creation operators of gravitons respectively, satisfies the equation of motion

$$v_k'' + \left(k^2 - \frac{a''}{a}\right)v_k = 0.$$

By quantization and choosing the Bunch-Davies vacuum at initial time $\eta \to -\infty$, $v_k = e^{-ik\eta}/\sqrt{2k}$, the resulting tensor power spectrum at later times, when $\eta \to 0$, is

$$P_T \propto k^{3-2\nu}.$$

Here we have assumed the background equation of state w is a constant and ν = |3(w − 1)/2(1 + 3w)|. We also, for convenience, put the time of the whole pre-big bang phase in the range −∞ < η < 0, so the time η = 0 is approximately the beginning of the hot expansion. Taking the Ekpyrotic/Cyclic universe [4] as an example, the perturbations on cosmological scales were generated in the slow contracting phase, in which w > 1, hence the tensor spectral index n_T = 3 − 2ν > 2. This spectrum has a large blue tilt, and the power is deeply suppressed, to an undetectable level, at the large scales corresponding to the observations of BICEP2. By contrast, inflation, during which w ≃ −1 and a ∼ 1/(−η), predicts a constant and nearly scale-invariant tensor spectrum, i.e., P_T = const., n_T ≃ 0. So the power is not suppressed at large scales and can be significant. In addition, a contracting universe dominated by matter, in which w ≃ 0 [6], can also produce a nearly scale-invariant tensor spectrum. In this scenario the matter contraction must be interrupted by other phases well before bouncing to the hot big bang, as indicated in the matter bounce models [7]; otherwise both the scalar and tensor perturbations, scaling as (−η)^{−6} during the matter contraction, will blow up and make the background unstable. Another problem the matter contraction models encounter is that the classical anisotropies, which scale as a^{−6}, are not suppressed. So it seems that the inflation scenario has a strong preference over other models if BICEP2's result is confirmed.
According to the above observations, if the tensor spectrum is not strongly blue-tilted, it is possible to have significant tensor perturbations at large scales. Currently there are some studies on the tensor spectral index, see e.g., [8], but the confidence level is low and we need more data to improve it. But if we get a nearly scale-invariant tensor spectrum, as inflation predicts, we have the possibility of obtaining significant tensor modes at large scales. In this paper we focus on the production of a nearly scale-invariant tensor spectrum. The tensor perturbation couples to the background through the factor a² in the action (1). A few calculations show that a scale-invariant perturbation with constant amplitude can only be achieved if the background is nearly de Sitter, i.e., inflation, where a = −1/(H_I η) with the Hubble parameter H_I a constant. This conclusion is made under the assumption that the theory of gravity is Einstein's general relativity. If the theory of gravity is modified in the early universe, it is possible to obtain a scale-invariant tensor perturbation without inflation. We can simply argue that if there is a non-minimal coupling to the curvature scalar, F(φ)R, the quadratic action (1) will be modified as

$$S^{(2)}_T = \frac{1}{8}\int d\eta\, d^3x\; a^2 F\left[\gamma_{ij}'^2 - (\partial_k\gamma_{ij})^2\right].$$

Now scale-invariance requires that a√F, instead of a, scales as 1/(−η). That is to say, even if the spacetime deviates from de Sitter significantly, we may still get a scale-invariant tensor perturbation through the time dependence of the non-minimal coupling function F. This possibility has been investigated in Refs. [9,10], in which the authors constructed models in the context of scalar-tensor theory to show that nearly scale-invariant scalar and tensor perturbations can be produced in a slowly expanding universe. In this paper we pursue the production of nearly scale-invariant perturbations during a contraction of the universe based on similar models. Scalar-tensor theories are usually used to model the late-time acceleration of the universe or the variation of the Newton constant [11]. As we know, general relativity has passed all the experiments at low energy scales. But in the early universe the energy scale is very high, and the theory of gravity may get modified and replaced by a scalar-tensor theory. In fact scalar-tensor theories arise naturally from fundamental theories with higher dimensions. All versions of string theory predict a scalar-tensor theory rather than general relativity as the actual theory of gravity, in which the spin-2 graviton has a spin-0 partner, the dilaton. These theories are effective at high energy scales, and at low scales they should approach general relativity, since the solar system experiments have put stringent constraints on deviations from general relativity. We also adopt this point here: in the early universe (pre-big bang) the theory of gravity is scalar-tensor, and it approaches general relativity in the post-big bang era. The model we consider has the general action

$$S = \int d^4x\, \sqrt{g}\left[\frac{F(\phi)}{2}R + P(X,\phi)\right], \qquad (3)$$

where X = (1/2)g^{μν}∂_μφ∂_νφ is the kinetic term. We have considered a general Lagrangian P(X, φ) for the scalar field. The non-minimally coupled function F(φ) should be positive to guarantee the positiveness of the effective Newton constant. The equations of motion obtained by the variation of this action are

$$F\,G_{\mu\nu} = T_{\mu\nu} + \nabla_\mu\nabla_\nu F - g_{\mu\nu}\Box F,$$

where $T_{\mu\nu} = -P\,g_{\mu\nu} + P_X\nabla_\mu\phi\nabla_\nu\phi$, $P_X$ represents ∂P/∂X, and $P_\phi$ and $F_\phi$ are defined in the same way.
At the background level, ds² = a²(dη² − δ_ij dx^i dx^j), these equations reduce to the background equations (5); here we use the reduced Hubble parameter 𝓗 = a′/a and define ρ = −P + 2XP_X as usual. There are some requirements on the model building as far as the background equations are concerned. To solve the flatness and horizon problems, the absolute value |𝓗| = a|H| should increase with time, i.e., d|𝓗|/dη > 0. Furthermore, during the contracting phase, 𝓗 < 0, the energy density ρ is required to increase faster than a^{−6} in order to suppress the classical anisotropies.
In order to discuss the perturbations and their quantization, we use the ADM decomposition ds² = N²dη² − h_ij(dx^i + N^i dη)(dx^j + N^j dη), with the lapse function N, the shift vector N^i, and the induced metric h_ij. Following Maldacena [12], we choose the gauge δφ = 0, h_ij = a²(e^{2ζ}δ_ij + γ_ij). This choice simplifies the calculations, and at the same time the remaining dynamical fields ζ and γ_ij are gauge-invariant. The purpose is to find the quadratic actions of the scalar perturbation ζ and the tensor perturbation γ_ij. With the ADM decomposition the action (3) can be rewritten in the form (6). In these formulae the indices are lowered and raised by the induced metric h_ij and its inverse h^ij, and N_{i|j} represents the covariant derivative of N_i induced by h_ij.
In linear perturbation theory the scalar and tensor perturbations can be considered separately. We first consider the scalar perturbation; in this case h_ij = a²e^{2ζ}δ_ij. We first solve for the constraints N and N^i through their equations obtained from the variation of the action (6) and then plug the result back into (6). At the background level, it is easy to find that N = a and N^i = 0. When inhomogeneous perturbations are included, we need only calculate N and N^i up to linear order, as argued in Ref. [12]. Finally we get the quadratic action of the scalar perturbation, which has been obtained in [13] (see also [14]). In our notation the quadratic action is

$$S^{(2)}_\zeta = \frac{1}{2}\int d\eta\, d^3x \left(\frac{a\phi'}{\theta'}\right)^2\left[\rho_X + \frac{3F_\phi^2}{2F}\right]\left[\zeta'^2 - c_s^2(\partial\zeta)^2\right], \qquad (7)$$

where θ′ = 𝓗 + F′/(2F), and the square of the sound speed c_s² is defined in Eq. (8). We require ρ_X + 3F_φ²/(2F) > 0 to prevent the ζ field from being a ghost and c_s² > 0 to guarantee spatial stability. This is different from the minimal-coupling case, where ρ_X, P_X > 0 are required. Furthermore, in order to obtain a nearly scale-invariant scalar spectrum, the factor (aφ′/θ′)²[ρ_X + 3F_φ²/(2F)] should scale approximately as 1/η².
Then we focus on the tensor perturbation, for which h_ij = a²(e^γ)_ij and N = a, N^i = 0, where e^γ is the exponential of the traceless and transverse matrix γ. The determinant h = det|a²e^γ| = a⁶ exp(Tr γ) = a⁶ is not perturbed. One can also prove that E = −h^{ij}E_ij = 3𝓗 is unperturbed up to second order. With these considerations, we find that the quadratic action for the tensor perturbation is

$$S^{(2)}_T = \frac{1}{8}\int d\eta\, d^3x\; a^2 F\left[\gamma_{ij}'^2 - (\partial_k\gamma_{ij})^2\right]. \qquad (9)$$

Scale-invariance requires a²F ∼ 1/η².
We use a toy model to illustrate these points. The action is

$$S = \int d^4x\,\sqrt{g}\left[\frac{\xi^2\phi^2}{2}R - X - V_0(\xi\phi)^q\right], \qquad (10)$$

so in our notation F = ξ²φ² and V = V₀(ξφ)^q. The kinetic term has the wrong sign; this represents a ghost in general relativity and would cause quantum instability. But in our case, from the above arguments, the non-minimal coupling makes the time-derivative term of the fluctuation have the right sign if ρ_X + 3F_φ²/(2F) = 6ξ² − 1 ≡ α > 0. The spatial stability condition P_X + 3F_φ²/(2F) > 0 puts the same constraint, and c_s² = 1. What we quantize are the fluctuations, so this condition is enough to make this model free from instabilities. Through the definitions of the dimensionless parameters x ≡ φ′/(𝓗φ), y ≡ a√(2V)/(𝓗φ), the Friedmann equation, i.e., the first of the background Eqs. (5), may be rewritten as y² = x² + 2(1 + α)x + 1 + α. The second of Eqs. (5) yields an autonomous equation (11) for x,
where ẋ ≡ dx/d ln a and we have defined β = 4 − q. This equation has three fixed points corresponding to three scaling solutions. Both the first and the second critical points, at which y₀ = 0, give (aφ′/θ′) ∝ η^{1/2} instead of 1/(−η), and the produced scalar perturbation has a strong blue tilt. This conflicts with the observations, and we will not consider these two points any further. The third critical point corresponds to the scaling solution (12). One can find that for this scaling solution both prefactors in the quadratic actions (7) and (9) scale as (−η)^{−2(1+b)}, with

$$b \equiv \frac{\beta^2(1+\alpha)}{12\alpha - \beta^2(1+\alpha)}. \qquad (13)$$

Now we study the implications of this solution for the expanding and contracting universes separately. For the expanding universe, in which p < 0 and 𝓗 > 0, the solution (12) can only be stable if β²(1 + α) > 36α, β(1 + α) > 6α or β²(1 + α) < 12α, β(1 + α) < 6α. The first case requires b < −1, and consequently the spectra generated in such an expanding universe deviate from scale-invariance significantly. The second case can achieve scale-invariance if β²(1 + α) ≪ 12α; this recovers the inflation or the slow expansion studied in Refs. [9,10], depending on the specific parameter space, and we will not consider it any further in this paper. Now let us consider the contracting universe, for which p > 0 and 𝓗 < 0. Combining with the stability conditions of Eq. (11), the solution (12) is an attractor if and only if β²(1 + α) > 36α > 6β(1 + α) or β²(1 + α) < 12α < 2β(1 + α). Similarly, the first case gives b < −1 and leads to non-scale-invariant spectra. We will focus on the second case, in which

$$\beta^2(1+\alpha) < 12\alpha < 2\beta(1+\alpha). \qquad (14)$$
The scale-invariant scalar and tensor perturbations can be obtained if β ≪ 12α/(1 + α). In terms of the standard procedure learned from inflation theory one has the spectra P s = A s (k * )(k/k * ) −2b and P t = A t (k * )(k/k * ) −2b , both have the same small red tilts because b > 3α/(1−2α) from the inequalities (14). The amplitudes A s (k * ) and A t (k * ) at the pivot scale k * mainly depend on the model parameter V 0 which defines the energy scale at which the primordial perturbations were created. One can calculate that the observational result A s (k * ) ∼ 10 −9 requires V 0 ∼ 10 −8 . The tensor-to-scalar ratio is fixed in this model r = 16b/(1 + b) and if we choose the parameters α = 0.004, β = 0.024 one can easily find that r = 0.19 and n s − 1 = n t ≃ 0.0244.
So we have seen that with the model (10), nearly scale-invariant scalar and tensor perturbations consistent with the current observations can be obtained in a contracting universe if the parameters α and β are positive and small. In terms of them, the action (10) can be rewritten in the form (15), which shows that non-zero α and β parameterize the breaking of conformal symmetry. The model in this form was also considered in [9]. It is approximately conformally invariant, and we may think of the scale invariance of the spectra as originating from the (approximate) conformal invariance of the model.
It deserves pointing out that this model is not complete. The contracting phase should end at some later time and bounce into an expanding spacetime. The toy model itself does not provide the mechanism for the bounce. For this purpose we make a small deformation of the toy model (10): we add two higher-power terms in Φ ≡ ξφ to the potential, Eq. (16), so that it takes the form depicted in Fig. 1. The deformation produces a bump and an extra local minimum in the potential. The evolution begins at Φ ∼ 0. When Φ ≪ 1 this deformed model is almost the same as the model (10). In this regime Φ changes slowly and the universe is contracting; nearly scale-invariant primordial perturbations are generated. At later times, when Φ is not so small, the higher-power terms become important and the universe exits from the contracting phase and bounces into expansion. Soon after the bounce, the field Φ crosses the bump and then oscillates around the minimum Φ = 1 with damped amplitude. Reheating takes place at this final stage, and the energy in the scalar field is transferred to the produced components, such as radiation. Reheating makes the amplitude of the oscillations decay more quickly. Finally the universe enters the radiation-dominated epoch and the scalar field itself is frozen at the minimum Φ = 1. The evolution of Φ with respect to the cosmic time t is plotted in Fig. 2. With this frozen value the non-minimal coupling term in the action becomes FR/2 = Φ²R/2 = R/2, so after reheating the theory of gravity is identical to general relativity. We also plotted the time evolutions of the Hubble parameter H = 𝓗/a and the scale factor. Hence we see that sizable gravitational waves, as suggested by BICEP2, can also be generated in a pre-big bang phase different from inflation. The price we pay is modifying gravity. Using the scalar-tensor theory we have shown that a nearly scale-invariant and significant tensor perturbation can be obtained in a contracting universe. Such a tensor perturbation can also be obtained during a slowly expanding phase in the same context, as pointed out in Refs. [9,10]. It is well known that a scalar-tensor system takes different forms in different frames. The frame we discussed above is usually called the Jordan frame, distinguished from the Einstein frame discussed below. It is assumed to be the frame in which matter couples to the metric minimally, so that there is no extra force mediated by the scalar field φ among the matter; this is consistent with current experiments testing the equivalence principle in the matter sector. But gravity itself does not obey the strong equivalence principle, because the scalar field would mediate a fifth force in the gravity sector. However, in our model (16) this fifth force is not detectable by current gravitational probes, because after bouncing to the hot expansion the scalar field has been stabilized to the minimum Φ = 1 through oscillations and decays, and the theory of gravity approaches general relativity from the beginning of the radiation-dominated epoch. In other words, in our model the deviation of gravity from general relativity is only significant in the early universe; it does not change the post-big bang history and cannot affect later-time gravitational probes.
It is also necessary to comment on the possibility of the nonsingular bouncing behavior realized in our model. In general relativity, a nonsingular bounce requires violation of the null energy condition by the matter field; in particular, the equation of state of the matter should cross −1, similar to the behavior of quintom dark energy [15] at late times. This is not true for scalar-tensor theories. In fact, in the context of scalar-tensor theories, effective phantom or quintom dark energy models without violation of the null energy condition have been discussed extensively in the literature, see e.g., [16]. Hence, applying scalar-tensor theory to the early universe, it is also possible to realize a nonsingular bouncing universe without introducing ghosts. Such an example was provided in Ref. [17], where a nonsingular universe was obtained in the generalized Brans-Dicke theory without a potential. In our model, with the quadratic non-minimal coupling and the potential (16), the bounce happens at the point where ρ = −φ′²/(2a²) + V = 0, but neither the scalar field nor the graviton is a ghost. According to [16,18], a scalar-tensor theory of the type (3) is stable if

$$F > 0, \qquad F_1 \equiv F P_X + \frac{3}{2}F_\phi^2 > 0.$$

The first requirement guarantees a positive Newton "constant" and prevents the graviton from being a ghost, and the second requirement protects the scalar field from being a ghost. These requirements are fully satisfied in our case. The non-minimal coupling function F = ξ²φ² = Φ² is positive everywhere because φ ≠ 0, and F₁ = −ξ²φ² + 6ξ⁴φ² = αξ²φ² > 0 because α is a positive parameter, as we discussed before. So in our model the bounce is stable. For comparison with the discussion in the Jordan frame, it is useful to see what happens in the Einstein frame. For the toy model (10), if we rescale the metric ḡ_μν = Ω²g_μν with Ω = ξφ = Φ and make appropriate field redefinitions, one may obtain the action in the Einstein frame, Eq. (20). The conformal transformation is essentially identical to redefinitions of the scalar and tensor fields. The action (20) has been used to model power-law inflation in the literature, and the inflation is an attractor solution if 0 < β√((1 + α)/(6α)) < √2. With the same parameters α = 0.004, β = 0.024, this inflaton has the equation of state w = −0.992. Similarly, this inflation model is not complete because it needs other mechanisms to end the inflation. The deformed model with the potential (16) has, in the Einstein frame, the potential V̄ = 2V₀[cosh(−β√((1 + α)/(6α)) φ̄) − 1], with a minimum at φ̄ = 0. With this potential the inflation has a graceful exit. In the Einstein frame, other matter should couple to the metric non-minimally. However, after inflation, the scalar field φ̄ has relaxed to the vacuum φ̄₀ = 0, and these non-minimal couplings, which depend on the exponential of φ̄, reduce to minimal couplings. This means that the Jordan and Einstein frames are identical at late times in our model, and it shows again that the difference between these two frames is significant only in the early universe. Though the scalar field is stabilized at the vacuum φ̄₀ = 0, its fluctuation still transmits a residual force between matter. It is straightforward to show that the fluctuation around the vacuum, φ̃ = φ̄ − φ̄₀, has the potential V̄ ≃ (1/2)m_eff²φ̃², with m_eff = β√(V₀(1 + α)/(3α)). Please note that we use units with M_p = 1 in this paper. With the parameters that produce the right primordial perturbations, the effective mass is m_eff ∼ 10⁻⁵ M_p ∼ 10¹³ GeV.
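As a numerical check, with the quoted parameters the reconstructed mass formula gives

$$m_{\rm eff} = \beta\sqrt{\frac{V_0(1+\alpha)}{3\alpha}} = 0.024\sqrt{\frac{10^{-8}\times 1.004}{0.012}} \approx 2\times 10^{-5}\,M_p,$$

which is of order $10^{13}$ GeV for $M_p \approx 2.4\times 10^{18}$ GeV.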
Due to this high mass, the residual force is short-range and invisible to fifth-force searches. Current experiments show that no deviations from Newton's inverse-square law have been found above distances of 10⁻⁸ m [19,20]; this places a lower limit on the effective mass of the scalar field of m_eff > 10 eV. In our case the effective mass is well above this limit. One can show that in both frames the created scalar and tensor perturbations are the same. This reflects the fact that the gauge-invariant scalar and tensor perturbations are frame-independent, as demonstrated in Refs. [14,21]. Note that the frame or conformal invariance of the scalar perturbation ζ has a relatively limited meaning compared with the invariance of gravitational waves; at least this can be seen from the discussions of [14,21]. The curvature perturbation ζ is only invariant under those conformal transformations ḡ_μν = Ω²g_μν in which Ω is a function of the scalar field φ. However, the tensor perturbation, which relates to the Weyl tensor, is invariant for any Ω.
The conformal invariances of the perturbations have important implications. In terms of the trick from [22], the conformal transformations can be upgraded to gauge transformations. In fact, any scalar-tensor system with a canonically normalized kinetic term can be described by the following conformal invariant action, | 2015-06-08T16:08:15.000Z | 2014-05-01T00:00:00.000 | {
"year": 2014,
"sha1": "9d767104ccac6412bdd44f845aaa36b97c787984",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.physletb.2014.08.008",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "9d767104ccac6412bdd44f845aaa36b97c787984",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
86718106 | pes2o/s2orc | v3-fos-license | Relevance of vancomycin susceptibility on patients outcome infected with Staphylococcus aureus
Background: Staphylococcus aureus is a serious pathogen with high rates of complications. We aimed to study the susceptibility and outcome of S. aureus infection. Methods: A retrospective multicentre study was conducted in three hospitals in Amman, Jordan. Between June 2013 and March 2014, laboratory records were reviewed for culture-positive samples growing S. aureus, and the patients' medical records were reviewed for demographic data, predisposing conditions, vancomycin MIC level, and outcome. Inpatients and outpatients were included; a case was classified as either hospital-associated (HA), community-associated (CA), or healthcare-associated (HCA). Data were entered as Excel sheets and statistically analyzed using SPSS version 21. Results: A total of 127 patients (46% MRSA) were culture-positive for S. aureus collected from different sources. Of these, eighty (63%) were inpatients. High resistance rates to non-β-lactam antimicrobials were recorded. Glycopeptide agents were the antibiotics of choice for the treatment of infections caused by MRSA strains. Complication rates, including mortality, were higher in patients with MRSA infections, whereas hospital stay was longer for patients infected with MSSA. Conclusion: Infection rates with MRSA were high among patients. Knowing vancomycin MICs has value for the treatment of S. aureus infection and has implications for patient outcomes, though most outcomes were significantly worse with MRSA infection.
Ahmad Riyad Alsayed1, Malek Alzihlif2, Jamal Wadi Al Ramahi3
1 Department of Clinical Pharmacy, School of Pharmacy, Applied Science Private University, Jordan. 2 Department of Pharmacology, School of Medicine, The University of Jordan, Amman, Jordan. 3 School of Medicine, The University of Jordan, Amman, Jordan. Jordan Hospital Medical Center, Amman, Jordan.
Contact information: Jamal Wadi Al Ramahi MD, FIDSA. Address: Adjunct Assistant Professor of Medicine, School of Medicine, University of Jordan. Jordan Hospital Medical Center, Amman, Jordan, 11118. jamalwadimd@yahoo.com
Introduction
Staphylococcus aureus is a serious pathogen and a leading cause of both community- and healthcare-associated infections. Several factors may predispose patients to the development of S. aureus infection, such as stays in intensive care units (ICUs), residence in long-term care facilities, surgery, immunocompromised conditions, hemodialysis, and various invasive procedures [1][2]. In some healthcare systems, S. aureus was the second most common isolate, accounting for 20% of cases, and a prospective analysis in 49 hospitals in the United States between 1995 and 2002 reported that the proportion of MRSA increased from 22% in 1995 to 57% in 2001 [3]. Among patients consecutively admitted in 1993 to the adult intensive care unit (ICU) of Jordan University Hospital in Amman, Jordan, the most frequently isolated species was S. aureus [4]. A single-center study conducted at Jordan Hospital over a 3-year period found S. aureus to be the third most common blood isolate in the ICU, after coagulase-negative staphylococci and E. coli, accounting for 9.8 percent of cases [5]. Nonetheless, in the mid-1990s there was an increase in the number of MRSA infections among patients who lacked typical healthcare exposure. This increase has been associated with the recognition of different MRSA strains, namely community-associated MRSA (CA-MRSA), and these strains have rapidly spread around the world, causing infections in the general population [6-12]. In an analysis of 132 cases of MRSA bloodstream infection in patients admitted to a hospital in Atlanta, USA, in 2004, molecular typing studies demonstrated that 34% of isolates were CA-MRSA, genetically distinct from the traditional HA-MRSA strains [13]. HA-MRSA strains carry a relatively large staphylococcal chromosomal cassette mec (SCCmec) belonging to type I, II, or III. These cassettes contain the mecA gene, which is nearly universal, and such strains are often resistant to many classes of antimicrobials.
In contrast, CA-MRSA isolates carry SCCmec type IV or V, which are smaller and presumably mobile elements; these strains are resistant to fewer non-β-lactam classes and frequently carry PVL genes [14]. Patients infected with MRSA strains need more care and have a poorer prognosis. Several studies have demonstrated increased mortality among patients infected with MRSA compared to those with MSSA infection [15][16][17].
This study aims to identify S. aureus susceptibility to vancomycin and the outcomes of S. aureus infection in patients in Jordan.
Study Design and Settings
This is an observational multicenter study that included three private hospitals (Arab Medical Center, Al-Khalidi Hospital and Medical Center, and the University of Jordan Hospital) in Amman, Jordan. The study was approved by the institutional review board (IRB) of each hospital. Outpatients and inpatients who had cultures positive for S. aureus were recruited over the period June 2013 through March 2014. The records of inpatients were reviewed, and patients were followed during hospitalization and after discharge. A case report form was filled in that included the following items: patients' demographic data (age, sex, weight, length of stay (LOS)); admission location (ICU, medical, surgical, gynecology, or pediatric); the chief complaint on admission; diagnoses; source of culture; infectious diagnosis; surgical procedures; previous antibiotic use; the antimicrobial susceptibility of the S. aureus isolate; and predisposing clinical conditions and comorbidities, including malignancy, administration of steroids (> 20 mg/day of prednisone or its equivalent for more than 14 days prior to specimen culture), antineoplastic chemotherapy in the 3 months prior to culture collection, long-term care facility residence, indwelling catheters, intravenous drug use, diabetes mellitus, kidney disease, hemodialysis, skin or soft tissue lesions, respiratory illness, surgical wounds and surgery requiring more than 48 h of hospitalization in the 30 days prior to admission, invasive procedures (including cardiac catheterization, arterial angiogram, upper endoscopy, colonoscopy, bronchoscopy, tracheostomy, bone marrow aspiration, and renal biopsy), hospitalization in the previous 12 months, and a history of previous MRSA infection and/or colonization. Data on the use of antibiotics in the last three months and/or the last week, and frequent use of antibiotics prior to exposure to vancomycin or another anti-MRSA agent, were recorded.
Inclusion criteria
S. aureus culture-positive specimens from inpatients and outpatients were included. The isolated S. aureus were classified as HA, CA, or HCA. An isolate was considered CA if it was recovered within 48 h of hospitalization or obtained from an outpatient source, or if it was recovered after 48 h of hospitalization but the infection was believed to have been incubating on admission. HCA cultures were those from patients who frequently need healthcare attention or invasive procedures but are not admitted, e.g., hemodialysis patients (CDC definition). Non-duplicate strains of S. aureus isolates were considered. The outcomes evaluated, in relation to the vancomycin MIC distribution, were: improved and discharged, discharged without improvement, switch to another antibiotic, relapse-progression, ≤ 30-day readmission, infection-related readmission, and all-cause mortality among patients suffering from S. aureus infection.
Identification of S. aureus
S. aureus was identified by routine standard hospital microbiology laboratory procedures. Methicillin resistance was detected by oxacillin and cefoxitin Kirby-Bauer disk diffusion, and vancomycin MICs were measured by E-test (Retro C80™, AB Biodisk, Sweden) or by the VITEK 2 system (bioMérieux) for identification and antibiotic susceptibility testing of gram-positive cocci. Vancomycin susceptibility was defined according to the CLSI breakpoints (M100, Performance Standards for Antimicrobial Susceptibility Testing [18]): susceptible ≤ 2 mg/L, intermediate 4-8 mg/L, and resistant ≥ 16 mg/L.
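The CLSI breakpoint bands cited above map directly onto a simple lookup, sketched below; because MICs come from doubling dilutions, no observed value falls between the susceptible and intermediate bands, so the two threshold tests cover all cases.

```python
def vancomycin_category(mic_mg_per_l: float) -> str:
    """Map a vancomycin MIC (mg/L) onto the CLSI M100 bands cited above:
    susceptible <= 2, intermediate 4-8, resistant >= 16."""
    if mic_mg_per_l <= 2:
        return "susceptible"
    if mic_mg_per_l <= 8:
        return "intermediate"
    return "resistant"

for mic in (0.5, 1.0, 2.0, 4.0, 16.0):
    print(f"MIC {mic} mg/L -> {vancomycin_category(mic)}")
```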
Statistical Analysis
Statistical analyses were performed using SPSS 21 (Statistical Package for the Social Sciences, Version 21, IBM Corporation, Chicago, IL, USA), and tables were initially prepared in Microsoft Excel (Microsoft Corporation). All data were analyzed with non-parametric methods because of the small numbers per group. Counts were transformed to frequencies and analyzed as relative frequencies among MSSA and MRSA covariates and outcomes. The Wilcoxon signed-rank test was used to analyze paired MRSA-MSSA differences for the covariates and outcomes. Regression analysis was used to examine the relation between MIC and length of hospital stay, and Kruskal-Wallis tests were used to assess differences among means. A P-value < 0.05 was considered statistically significant.
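As one illustration of this analysis pipeline, the following Python sketch runs the same non-parametric tests with SciPy on made-up numbers; the arrays are placeholders for demonstration, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder paired relative frequencies (MSSA vs. MRSA) across outcomes.
mssa = np.array([0.55, 0.10, 0.05, 0.10, 0.05])
mrsa = np.array([0.31, 0.21, 0.17, 0.17, 0.14])
stat, p_wilcoxon = stats.wilcoxon(mssa, mrsa)   # paired, non-parametric

# Kruskal-Wallis across three MIC-defined length-of-stay groups (toy data).
los_low, los_mid, los_high = [5, 8, 12, 14], [7, 9, 30, 40], [17, 73]
h_stat, p_kw = stats.kruskal(los_low, los_mid, los_high)

# Simple regression of mean length of stay on MIC-group midpoint (toy data).
reg = stats.linregress([0.25, 0.75, 1.25], [9.6, 14.06, 45.0])

print(p_wilcoxon, p_kw, reg.rvalue, reg.pvalue)
```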
The ages of the patients were distributed into groups of roughly similar size, except that the group aged < 20 years was smaller; elderly patients formed the largest inpatient age group but the smallest outpatient age group. The total distribution of S. aureus susceptibility consisted of … (63%).
Inpatients: demographic features were collected from 80 patients with S. aureus infection, stratified by oxacillin susceptibility. Comorbidities (BMI > 25, CVD, diabetes, chronic kidney disease, chronic respiratory disease, and malignancy) were distributed significantly differently between MSSA and MRSA (P < 0.05). However, there was no significant difference for smoking, CNS disorders, or skin diseases (P > 0.05). There was a significant difference between the numbers of MSSA and MRSA cases with antimicrobial use within either the 3 months or the 12 months prior to the study isolate (P < 0.05), and there were significant differences between the two groups of isolates based on the ward from which they were isolated (P < 0.05) (Table 2).
Both strains of S. aureus were closely matched in their vancomycin susceptibility. The majority of MSSA isolates (20) and MRSA isolates (29) fell in the range of 0.5-1.0 mg/L. There were 4 MSSA and 1 MRSA isolates below this range (MIC < 0.5 mg/L), 2 MSSA and 3 MRSA in the range MIC > 1-1.5 mg/L, 1 MSSA and 2 MRSA in the range MIC > 1.5-2 mg/L, and none of either strain with MIC > 2 mg/L (Figure 2).

Both strains were also tested for susceptibilities by location of isolation (inpatient versus outpatient), after exclusion of the antimicrobials with zero susceptibility rates against MRSA, such as carbapenems and cephalosporins. For MSSA, there were statistically significant differences (P < 0.05) between inpatients and outpatients, with better inpatient susceptibility for clindamycin and erythromycin but better outpatient susceptibility for quinolones and TMP-SMX. For MRSA, there were likewise significant differences (P < 0.05) between inpatients and outpatients: susceptibility rates were better among outpatients for quinolones, gentamicin, and rifampin, but better among inpatients for erythromycin and TMP-SMX (Figure 3). S. aureus susceptibility to vancomycin, teicoplanin, tigecycline, and rifampin was 100%. Cephalosporins, carbapenems, penicillin, and piperacillin/tazobactam were 0% active against MRSA. After exclusion from the statistical analysis of agents with 100% susceptibility in both strains and agents with 0% susceptibility in either strain, susceptibility rates for the remaining antimicrobial agents (quinolones, clindamycin, erythromycin, TMP/SMX, and gentamicin) differed significantly in favor of MSSA (P < 0.001) (Figure 4).

The outcome rates for S. aureus were plotted as strain-specific relative frequencies for MSSA and MRSA within the MIC range 0.5-1.0 mg/L; the other MIC ranges were not considered for analysis because they contained zero or very few isolates. Across all outcomes, the difference in relative frequencies between patients with MSSA and MRSA pointed to a worse MRSA outcome (P = 0.018), as did each individual outcome by two-tailed Wilcoxon signed-rank test: improved and discharged (P = 0.008), relapse-progression (P = 0.001), readmission within 30 days (P = 0.005), infection-related readmission (P = 0.008), and all-cause mortality (P = 0.025); the difference was not significant for discharged without improvement (P = 0.083) and switch to another antibiotic (P = 0.083) (Figure 5).

Length of hospital stay was strongly associated with vancomycin susceptibility for MSSA: it averaged 9.6 days (n = 4, SD 5.86) when the MIC was less than 0.5 mg/L, 14.06 days (n = 20, SD 20.79) for MIC > 0.5-1 mg/L, and 45 days (n = 2, SD 39.59) for MIC > 1-1.5 mg/L. Regression analysis showed a perfect fit for the relation between MIC and length of hospital stay (R = 1, R² = 1), while the Kruskal-Wallis test showed no significant differences in length of stay among the three groups for the cited numbers of patients per group (P = 0.386).
Discussion
Our study showed that the MRSA rate among all isolates of S. aureus was 45.7%; this rate is lower than rates reported earlier [7,9], including 53.3% from Jordan [19]. The rate of CA-MRSA (including HCA-MRSA) was 51.8%, consistent with a previous study in which it ranged from 50.5% to 79.5% [12]. Patients with MRSA were more likely to have a risk factor. In this study, 78% of MSSA cases and 71.8% of MRSA cases had a history of hospitalization in the previous 12 months; these rates differ from those reported by Lescure et al., which were 67% for MSSA and 85% for MRSA cases [20]. Comorbidities including BMI > 25, CVD, diabetes, chronic kidney disease, chronic respiratory disease, and malignancy were significantly more frequent in MRSA-infected patients (P < 0.05). Smoking, CNS disorders, and skin diseases were not different (P > 0.05). There was a significant increase in length of hospital stay and cost for patients with MSSA infection (P < 0.05), similar to a recent study by E. Y. Klein [21], possibly due to the relatively earlier and higher MRSA mortality in our patients (P = 0.025). Other studies have found a higher mortality rate and longer hospital length of stay associated with MRSA infection [16,17,22].
Our data showed that S. aureus vancomycin MICs were mostly concentrated in the 0.5-1 mg/L range, with almost no difference in the relative frequency distributions of MSSA and MRSA, and only a few strains with MIC < 0.5 mg/L or > 1 mg/L. Although the standard of care for the treatment of MSSA is a penicillinase-resistant semisynthetic penicillin or a cephalosporin, studies have shown that elevated vancomycin MICs in MSSA strains are associated with more treatment failures and higher mortality [23,24]. Eight patients with MSSA infection (including 2 cancer patients) were treated with vancomycin; 6/8 (75%) had a poor outcome, and the remaining two patients were younger (31 and 6 years old). Vancomycin is considered suboptimal therapy for the treatment of MSSA infection compared with the anti-staphylococcal β-lactams and is associated with increased rates of treatment failure and mortality [25,26]. Although the Infectious Diseases Society of America (IDSA) guidelines support using vancomycin for MRSA infections when isolates are susceptible (MIC < 2 mg/L), some studies and systematic reviews have demonstrated treatment failure in patients treated with vancomycin when the MIC was ≥ 1.5 mg/L. This is due to the difficulty of attaining the PK/PD target of AUC/MIC ≥ 400 with clinical doses [27-33]. Furthermore, a poorer prognosis has been associated with vancomycin MIC levels > 1 mg/L [34]. Notably, in our patients both MRSA and MSSA showed similar vancomycin MIC distributions; fortunately, the vast majority fell at ≤ 1 mg/L, for which the PK/PD target is attainable when serum trough levels of 15-20 mg/L are achieved clinically. Our data also suggested, to some extent, that the higher the S. aureus vancomycin MIC, the longer the patient's hospital stay; although the regression and correlation between MIC and length of stay were perfect, the difference between group means was not significant for the number of patients analyzed in each group (P = 0.386).
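To see why MICs at the upper end of the susceptible range are hard to cover, a rough back-of-envelope AUC/MIC check can be sketched as below; the dose and clearance values are assumed for illustration only and are not patient data from this study.

```python
def auc_over_mic(daily_dose_mg: float, clearance_l_per_h: float,
                 mic_mg_per_l: float) -> float:
    """Rough steady-state estimate: AUC24 (mg*h/L) ~= daily dose / clearance,
    then divide by the MIC to get the 24-h AUC/MIC ratio."""
    return (daily_dose_mg / clearance_l_per_h) / mic_mg_per_l

# Assumed 2 g/day dosing and ~5 L/h vancomycin clearance (illustrative only).
for mic in (0.5, 1.0, 1.5, 2.0):
    ratio = auc_over_mic(2000, 5.0, mic)
    verdict = "meets" if ratio >= 400 else "misses"
    print(f"MIC {mic} mg/L -> AUC/MIC {ratio:.0f} ({verdict} the >= 400 target)")
```

Under these assumed values, an isolate at MIC 1.5 mg/L already falls short of the 400 target, which matches the treatment-failure pattern the cited reviews describe.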
For several antimicrobials, MSSA and MRSA susceptibilities were compared between inpatients and outpatients. For MSSA, clindamycin and erythromycin showed significantly better activity among inpatients (P < 0.05), whereas among outpatients quinolones and TMP-SMX were better (P < 0.05). For MRSA, erythromycin and TMP-SMX showed significantly better anti-MRSA activity among inpatients (P < 0.05), whereas among outpatients quinolones, gentamicin, and rifampin were better (P < 0.05). The remaining antimicrobials showed no significant differences for either type of staphylococcus (Figure 3). Susceptibility rates of outpatient MRSA isolates to other antimicrobials were 74% for quinolones and 69% for TMP/SMX. An earlier study carried out in the Jordan University Hospital demonstrated MRSA susceptibility rates of 15% to quinolones, 68% to clindamycin, and 68% to gentamicin [35]. In the current study, relatively higher susceptibility rates were observed for quinolones (74%) and gentamicin (94%), with clindamycin at 56%. An older Saudi study from the 1980s [36] found imipenem to have excellent in vitro activity against MRSA isolates, second to vancomycin. Nevertheless, in this study the lack of sensitivity of MRSA isolates to β-lactam antibiotics, exemplified by cephalosporins and carbapenems, is consistent with other more recent studies [7,37,38]. Our findings showed that susceptibility to imipenem was 0% among the 35 MRSA isolates tested. β-lactams were ineffective in vitro against 100% of MRSA cases in this study, whereas up to a decade earlier empirical β-lactam prescriptions were reported ineffective in up to 78.7% of cases and carried a poor prognosis [7,39]. These results clearly demonstrate the escalating rates of resistance.
Patient treatment outcomes were analyzed for the vancomycin MIC range 0.5-1 mg/L; they could not be analyzed for the other vancomycin MIC ranges because the counts were too low. MRSA infection was clearly associated with worse outcomes overall (P = 0.018). When each outcome was analyzed separately, the difference was significant (P < 0.05) for improved and discharged, relapse-progression, readmission within 30 days, infection-related readmission, and all-cause mortality. However, the result was not significant (P > 0.05) for discharged without improvement or switch to another antibiotic.
Conclusion
Increasing MRSA rates leave limited treatment options. Evaluating S. aureus susceptibility to vancomycin by minimum inhibitory concentration may help in predicting outcomes; nonetheless, our study demonstrated significantly worse outcomes with MRSA infection.
"year": 2019,
"sha1": "1cfe20d8df5e710ea93cbe790829e2a1f5dec4ea",
"oa_license": "CCBY",
"oa_url": "http://imed.pub/ojs/index.php/IAJAA/article/download/2343/2085",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9f853488753be2fc2e4843c7e0a5d5d0478ff821",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
CULTIVATING ABUNDANCE AND HARMONY: EXPLORING THE ECOLOGICAL, ECONOMIC, AND CULTURAL IMPLICATIONS OF NATURAL FARMING: A LITERATURE REVIEW
In response to mounting concerns over the environmental impacts of conventional agriculture, the concept of natural farming has gained prominence as a sustainable alternative. This literature review critically examines the multifaceted implications of natural farming, drawing insights from a range of scholarly sources. The review explores its ecological benefits, economic viability, integration of traditional knowledge with modern science, and cultural significance. The findings emphasize natural farming's ability to enhance soil health, promote biodiversity, and foster resilient ecosystems. Economic assessments reveal its potential for reduced input costs and improved resource efficiency. The integration of indigenous wisdom and contemporary insights is recognized as a dynamic pathway toward agricultural innovation. Moreover, the literature underscores the need for further research in areas such as long-term ecological impact, socioeconomic dynamics, agroecological contextualization, and policy support. In conclusion, natural farming emerges as a transformative approach that harmonizes with nature, offering solutions to challenges in agriculture while nurturing ecological health, economic stability, and cultural heritage.
INTRODUCTION
In an era defined by environmental concerns and the urgent need for sustainable agricultural practices, the concept of "Cultivating Abundance with Natural Farming" has emerged as a beacon of hope. This approach to agriculture transcends conventional methods by seamlessly integrating ecological harmony, resource efficiency, and bountiful yields (Smith, A. et al., 2020). Natural farming represents a departure from the heavy reliance on synthetic inputs and mechanization that have characterized modern agriculture, embracing instead the innate intelligence of natural ecosystems (Brown, J. & Miller, C., 2018). This article delves into the essence of natural farming, exploring its principles, methodologies, and the remarkable potential it holds for not only addressing the challenges of food security and environmental degradation but also for fostering a deeper connection between humanity and the land (Jones, R., 2019). Join us on a journey to discover how this sustainable approach offers a transformative pathway towards a regenerative and abundant agricultural future. As the global population continues to expand and environmental concerns escalate, the imperative to reevaluate our agricultural practices becomes increasingly evident. The conventional methods that once promised higher yields and quick profits have shown their limitations, contributing to soil degradation, water pollution, and loss of biodiversity (White, L. et al., 2021). In contrast, the principles of natural farming offer a refreshing perspective that aligns with the natural rhythms of the earth. At its core, natural farming emphasizes the importance of working with nature rather than against it. It draws inspiration from traditional and indigenous farming practices that have stood the test of time, advocating for minimal disturbance to the soil and ecosystems (Gupta, M. & Patel, S., 2017). By avoiding synthetic chemicals, genetically modified organisms, and excessive tilling, natural farming nurtures the soil's health and structure, fostering a thriving microbial community that enhances nutrient availability and plant vitality (Williams, E., 2016). One of the remarkable hallmarks of natural farming is its emphasis on diversity. Polyculture and intercropping are integral components of this approach, as they mimic the complexity of natural ecosystems, where various plant species coexist and interact synergistically (Nguyen, T. et al., 2019). This not only reduces the risk of pests and diseases but also maximizes resource utilization and minimizes waste. In the following sections of this article, I will delve into the fundamental principles of natural farming, examining its methodologies such as effective microorganism (EM) treatments, cover cropping, and natural pest management strategies. I will also explore case studies from around the world where natural farming practices have been successfully implemented, showcasing their tangible benefits in terms of yield, soil health, and economic viability. As humanity seeks sustainable solutions to the pressing challenges of our time, "Cultivating Abundance with Natural Farming" emerges as an integral part of the conversation. It offers not just a pragmatic approach to ensuring food security but a harmonious and regenerative relationship with the planet we call home.
Research Questions:
➢ How does the implementation of natural farming practices impact soil health and microbial diversity, and how do these changes contribute to enhanced nutrient cycling and plant vitality?
➢ What are the economic and environmental implications of transitioning from conventional agricultural methods to natural farming on a regional scale, considering factors such as yield stability, resource efficiency, and greenhouse gas emissions?
➢ What are the most effective strategies for integrating traditional knowledge and modern scientific insights to optimize natural farming techniques, and how can these strategies be adapted to diverse agricultural contexts around the world?
Objectives:
✓ To assess the impact of natural farming practices on soil health indicators, including microbial diversity, nutrient availability, and soil structure, in comparison to conventional agricultural methods.
✓ To analyze the economic feasibility and environmental benefits of adopting natural farming on a local level, evaluating changes in crop yield, resource utilization, and greenhouse gas emissions.
✓ To develop a comprehensive framework that integrates traditional agricultural wisdom and contemporary scientific knowledge, aiming to optimize natural farming techniques and facilitate their adaptation across diverse agroecological zones.
LITERATURE REVIEW
Natural Farming and its Ecological Implications
Introduction:
In recent decades, the urgency to address the environmental impacts of conventional agriculture has prompted a renewed interest in alternative farming practices that prioritize ecological sustainability and harmonious interactions with natural ecosystems. Natural farming, an approach rooted in indigenous wisdom and modern ecological principles, has emerged as a promising solution to mitigate the negative consequences of intensive agricultural methods.

Ecological Benefits of Natural Farming: Natural farming takes a holistic perspective on agriculture, emphasizing the importance of working in tandem with nature rather than against it. Smith et al. (2020) highlight that this approach encompasses various techniques, such as minimal tillage, cover cropping, and integrated pest management, that collectively contribute to enhancing soil health and promoting biodiversity. Gupta and Patel (2017) note that by reducing synthetic inputs and minimizing soil disturbance, natural farming fosters a favorable environment for beneficial soil microorganisms, leading to improved nutrient cycling and increased plant resilience.
Soil Health and Microbial Diversity:
A key focus of natural farming is nurturing soil health, recognizing that healthy soils are the foundation of productive and sustainable agriculture. Williams (2016) explains that the reduction of chemical inputs in natural farming encourages the growth of diverse microbial communities, leading to enhanced soil structure and nutrient availability. This sentiment is echoed by Brown and Miller (2018), who emphasize that the shift from monoculture to diverse cropping systems in natural farming encourages symbiotic relationships between plants and soil microorganisms, contributing to overall ecosystem stability.
Integration of Traditional Knowledge and Modern Science:
A significant finding is the recognition of the integration of traditional agricultural knowledge with modern scientific insights in natural farming. Jones (2019) underscores that this fusion not only preserves cultural heritage but also paves the way for innovative, context-specific farming techniques. Gupta and Patel (2017) further emphasize that such integration enhances the resilience of agricultural systems in the face of changing environmental conditions.

Soil Health and Microbial Diversity: The literature consistently underscores natural farming's positive impact on soil health and microbial diversity. Williams (2016) asserts that reduced reliance on synthetic chemicals leads to a thriving microbial community, resulting in improved soil structure and nutrient availability. This aligns with the findings of Brown and Miller (2018), who highlight the importance of diverse cropping systems in encouraging symbiotic relationships between plants and soil microorganisms.

Cultural and Socioeconomic Implications: Several studies recognize the social and cultural implications of natural farming. Jones (2019) notes that the preservation of indigenous knowledge not only benefits local communities but also contributes to the cultural sustainability of farming practices. Gupta and Patel (2017) highlight that the empowerment of local farmers through natural farming techniques can enhance food security and rural livelihoods. The synthesis of findings from the literature review showcases that natural farming offers a holistic and sustainable approach to agriculture. It presents ecological benefits through improved soil health and microbial diversity, economic viability through reduced input costs, and socio-cultural advantages by integrating traditional knowledge with modern science. The literature collectively underscores the potential of natural farming as a transformative pathway toward a more resilient and harmonious agricultural future.
METHODOLOGY
Search Strategy:
The literature review on natural farming was conducted through a systematic search of academic databases, scholarly journals, and reputable publications. The primary databases used for the search included PubMed, Web of Science, Google Scholar, and Agricola. The search terms employed included "natural farming," "sustainable agriculture," "ecological farming," "soil health," "microbial diversity," "traditional knowledge," and "modern science" (Smith et al., 2020; Gupta & Patel, 2017).
Inclusion and Exclusion Criteria:
To ensure the relevance and quality of the sources, a set of inclusion and exclusion criteria was applied. Included sources were peer-reviewed articles, research papers, conference proceedings, and academic books published within the last 10 years (2013-2023). Articles focused on the ecological, economic, and socio-cultural aspects of natural farming were prioritized. Sources that provided empirical data, case studies, theoretical frameworks, and reviews were included. Non-English language sources were excluded to maintain consistency and accessibility (Jones, 2019; Nguyen et al., 2019).

Data Extraction and Analysis: Upon identification of potential sources, a thorough review was conducted to extract relevant information. Key themes and concepts related to natural farming, such as ecological benefits, soil health, economic viability, and integration of traditional and modern knowledge, were systematically extracted. The extracted data were organized into categories for further analysis and synthesis (Williams, 2016).

Synthesis and Discussion: The extracted data were synthesized to develop coherent themes and insights. By analyzing the identified patterns, connections, and contradictions within the literature, a comprehensive understanding of the subject matter was developed. The synthesis process involved identifying commonalities, summarizing key findings, and highlighting notable trends in the literature.

Citation and Referencing: To maintain academic rigor and integrity, proper citation and referencing were employed. All sources referenced in the literature review were cited using a consistent citation style, adhering to the guidelines of the chosen referencing format (e.g., APA, MLA). The methodology employed for this literature review on natural farming involved a systematic search, rigorous inclusion criteria, thorough data extraction, comprehensive synthesis, and appropriate citation. This methodological approach ensured that the literature review was based on credible and recent sources, providing a well-rounded understanding of the ecological, economic, and cultural implications of natural farming.
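As a simple, purely illustrative rendering of the screening step described above, the sketch below filters hypothetical candidate sources against the stated inclusion criteria; the Source fields and example records are invented for demonstration and do not reproduce the review's actual search results.

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    year: int
    language: str
    peer_reviewed: bool

def meets_inclusion_criteria(s: Source) -> bool:
    """Apply the review's stated screen: peer-reviewed, 2013-2023, English."""
    return s.peer_reviewed and 2013 <= s.year <= 2023 and s.language == "English"

candidates = [
    Source("Ecological farming and soil health", 2016, "English", True),
    Source("Traditional practices survey", 2010, "English", True),    # outside window
    Source("Agricultura natural y suelos", 2019, "Spanish", True),    # excluded language
]
print([s.title for s in candidates if meets_inclusion_criteria(s)])
```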
CONCLUSION
Implications and Pathways of Natural Farming
The comprehensive synthesis of the literature on natural farming reveals a clear narrative that underscores its multifaceted implications and transformative potential. Natural farming stands as an ecological, economic, and socio-cultural alternative that addresses the challenges posed by conventional agricultural practices while offering a harmonious and regenerative pathway forward.

Ecological Harmony and Soil Health: The reviewed literature consistently highlights the ecological benefits of natural farming. The emphasis on reduced chemical inputs, minimal soil disturbance, and diverse cropping systems leads to enhanced soil health, microbial diversity, and nutrient cycling. This holistic approach nurtures ecosystems, fostering a balanced coexistence between crops, soil organisms, and the environment.

Economic Viability and Resource Efficiency: The findings suggest that the economic feasibility of natural farming cannot be underestimated. While initial yield variations might occur, the long-term gains in improved soil health, reduced input costs, and enhanced resource efficiency position natural farming as an economically viable option. The reduced reliance on synthetic inputs can potentially alleviate the economic burden on farmers and enhance their profitability.

Cultural Resilience and Integration of Knowledge: An important aspect emerging from the literature is the integration of traditional agricultural knowledge with modern scientific insights. This convergence not only preserves cultural heritage but also generates innovative and adaptable farming techniques. By empowering local communities and fostering a deeper connection to the land, natural farming contributes to the resilience of agricultural systems and enhances rural livelihoods.

A Blueprint for Sustainability: In the face of mounting environmental challenges, natural farming emerges as a blueprint for sustainable agriculture. It bridges the gap between ancient wisdom and contemporary understanding, offering a nuanced approach that respects ecosystems while providing solutions to food security and environmental degradation. By adopting a diverse range of practices, from reduced tillage to cover cropping, natural farming demonstrates its capacity to regenerate landscapes and nourish communities.

Towards a Regenerative Future: As this literature review elucidates, natural farming holds immense promise as a catalyst for positive change. Its ability to restore ecological balance, ensure economic stability, and preserve cultural heritage positions it as a viable solution to address the complex challenges of our time. By embracing the principles of natural farming and fostering a deeper connection with the land, we pave the way toward a regenerative and abundant agricultural future. In essence, the literature affirms that natural farming represents not just an agricultural practice but a holistic philosophy that nurtures the intricate web of life. As we strive to coexist with nature, enhance sustainability, and secure food for future generations, natural farming offers an inspirational pathway to harmonize with the natural world.

Economic Viability and Environmental Impact: The economic feasibility of transitioning to natural farming practices is a subject of interest. Nguyen et al. (2019) highlight that although initial yields might show variations, the long-term benefits of improved soil health, reduced input costs, and reduced environmental impacts position natural farming as a financially viable option.
White et al. (2021) emphasize that such transitions can lead to decreased reliance on synthetic fertilizers and pesticides, ultimately contributing to reduced pollution and improved water quality.

Integration of Traditional Knowledge and Modern Science: One of the distinctive aspects of natural farming is its integration of indigenous and traditional agricultural knowledge with contemporary scientific insights. Jones (2019) discusses how this fusion not only promotes cultural heritage but also encourages the development of context-specific natural farming techniques that can address the unique challenges faced by different regions. This approach, as highlighted by Gupta and Patel (2017), can contribute to the resilience of agricultural systems in the face of climate change and other external pressures. The literature reviewed underscores the potential of natural farming as a sustainable and ecologically conscious agricultural approach. It promotes soil health, biodiversity, and economic viability, while also showcasing the potential for cultural preservation and adaptation to diverse agroecological contexts. As agricultural landscapes continue to evolve, natural farming remains a beacon of hope, offering a harmonious way forward that benefits both humanity and the planet.

Literature Review on Natural Farming

Ecological Benefits of Natural Farming: The literature reveals a consensus on the ecological benefits of natural farming practices. Studies by Smith et al. (2020) and Brown and Miller (2018) highlight that natural farming's emphasis on reduced chemical inputs and minimal soil disturbance contributes to improved soil health and microbial diversity. This in turn enhances nutrient cycling, fosters beneficial soil microorganisms, and promotes overall ecosystem stability.

Economic Viability and Environmental Impact: Evidence suggests that natural farming holds economic promise. Nguyen et al. (2019) assert that although initial yields may vary, the long-term benefits of improved soil health and decreased input costs position natural farming as a financially viable alternative. Furthermore, a shift away from synthetic fertilizers and pesticides, as emphasized by White et al. (2021), can result in reduced pollution and improved water quality.
"year": 2024,
"sha1": "ca5f9527c58d8f93449cf7af4ee536bb242e41f5",
"oa_license": "CCBYNCSA",
"oa_url": "https://ijmpr.org/index.php/IJMPR/article/download/217/140",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "a304dbf981e6b657710f7c9c3793ac8a08884463",
"s2fieldsofstudy": [
"Environmental Science",
"Economics"
],
"extfieldsofstudy": []
} |
Can Neutrophil-Lymphocyte Ratio Be a Useful Criterion for Neuroleptic Malignant Syndrome in the Absence of Leukocytosis?
Objective: Neuroleptic malignant syndrome (NMS) is a rare but severe side effect of antipsychotic medication. The neutrophil-lymphocyte ratio (NLR) is a simple marker used to measure systemic inflammation. Method: In this case report we explore the role of inflammation in the etiology of NMS. In our case of NMS, although there was no leukocytosis, the NLR was increased to systemic infection levels. Conclusion: We hypothesize that systemic inflammation may play a role in the development of NMS. If so, NLR could be a new marker of NMS that may provide more sensitive results than leukocyte levels.
Neuroleptic malignant syndrome (NMS) is a severe side effect of antipsychotic medication. Although the mechanisms underlying NMS are not fully understood, a hypodopaminergic state has been proposed as a potential triggering factor. The main symptoms of NMS are hyperthermia, muscle rigidity, autonomic imbalance, and changes in consciousness after initiation of antipsychotic treatment (1). Laboratory findings in NMS may include increased levels of creatinine phosphokinase (CPK), leukocytes, and liver enzymes. About 40% of cases demonstrate an increase in leukocyte levels (2). In this case report we aimed to discuss the role of inflammation in NMS and the usefulness of the neutrophil-to-lymphocyte ratio (NLR) as a potential criterion, even in the absence of leukocytosis.
Case Report
A 30-year-old woman with a 17-year history of bipolar affective disorder was transferred to an acute care psychiatry service with acute exacerbation of symptoms of a manic episode. She had no previous medical or substance abuse history. Before hospitalization, zuclopenthixol acuphase 50 mg IM, haloperidol 10 mg IM, biperiden 5 mg IM, and chlorpromazine 100 mg IM had been applied as needed because of affective elevation at another hospital. At admission, the patient was receiving clozapine 800 mg/day, amisulpride 800 mg/day, and chlorpromazine 200 mg/day. She was treated with haloperidol 20 mg/day IM and biperiden 10 mg/day IM. The doses of clozapine, chlorpromazine, and valproate were gradually reduced, and amisulpride was stopped. On the seventh day of hospitalization, the patient developed rigidity, tremor, and incontinence without fever. Haloperidol was stopped because of probable NMS, and electroconvulsive therapy (ECT) was initiated on the eighth day of hospitalization. On days 11 and 12 of hospitalization, haloperidol IM was used for acute agitation. Rigidity, confusion, diaphoresis, tremor, tachycardia, and incontinence occurred the following day. The patient's temperature was 36.3°C, heart rate was 110 per minute, respiratory rate was 15 to 20 per minute, blood pressure fluctuated between 90/70 and 110/70 mmHg, and oxygen saturation was 97%. Laboratory findings revealed increased levels of CPK (> 2000 IU/L; normal range = 20-200), aspartate aminotransferase (AST) (188 U/L; normal range = 5-45), alanine transaminase (ALT) (66 U/L; normal range = 5-40), lactate dehydrogenase (LDH) (581 IU/L; normal range = 60-200), C-reactive protein (CRP) (1.84 mg/dL; normal range = 0-0.5), and erythrocyte sedimentation rate (ESR) (44 mm/hour; normal range = 0-20). The leukocyte count (6.35 × 10³/µL; normal range = 4.1-11.2) was in the normal range. Neuroimaging, EEG (electroencephalogram), and chest x-ray imaging showed no abnormalities. Urinalysis and thyroid hormones were in the normal range. There was no focus of infection (pulmonary, urinary, etc.). On day 19, a confusional state developed, and on day 20 of the hospitalization, bromocriptine 5 mg/day was initiated with the diagnosis of NMS according to DSM-5 (Diagnostic and Statistical Manual of Mental Disorders). On the 29th day of admission, all symptoms of NMS had disappeared. Antipsychotic treatment was initiated with clozapine 12.5 mg/day because of agitation. The patient developed rigidity with cogwheel sign, dystonia, altered mental status, and autonomic dysregulation 6 days after clozapine initiation and was admitted to the intensive care unit on the 37th day of admission. After 6 days in the intensive care unit, she was transferred back to the psychiatry service. On the 43rd day of hospitalization, the patient's bromocriptine was increased to 10 mg/day to address rigidity and dystonia. On the 57th day, no symptoms of NMS were observed (the nearest NLR value was measured on the 62nd day). The patient's informed consent was obtained for the publication of this case presentation.
Discussion
Previous case reports suggest an association between a systemic inflammatory response and developing NMS (3,4). Increased levels of acute phase reactants such as α-1 antichymotrypsin and fibrinogen, elevated ESR, C-reactive protein, and IL-6 cytokine levels, and decreased negative acute phase reactants such as albumin and serum iron indicate that an inflammatory reaction occurs in NMS (3,4). Based on these findings, Anglin et al. (2010) speculate that there may be neuroimmunological involvement in these cases. The authors argue that NMS and the acute phase response share the findings of fever, tachycardia, diaphoresis, unstable blood pressure, tissue injury, altered consciousness, and leukocytosis (5). The NLR is a relatively new and simple marker used to measure systemic inflammation (6). In previous studies, mean NLR levels in neuropsychiatric illness were reported as 3.09 ± 1.9 (7) and 2.8 ± 1.67 (8) in bipolar disorder manic episodes, 2.8 ± 0.81 in euthymic episodes (7), and 2.6 ± 1.1 in schizophrenia (9). NLR is a relatively new marker for psychiatric disorders; thus, no cut-off level or disease-specific values have been identified. A recent study investigating NLR values in healthy adults has shown that NLR ranges from 0.78 to 3.53 (10). In another study, conducted by Gurol et al. (11), NLR levels were 4.19 ± 4.36 in a healthy group, 5.68 ± 8.99 in a group with local infection, 11.78 ± 14.04 with systemic infection, 13.16 ± 6.38 with sepsis, and 16.87 ± 9.55 with septic shock. The authors recommend a cut-off NLR value < 5 for the healthy group, with increasing NLR levels indicating different levels of infection: local infection (5 to 10), systemic infection (10 to 13), sepsis (13 to 15), and septic shock (≥ 15) (11). That study serves here as a benchmark for comparing and better understanding NLR findings. A recent study has suggested that NLR levels above 4 may be useful for diagnosing NMS (12). In our case, the patient's NLR level was above 9 at the time of NMS diagnosis, before initiation of NMS treatment. We did not observe fever or leukocytosis during hospitalization. Although leukocyte levels were in the normal range, NLR levels were nearly as high as those found at a systemic infection level. In addition, the NMS that developed after starting clozapine showed an NLR level > 5, which is within the cut-off range for local infection. Strikingly, in our case we observed a connection between NMS treatment and NLR levels. Leukocyte, neutrophil, and lymphocyte counts and NLR levels during hospitalization are presented in Table 1.
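As an illustration of how the cited cut-offs translate into practice, the sketch below computes an NLR from absolute counts and maps it onto the ranges reported by Gurol et al. (11); the example counts are invented for demonstration and are not the patient's values.

```python
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio from absolute counts in the same units."""
    return neutrophils / lymphocytes

def interpret_nlr(value: float) -> str:
    """Map an NLR onto the ranges reported by Gurol et al. (11)."""
    if value < 5:
        return "healthy range (< 5)"
    if value < 10:
        return "local infection range (5-10)"
    if value < 13:
        return "systemic infection range (10-13)"
    if value < 15:
        return "sepsis range (13-15)"
    return "septic shock range (>= 15)"

# Illustrative counts in 10^3/uL; the case's serial values are in Table 1.
print(interpret_nlr(nlr(4.5, 0.5)))   # NLR 9.0 -> local infection range
```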
Limitation
Methodologically, it is difficult to perform a clinical study of the relationship between inflammatory markers and NMS because of the rarity of cases and difficulties in storing blood specimens. A single case may be insufficient to support our hypothesis.
Conclusion
In our case, the NLR was as high as systemic infection levels even in the absence of leukocytosis. In our opinion, NLR could be a new criterion for NMS that would allow more sensitive findings than leukocyte levels alone. Further studies are needed to support our hypothesis. Specifically, we propose that NLR may be a useful choice for a prospective study to test the neuro-inflammation hypothesis of NMS.
"year": 2021,
"sha1": "0798f89932e14d728d961aa4eb1ad6b813f4177e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.18502/ijps.v16i3.6264",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0b65d0355be69c9fbb722a8cf9a885c571917d6b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The relationship of Asperger's syndrome to autism: a preliminary EEG coherence study
Background It has long been debated whether Asperger's Syndrome (ASP) should be considered part of the Autism Spectrum Disorders (ASD) or whether it constitutes a unique entity. The Diagnostic and Statistical Manual, fourth edition (DSM-IV) differentiated ASP from high functioning autism. However, the new DSM-5 umbrellas ASP within ASD, thus eliminating the ASP diagnosis. To date, no clear biomarkers have reliably distinguished ASP and ASD populations. This study uses EEG coherence, a measure of brain connectivity, to explore possible neurophysiological differences between ASP and ASD. Methods Voluminous coherence data derived from all possible electrode pairs and frequencies were previously reduced by principal components analysis (PCA) to produce a smaller number of unbiased, data-driven coherence factors. In a previous study, these factors significantly and reliably differentiated neurotypical controls from ASD subjects by discriminant function analysis (DFA). These previous DFA rules are now applied to an ASP population to determine whether ASP subjects classify as control or ASD subjects. Additionally, a new set of coherence-based DFA rules is used to determine whether ASP and ASD subjects can be differentiated from each other. Results Using prior EEG coherence-based DFA rules that successfully classified subjects as either controls or ASD, 96.2% of ASP subjects are classified as ASD. However, when ASP subjects are directly compared to ASD subjects using new DFA rules, 92.3% of ASP subjects are identified as separate from the ASD population. By contrast, five randomly selected subsamples of ASD subjects fail to reach significance when compared to the remaining ASD population. When represented by the discriminant variable, both the ASP and ASD populations are normally distributed. Conclusions Within a control-ASD dichotomy, an ASP population falls closer to ASD than controls. However, when compared directly with ASD, an ASP population is distinctly separate. The ASP population appears to constitute a neurophysiologically identifiable, normally distributed entity within the higher functioning tail of the ASD population distribution. These results must be replicated with a larger sample given their potentially immense clinical, emotional and financial implications for affected individuals, their families and their caregivers.
Background
Autism or Autism Spectrum Disorder (ASD) is one of the most common neurodevelopmental disorders, with an estimated incidence of 1 in 88 children [1]. According to the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV), a diagnosis of ASD requires the fulfillment of a minimum of six behavioral diagnostic criteria from the following three domains: at least two symptoms of impairment of social interaction, at least one symptom of impairment in communication, and at least one symptom of restricted repetitive and stereotyped patterns of behavior [2]. Moreover, ASD requires symptoms of delay or abnormal functioning with onset prior to age 3 years in at least one of the following three domains: social interaction, language as used in social communication, and symbolic or imaginative play.
In order to establish a diagnosis of Asperger's syndrome (ASP) [3-6], the DSM-IV requires, as for ASD, the fulfillment of at least two symptoms of impaired social interaction and at least one symptom of restricted, repetitive behavior. However, the ASP diagnosis, in contrast to the ASD diagnosis, does not require a symptom of impairment in communication, nor must any of the symptoms show an onset before age 3 years. According to the DSM-IV, 'Asperger's Disorder can be distinguished from Autistic Disorder by the lack of delay in language development. Asperger's Disorder is not diagnosed if criteria are met for Autistic Disorder' [2]. Data for the prevalence of ASP are not reliably available, owing to the use of slightly differing diagnostic criteria in the literature. For example, Mattila et al. [7] applied four different sets of criteria to the same group of 5,484 eight-year-old children and found prevalence rates varying from 1.6 to 2.9 per 1,000. Kopra et al. [8] similarly compared various diagnostic criteria and concluded that 'the poor agreement between these sets of diagnostic criteria compromises comparability of studies (of Asperger's syndrome)'.
The specificity of the DSM-IV diagnostic criteria and the classification of ASP as a separate entity have been reconsidered by the Neurodevelopmental Disorders Work Group, resulting in a redefinition of diagnostic boundaries. In the new DSM-5, ASP falls into ASD with essential equivalence to high functioning autism (HFA), and the 'Asperger's Syndrome' name has been dropped [9]. Although clearly intended as a reasonable nosological correction, it places children with severe autism, who have significantly impaired language and/or interaction capacities, under the same ASD umbrella as those who have milder forms, such as HFA and ASP, who lack social skills yet possess normal to high intelligence and typically vast knowledge, albeit often in narrow subject areas. Families fear that the loss of the specific Asperger's diagnosis, as is the case with DSM-5, may result in the loss of specially tailored, individualized and, importantly, reimbursable, appropriate services for their children [10-13]. Serious concerns have been raised regarding the DSM-IV to DSM-5 changes [14-19].
Although there are no agreed upon neuro-imaging criteria to diagnose ASP, there have been a number of studies that raise the potential for this possibility. In 2008, McAlonan et al. differentiated subjects with ASP and HFA on the basis of magnetic resonance imaging (MRI) differences in grey matter volumes [20], and in 2009 on the basis of differences in white matter volumes [21]. In 2011, Yu et al. differentiated ASP and 'autism' on the basis of grey matter volume: 'Whereas grey matter differences in people with Asperger's Syndrome compared with controls are sparser than those reported in studies of people with autism, the distribution and direction of differences in each category are distinctive' [22]. However, the regions delineated by Yu et al. do not coincide completely with the regions defined by McAlonan et al. [20].
Comparisons between older ASP and HFA subjects have demonstrated better language and potentially differing brain anatomy and/or function within the ASP population [23-27]. Although these findings suggest that initial group differences of early language development (required for HFA by definition [2]) persist to later ages, they do not demonstrate that ASP and HFA subjects can be reliably differentiated. The findings suggest that ASP and HFA could be physiologically different entities but they do not distinguish between this possibility and the alternative possibility that the group differences may simply reflect differing degrees of the same basic underlying brain pathophysiology.
A known disease may constitute the tail end of a population distribution function or it may constitute a second, separable distribution of its own. Defining ASP as a separate entity from ASD might be as simple as defining a reliable, critical point on the ASD population distribution's high functioning tail beyond which ASP is present and before which it is not. On the other hand, ASP may demonstrate a non-overlapping, separate distribution of its own. Recognition of complicated multimodal combinations of separate distributions is a complex statistical process [28,29].
The approach chosen in the current study was to determine whether there might be objective, unbiased, electrophysiological markers that can significantly distinguish ASP from ASD. For this determination EEG spectral coherence was chosen. EEG coherence represents the consistency of phase difference between two EEG signals (on a frequency by frequency basis) when compared over time and thus yields a measure of synchrony between the two EEG channels and an index of brain connectivity between the brain regions accessed by the chosen electrodes. High coherence represents a measure of strong connectivity and low coherence a measure of weak connectivity [30].
A great advantage of coherence is that it provides a quantifiable measure of between-region (electrode) connectivity that is essentially invisible to unaided visual inspection of raw EEG. There are at least three possible explanations for this phenomenon. First, coherence is calculated on a frequency by frequency (sine wave by sine wave) basis and EEG typically presents a complex and simultaneous mixture of many sine waves, each of a different frequency. Second, high coherence reflects a stable phase relationship (stable phase difference) between sine waves of the same frequency over time. The human eye is relatively poor in the visual assessment of phase shift stability over time, especially when many sine waves at multiple frequencies are simultaneously present as is the case in typical EEG. Furthermore, phase shift stability typically varies among differing spectral frequencies.
Third, reliable and replicable coherence measures typically require relatively long EEG segments, minutes in length. These long epochs further confound an electroencephalographer's ability to reliably estimate by unaided visual inspection the coherence between two channels of EEG. One of the best examples to graphically illustrate the difference between simple correlation and coherence in EEG was provided by Guevara and Corsi-Cabrera in 1996; however, the authors primarily utilized only simple sine wave segments for their explanatory illustrations [31].
Coherences among all possible electrodes and all frequencies produce thousands of variables. Principal components analysis (PCA) allows objective reduction of coherence data dimensionality to a much smaller number of statistically independent coherence factors, typically no more than 40, with minimal loss of information content [32][33][34][35][36]. Furthermore, PCA reduction of coherence data sets obviates the need to reduce data on the basis of a priori specified brain connectivity selections, and thus avoids the potential of investigator bias.
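A minimal sketch of this dimensionality-reduction step, using scikit-learn's PCA on a stand-in coherence matrix; the random data and the 100-subject shape are placeholders, and the original study used its own PCA implementation and subject samples.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
coherence = rng.random((100, 4416))   # stand-in: subjects x coherence variables

pca = PCA(n_components=40)            # reduce to 40 data-driven factors
factors = pca.fit_transform(coherence)  # subjects x 40 factor scores

print(factors.shape)                                  # (100, 40)
print(round(pca.explained_variance_ratio_.sum(), 3))  # variance retained
```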
In 2012, the authors demonstrated that a stable pattern of EEG spectral coherence factors separated ASD subjects from neurotypical control subjects [36]. For this demonstration the two extremes of the ASD spectrum had been excluded from the ASD sample studied, namely HFA and ASP on one hand, and global developmental delay on the other. Subjects with Pervasive Developmental Disorder not otherwise specified (PDD-nos) were retained in the ASD sample. The resulting analyses conclusively demonstrated highly significant, reliable, stable classification success of neurotypical controls versus subjects with ASD on the basis of 40 coherence factors [36].
The first aim in this study was to test how a new independent ASP sample would be classified using discriminant rules that were developed on the 40 PCA-based EEG coherence factors that had previously, successfully distinguished subjects with ASD from neurotypical controls [36]. The second aim was to explore whether new EEG coherence-based classification rules could be derived to separate the ASP from the ASD population.
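A schematic of the two aims in code, using scikit-learn's linear discriminant analysis as a stand-in for the study's DFA; the random factor scores, labels, and sample sizes below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 40))      # stand-in coherence factor scores
y = rng.integers(0, 2, size=200)    # 0 = control, 1 = ASD (placeholder labels)

# Aim 1: apply rules trained on control-vs-ASD data to a new ASP sample.
rules = LinearDiscriminantAnalysis().fit(X, y)
asp = rng.normal(size=(26, 40))     # 26 ASP subjects, as in this study
print(rules.predict(asp))           # which side of the boundary each falls on

# Aim 2: derive and validate new rules (ASD-vs-ASP labels would replace y);
# cross-validation guards against overfitting with many variables.
print(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
```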
Methods
All analyses were performed at the Boston Children's Hospital (BCH) Developmental Neurophysiology Laboratory (DNL) under the direction of the first author. This laboratory maintains a comprehensive database of several thousand patients and research volunteers including unprocessed (raw) EEG data in addition to referral information. Patients typically are referred to rule out epilepsy and/or sensory processing abnormalities by EEG and evoked potential study. Only EEG data are utilized and reported in this study.
Patients with autism spectrum disorders and with Asperger's syndrome
The goal of the current study was to select only those patients, ranging in age from 2 to 12 years, diagnosed by experienced clinicians as having ASD or ASP. Excluded were all subjects with co-morbid neurological diagnoses that might exert an independent and confounding impact upon EEG data.
The inclusion criteria for ASD and the ASP groups consisted of an age of 2 to 12 years and a disorder diagnosis, as determined by an independent child neurologist, psychiatrist or psychologist specializing in childhood developmental disabilities at BCH or at one of several other affiliated Harvard teaching hospitals. Diagnoses relied upon DSM-IV [2], Autism Diagnostic Interview, revised (ADI-R) [37] and/or Autism Diagnostic Observation Schedule (ADOS) [38,39] criteria, aided by clinical history and expert team evaluation. All clinical diagnoses were made or reconfirmed within approximately one month of EEG study, thereby obviating diagnostic variation related to time from diagnosis to EEG assessment, a recently recognized important issue [40,41].
Exclusion criteria for both ASD and ASP were: (1) comorbid neurologic syndromes that may present with autistic features (for example, Rett's, Angelman's and fragile X syndromes and also tuberous sclerosis and mitochondrial disorders); (2) clinical seizure disorders or EEG reports suggestive of an active seizure disorder or epileptic encephalopathy such as the Landau-Kleffner syndrome (patients with occasional EEG spikes were not excluded); (3) a primary diagnosis of global developmental delay or developmental dysphasia; (4) expressed doubt by the referring clinician as to the clinical diagnosis; (5) taking medication(s) at the time of the study; (6) other concurrent neurological disease processes that might induce EEG alteration (for example, hydrocephalus, hemiparesis or known syndromes affecting brain development); and (7) significant primary sensory disorders, for example, blindness and/or deafness.
A total of 430 subjects with ASD met the above study criteria and were designated as the study's ASD sample. For further detailed sample description see Duffy and Als [36]. A total of 26 patients met the above study criteria for ASP and were designated as the study's ASP sample.
Healthy controls
From among normal (neurotypical) children recruited and studied for developmental research projects, a comparison group of children was selected as normally functioning, while avoiding creation of an exclusively 'super-normal' group. For example, subjects whose sole relevant history was prematurity or low-weight birth, and who did not require medical treatment after discharge from the birth hospital (Harvard-affiliated hospitals), were included.
Necessary inclusion criteria were age between 2 and 12 years corrected for prematurity (as indicated), living at home and identified as functioning within the normal range on standardized developmental and/or neuropsychological assessments performed in the course of the respective research study.
Exclusion criteria were as follows: (1) Diagnosed neurologic or psychiatric illness or disorder or expressed suspicion of such, for example, global developmental delay, developmental dysphasia, attention deficit disorder and attention deficit with hyperactivity disorder; (2) abnormal neurological examination as identified during the research study; (3) clinical seizure disorder or EEG report suggestive of an active seizure disorder or epileptic encephalopathy (individuals with rare EEG spikes again were not excluded); (4) noted by the research psychologist or neurologist to present with ASD or ASP features; (5) newborn period diagnosis of intraventricular hemorrhage, retinopathy of prematurity, hydrocephalus or cerebral palsy, or other significant conditions likely influencing EEG data; and/or (6) taking medication(s) at time of EEG study.
A total of 554 patients met the criteria for neurotypical controls and were designated as the study's control sample. For further description of the control sample see Duffy and Als [36].
Institutional review board approvals
All neurotypical control subjects and their families gave informed consent, and assent as age appropriate, in accordance with protocols approved by the Institutional Review Board, Office of Clinical Investigation of BCH, in full compliance with the Helsinki Declaration. Subjects with ASD or ASP, who had been referred clinically, were studied under a separate BCH Institutional Review Board protocol, also in full compliance with the Helsinki Declaration, which solely required de-identification of all personal information related to the collected data without requirement of informed consent.
Measurements and data analysis
EEG data acquisition
Registered EEG technologists, naïve to the study's goals, and specifically trained and skilled in working with children within the study's age group and diagnostic range, obtained all EEG data for the study from 24 gold-cup scalp electrodes applied with collodion after measurement: FP1, FP2, F7, F3, FZ, F4, F8, T7, C3, CZ, C4, T8, P7, P3, PZ, P4, P8, O1, OZ, O2, FT9, FT10, TP9, TP10 (see Figure 1). EEG data were gathered in the awake and alert state assuring that a minimum of eight minutes of waking EEG was collected. Data were primarily gathered with Grass™ EEG amplifiers with 1 to 100 Hz band-pass filtering and a 256 Hz sampling rate (Grass Technologies Astro-Med, West Warwick, RI, USA). One other amplifier type was utilized for five patients with ASD (Bio-logic™; Bio-logic Technologies, San Carlos, CA, USA; 250 Hz sampling rate, 1 to 100 Hz band-pass), and one other amplifier type was utilized for 11 control subjects (Neuroscan™; Compumedics Neuroscan, Charlotte, NC, USA; 500 Hz sampling rate, 0.1 to 100 Hz band-pass). Data from these two amplifiers, sampled at other than 256 Hz, were interpolated to the rate of 256 Hz by the BESA 3.5™ software package (BESA GmbH, Gräfelfing, Germany). As the band-pass filter characteristics differed among the three EEG machines, frequency response sweeps were performed on all amplifier types to permit modification of data recorded to be equivalent across amplifiers. This was accomplished by utilizing special software developed in-house by the first author using forward and reverse Fourier transforms [42].
Measurement issues
EEG studies are confronted with two major methodological problems. First is the management of the abundant artifacts, such as eye movement, eye blink and muscle activity, observed in young and behaviorally difficult to manage children. It has been well established that even EEGs that appear clean by visual inspection may contain significant artifacts [43,44]. Moreover, as shown in schizophrenia EEG research, certain artifacts may be group specific [45]. Second is capitalization upon chance, that is, application of statistical tests to too many variables and subsequent reports of chance findings in support of an experimental hypothesis [43,46]. Methods discussed below were designed to specifically address these two common problems.
Artifact management
As previously outlined in greater detail [36], the following steps were instituted for artifact management: (1) EEG segments containing obvious movement artifact, electrode artifact, eye blink storms, drowsiness, epileptiform discharges and/or bursts of muscle activity were marked for removal from subsequent analyses by visual inspection. (2) Data were subsequently filtered below 50 Hz with an additional 60 Hz mains filter. (3) Remaining lower amplitude eye blink was removed by utilizing the source component technique [47,48], as implemented in the BESA software package. These combined techniques resulted in EEG data that appeared largely artifact free, with rare exceptions of low-level temporal muscle fast activity artifact and persisting frontal and anterior temporal slow eye movements, which remain, nonetheless, capable of contaminating subsequent analyses. (4) A regression analysis approach [49] was employed to remove these potential remaining contaminants from the subsequently created EEG coherence data. Representative frontal slow EEG spectral activity, reflecting residual eye blink, and representative frontal-temporal fast EEG spectral activity, reflecting residual muscle artifact, were used as independent variables within multiple regression analyses in which the coherence variables were treated as dependent variables. The residuals of the dependent variables, now uncorrelated with the chosen independent artifact variables, were used for the subsequent analyses.
Data reduction -calculation of spectral coherence variables
Approximately 8 to 20 minutes of awake state, artifact free, EEG data per subject were transformed by use of BESA software, to the scalp Laplacian or current source density (CSD) estimates for surface EEG studies.
The CSD technique was employed as it provides reference-independent data that are primarily sensitive to underlying cortex and relatively insensitive to deep/remote EEG sources, and minimizes the effect of volume conduction on coherence estimates by emphasizing sources at smaller spatial scales than unprocessed potentials. This approach obviates coherence contamination from reference electrodes and minimizes contaminating effects from volume conduction [30,50]. Spectral coherence was calculated using a Nicolet™ software package (Nicolet Biomedical Inc., Madison, WI, USA) according to the conventions recommended by van Drongelen [51] (pages 143-144, equations 8.40 and 8.44). Coherency [52] is the ratio of the cross-spectrum to the square root of the product of the two autospectra and is a complex-valued quantity. Coherence is the squared modulus of coherency, taking on a value between 0 and 1. In practice, coherence is typically estimated by averaging over several epochs or frequency bands [51]. A series of two-second epochs was utilized over the total available EEG segments. Spectral coherence utilizing 24 channels and sixteen 2-Hz-wide spectral bands from 1 to 32 Hz yields 4,416 unique coherence variables per subject (276 electrode pairs × 16 bands), purged of residual eye movement and/or muscle artifact by regression as explained above. The data processing described above was used in the current as well as in our prior study of ASD [36].
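A minimal stand-in for the Nicolet coherence computation, using SciPy's Welch-style magnitude-squared coherence over 2-second epochs and then collapsing to sixteen 2-Hz bands (the band-edge convention here is one plausible reading of "1 to 32 Hz"); the two channels are synthetic, not real CSD data.

```python
import numpy as np
from scipy.signal import coherence

fs = 256                                     # Hz, the study's sampling rate
rng = np.random.default_rng(1)
shared = rng.normal(size=fs * 60)            # common drive -> coherent part
x = shared + rng.normal(size=shared.size)    # stand-ins for two CSD channels
y = shared + rng.normal(size=shared.size)

# Magnitude-squared coherence from 2-second epochs (nperseg = 2 * fs).
f, cxy = coherence(x, y, fs=fs, nperseg=2 * fs)

# Collapse to sixteen 2-Hz-wide bands spanning roughly 1-32 Hz.
band_edges = np.arange(1, 34, 2)
band_coh = [cxy[(f >= lo) & (f < hi)].mean()
            for lo, hi in zip(band_edges[:-1], band_edges[1:])]
print(np.round(band_coh, 2))
```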
Creation of 40 coherence factors
Forty coherence factors had been created, prior to this study, by applying PCA with Varimax rotation to the 4,416 coherence variables per individual of the independent study population consisting of the combined neurotypical controls and subjects with ASD [36]. The 40 factors described over 50% of the total variance within that combined population. These 40 coherence factors were computed in the current study for each individual of the new sample of 26 subjects with ASP. The inherently unbiased data reduction by PCA eliminated capitalization on chance and investigator selection bias.
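A sketch of the held-out scoring logic: factors are fitted on the combined control + ASD population only, and the new ASP subjects are scored with those fixed loadings. Plain PCA is used here; the study's additional Varimax rotation is omitted, and all data are random placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X_train = rng.normal(size=(984, 4416))   # stand-in: 554 controls + 430 ASD
X_asp = rng.normal(size=(26, 4416))      # stand-in: the 26 new ASP subjects

# Reduce the 4,416 coherence variables to 40 factors on the training
# population only, then score the held-out ASP subjects with the SAME
# loadings -- the step that keeps the new sample from influencing the
# factor definitions. (Varimax rotation omitted for brevity.)
pca = PCA(n_components=40).fit(X_train)
factors_train = pca.transform(X_train)
factors_asp = pca.transform(X_asp)       # 26 x 40 factor scores
print(pca.explained_variance_ratio_.sum())
```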
Data analysis
The BMDP2007™ statistical package (Statistical Solutions, Stonehill Corporate Center, Saugus, MA, USA) [53] was utilized for all standard statistical analyses with the exception of PCA (see above and [36]).
Discrimination of groups by EEG spectral coherence data
Program 7M was used for two-group discriminant function analysis (DFA) [54][55][56]. Program 7M produces a new canonical variable, the discriminant function, which maximally separates two groups based on a weighted combination of entered variables. DFA defines the significance of a group separation, summarizes the classification of each participant, and provides an approach for the prospective classification of individuals not involved in discriminant rule generation or for classification of a new population. The analysis reports the statistical significance of group separation by Wilks' lambda with Rao's approximation. To estimate prospective classification success, the jackknifing technique, also referred to as the leave-one-out process, was used [57,58]. By this method, a discriminant function is formed on all individuals but one. The left-out individual is subsequently classified. This initial left-out individual is then folded back into the group (hence 'jackknifing'), and another individual is left out. The process is repeated until each individual has been left out and classified. The measure of classification success is then based upon a tally of the correct classifications of the left-out individuals.
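A compact sketch of the jackknifed (leave-one-out) classification tally using a linear discriminant, with synthetic factor scores standing in for the BMDP 7M inputs.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1, (60, 40)),     # stand-in group A factors
               rng.normal(0.5, 1, (60, 40))])    # stand-in group B factors
y = np.array([0] * 60 + [1] * 60)

# Jackknifed classification: refit the discriminant rule with one
# subject held out, classify that subject, and repeat for everyone.
hits = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    lda = LinearDiscriminantAnalysis().fit(X[train_idx], y[train_idx])
    hits += int(lda.predict(X[test_idx])[0] == y[test_idx][0])
print(f"jackknifed classification success: {hits / len(y):.1%}")
```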
Assessment of population distribution
The samples' distribution characteristics were described by Program 2D. It incorporates the standard Shapiro-Wilk or W-test of normality for large samples, considered to be an objective and powerful test of normality [59,60]. It also calculates skewness, a measure of asymmetry with a value of zero for true symmetry, together with its standard error, reported as the ratio value/SE. Ratios above +2.0 indicate skew to the right and below −2.0 skew to the left. In addition, Program 2D calculates kurtosis, a measure of long-tailedness; the tail-length value of a true normal distribution is 0.0. If the kurtosis value/SE is above +2.0, the tails are longer than for a normal distribution, and if it is below −2.0, the tails are shorter than for a true normal distribution.
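The same summary statistics can be sketched with SciPy; the value/SE ratios below use the common large-sample standard-error approximations √(6/n) and √(24/n), which may differ slightly from BMDP's exact formulas.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
canonical = rng.normal(size=430)          # stand-in discriminant scores

w, p = stats.shapiro(canonical)           # Shapiro-Wilk W-test
skew = stats.skew(canonical)
skew_se = np.sqrt(6.0 / canonical.size)   # approximate SE of skewness
kurt = stats.kurtosis(canonical)          # excess kurtosis, 0 for a normal
kurt_se = np.sqrt(24.0 / canonical.size)  # approximate SE of kurtosis

print(f"W = {w:.4f}, P = {p:.4f}")
print(f"skewness value/SE = {skew / skew_se:+.2f}")   # |>2| flags asymmetry
print(f"kurtosis value/SE = {kurt / kurt_se:+.2f}")   # |>2| flags tail length
```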
Muratov and Gnedin recently described two relatively new techniques that search for bimodality within a given population distribution [29]. Gaussian mixture modeling determines whether the population deviates statistically from unimodality. It also searches for all potential underlying bimodal populations and determines the significance of the best possible bimodal solution. These authors also described the Dip test [61], which statistically compares the actual population distribution with the best possible unimodal distribution to look for flat regions or dips between peaks as would be found in bimodally distributed populations.
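A hedged sketch of the unimodality check: compare one- and two-component Gaussian mixture fits (e.g. by BIC) on a synthetic combined sample; Hartigan's Dip test itself lives in the third-party diptest package rather than in SciPy.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
combined = np.concatenate([rng.normal(0, 1, 430),    # stand-in ASD scores
                           rng.normal(-1, 1, 26)])   # stand-in ASP scores
x = combined.reshape(-1, 1)

# Compare the best one- and two-Gaussian fits; a clearly better
# two-component model (e.g. lower BIC) argues for bimodality.
g1 = GaussianMixture(1, random_state=0).fit(x)
g2 = GaussianMixture(2, random_state=0).fit(x)
print(f"BIC 1-comp {g1.bic(x):.1f} vs 2-comp {g2.bic(x):.1f}")
print("2-comp means:", g2.means_.ravel())

# The Dip test is available in the third-party `diptest` package:
#   import diptest; dip, pval = diptest.diptest(combined)
```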
Multiple regression program
Program 6R facilitates the multivariate prediction of a single dependent variable on the basis of a set of selected independent predictor variables. The program calculates a canonical variable formed from a rule-based linear combination of the independent variables, which predicts the dependent variable. Program 6R was used for prediction of coherence measures from multiple EEG spectral measures sensitive to known EEG artifacts (for example, temporal muscle fast beta and frontal slow delta eye movement). The fraction of a coherence measure that was predicted by artifact was removed, and the 'residual' coherence measures were subsequently utilized as variables, now uncorrelated with any known artifact signal.
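The residualization step reduces to ordinary least squares: predict each coherence variable from the artifact regressors and keep only the residual. A minimal sketch with synthetic data:

```python
import numpy as np

def residualize(y, artifacts):
    """Return y minus its least-squares prediction from the artifact
    regressors (plus an intercept): the part uncorrelated with them."""
    X = np.column_stack([np.ones(len(y)), artifacts])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(6)
muscle = rng.normal(size=200)                 # stand-in frontal-temporal beta
eye = rng.normal(size=200)                    # stand-in frontal slow delta
coh = 0.4 * muscle + 0.2 * eye + rng.normal(size=200)  # contaminated measure

clean = residualize(coh, np.column_stack([muscle, eye]))
print(np.corrcoef(clean, muscle)[0, 1])       # ~0 after residualization
```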
Asperger's syndrome classification as control or autism spectrum disorders
The 26 new subjects with ASP had a mean age of 7.07 years, with a range from 2.79 to 11.39 years, and consisted of 18 males and 8 females (male to female ratio of 2.25:1), comparable in age and gender distribution to the previously studied neurotypical control and ASD groups [36]. The 26 subjects with ASP and the populations of 554 controls and 430 subjects with ASD were submitted to a two-group DFA with the 40 coherence factors as input variables. The ASP subjects were designated to be passively classified on the basis of rules generated to differentially classify the control and ASD groups. As shown in Table 1, 96.2% of the ASP group (25 out of 26) were classified as belonging within the ASD group, and just 3.8% (1 out of 26) were classified as belonging within the control group. Factor 15 was the highest loading variable, that is, the first coherence factor chosen, on the discriminant function. Thus, within a neurotypical control versus ASD dichotomy, ASP subjects were securely classified as belonging to the ASD population.
Asperger's syndrome classification as within or separate from autism spectrum disorders
An additional two-group DFA was performed comparing the new ASP (n = 26) population with the ASD population (n = 430), again with the 40 coherence factors as input variables. The overall classification, as Table 2 shows, was highly significant (F = 6.05; degrees of freedom = 16, 439; P ≤ 0.0001). Jackknifing techniques correctly classified 92.3% of the patients with ASP (24 out of 26) and 84.4% of the patients with ASD (363 out of 430). Thus the coherence factors separated the ASP population from the ASD population with excellent classification success. As Table 2 and Figure 2 illustrate, Factor 15 again was the first coherence factor chosen for the ASD-ASP discrimination. Factor 15 similarly had been the first factor chosen for most of the control versus ASD population discriminations in the prior study [36]. This factor indicates a reduced coherence between the left anterior and posterior frontal-temporal regions, and to a lesser degree between the right anterior temporal-frontal regions, for the ASP group compared with the ASD group. In contrast, the loading of the next factor chosen, Factor 3, demonstrated enhanced coherence between the left mid temporal region and the left central, parietal and occipital regions for the ASP group compared with the ASD group. The loadings of the next two factors selected, Factor 33 and Factor 40, demonstrated reduced right temporal-frontal coherence and reduced occipital to bilateral parietal coherence for the ASP compared with the ASD group. These first four were the most important factors; their coherence loading patterns are depicted in Figure 2. Twelve additional factor designations are also provided; their loading patterns are depicted and discussed in a previous publication [36].
Five subsamples, each consisting of 26 subjects with ASD, were randomly selected from the larger ASD population. The DFA process was repeated to determine whether these randomly selected subsets of subjects with ASD could be classified as separate from the remaining ASD population. As Table 3 shows, jackknifed classification success for the five random sets averaged just 48.5%, that is, below the chance level of 50%. None of the five DFAs demonstrated a significant Wilks' lambda. Note that the list of chosen factors did not include Factor 15, which had been selected first in the current and prior analyses. Note, also, that there is a lack of consistency in factor selection among the five analyses. Thus, random samples of 26 subjects with ASD were not significantly and reliably separable by discriminant analysis from the remaining ASD population.
Asperger's syndrome population, tail of the autism spectrum disorders distribution curve or separate population?
The distribution characteristics of the canonical variable defined by the DFA separating the ASP from the ASD groups were described for each sample separately. The ASD population distribution parameters were as follows: normality statistic, W = 0.9881, P = 0.8375; skewness statistic, W = 0.03, value/SE = −0.0265; kurtosis statistic, W = 1.35, value/SE = 5.728. This indicated that the ASD sample was within the limits of a normal distribution, was symmetrical, and had somewhat longer tails than the typical normal distribution, not unusual for a clinical population. All five randomly selected subsets of the ASD population also demonstrated normal distributions, as anticipated by statistical theory [62].
The new sample of 26 subjects with ASP showed distribution parameters as follows: normality statistic, W = 0.9606, P = 0.4222; skewness statistic, W = −0.61, value/SE = −1.281; kurtosis statistic, W = 0.33, value/SE = 0.347. This indicated that the ASP sample distribution was also within the limits of a normal population, was symmetrical, and had tails that conformed to expected lengths (see Figure 3); it was therefore characterized as Gaussian normal.
When the ASD and ASP populations were combined and displayed (Figure 3), the ASP population appeared as a small Gaussian distribution in the left end of the ASD population. However, the Gaussian mixture modeling process indicated that the best bimodal means, nevertheless, were close and did not differ statistically. The Dip test similarly indicated that the probability for a deviation from unimodality was not significant.
Discussion
The goal of this study was to explore the relationship between a sample of subjects clinically defined as having ASP, and a population of previously well-studied neurotypical controls and subjects with ASD. The dependent variables of interest, detailed in a prior study [36], were 40 EEG coherence factors derived from systematically de-artifacted EEG data.
Specific goals and findings
The study's first goal was to determine how a previously defined and statistically validated discriminant function, developed to classify individuals as belonging to a control or an ASD population, would classify subjects with ASP, whose data had not influenced the derivation of the discriminant function. Results (Table 1) showed that the control versus ASD discriminant function classified 25 of 26 patients with ASP (96.2%) as belonging to the ASD sample. This indicates that subjects with ASP are neurophysiologically closer to the ASD population than to the neurotypical control population.
The study's second goal was to determine if the 26 subjects with ASP were, nonetheless, systematically separable from the larger population of 430 subjects with ASD. Using DFA, the subjects with ASP were indeed significantly separated (P ≤ 0.0001) from the ASD population; 92.3% (24 out of 26) of those with ASP were classified as ASP rather than as ASD. These results show that subjects with ASP, although associated with the broader autism spectrum population, manifested physiological differences in EEG connectivity (as measured by the coherence factors) sufficient to distinguish them from the subjects with ASD. To test whether this subsample separation was a random result, that is, whether a randomly chosen subsample of individuals could also be classified as a distinct subgroup, five randomly selected sets of 26 subjects with ASD were also compared by DFA to the remaining ASD population. The average classification success was 48.5%, that is, less than chance; the highest classification success reached was 53.8%. These results suggest that the ASP subgroup discrimination from the larger ASD group was not the result of sampling artifact but was due to true group differences, because the findings held for the ASP separation but not for the ASD subsample discrimination attempts.
The pattern of coherence difference, as shown by the loading patterns depicted in Figure 2 (Factor 15), demonstrated that the ASP population showed even more reduction of left lateral anterior-posterior coherence than the ASD group. This was an unexpected finding, as Factor 15 was postulated to be a language-related factor based upon its similarity to the spatial location of the Arcuate Fasciculus [36], and subjects with ASP typically have better language function than do those with ASD. The solution to this unanticipated finding became clearer through inspection of the Factor 3 coherence loadings, which showed that the ASP group demonstrated markedly increased left mid temporal to central parietal-occipital coherence. It is speculated that Factor 3's broadly increased left temporal connectivity may partially compensate for the language deficiency suggested by Factor 15, potentially facilitating acquisition of language skill in ASP without significant developmental delay. It is also proposed that the postulated compensation may not completely facilitate all aspects of normal language development, and may result in the several, readily identifiable, higher level differences of language use observed in subjects with ASP. Examples include excessive pedantic formality, verbosity, literal interpretation devoid of nuance and prosodic deficiency, to name a few [63]. The final two factors chosen, Factors 33 and 40, show a pattern of reduced coherence loadings in the ASP group that may correspond to differences in visual-spatial functioning and right hemispheric characteristics that have been described as part of the lack of social nuance and the special kind of 'oblivious to context' personality observed in individuals with ASP [64,65].
The study's third goal was to determine whether the subjects with ASP represent a tail of the ASD population distribution or a distinct population. Inclusion of the ASP in the ASD population (Figure 3) did not result in a statistically significant bimodal distribution, as would be seen if the ASD and ASP populations represented completely differing clinical entities. However, the asymmetrically high ASD/ASP population ratio of 16.5:1 was above the maximally tested ratio of 10:1 for the Gaussian mixture modeling and Dip tests employed [29]; typical ratios are 3 or 4 to 1. The small size of the tested ASP population limits definitive determination of whether ASP is an entity separate from ASD. Study of a larger ASP population is necessary to assess this important question in a more conclusive manner. Nevertheless, it is striking that the relatively small sample of 26 randomly referred subjects with ASP manifested a normal Gaussian distribution, as opposed to the asymmetrical distribution that might be expected if the sample simply constituted subjects non-randomly selected from the high functioning end of the ASD population curve. At this point, the current study results are consistent with ASP forming one end of the ASD population. This is similar to the demonstration by Shaywitz et al. that reading disability represents the 'low end tail' of the reading ability curve and not a distinctly separate population [66].
Additional questions concern the portion of the ASD population distribution that overlapped with the ASP population distribution (Figure 3), including the 69 individual misclassifications within the ASD versus ASP discriminant analysis ( Table 2). The population overlap may represent clinical misdiagnoses or constitute noise within the statistical classification process. Alternatively, the population overlap may indicate that HFA and ASP are the same physiological entity. Indeed, it has been clinically observed that the diagnosis of ASP by DSM-IV criteria [2] may be obscured by poor reliability in a family's recollection of early language delay or by the belief of some clinicians that the diagnosis of ASP should be made on the basis of the patient's current behavioral profile without weighting the presence or absence of early language delay. ASP and HFA are often spoken of, especially by neurologists, as a single entity or at least closely related entities.
The limitation of the small ASP sample size is the main drawback of the current study. A larger prospective study must be conducted to address whether ASP significantly differs neurophysiologically from ASD, and whether ASP and HFA constitute a single population or separable populations.
Although the findings above in many ways agree with the DSM-5 [9] placement of ASP within the broad autistic spectrum, they also demonstrate that patients with ASP can be physiologically distinguished from those with ASD. Recognition of ASP as a separate entity is important from the patients' perspectives of obtaining appropriate medical and educational services as well as of establishing a personal identity. As an example of the latter, the well-read author with Asperger's Syndrome, J E Robinson [67], reported in a televised interview that it 'was life changing … ' to discover as an adult that he had a known, named syndrome and that ' … there were so many people like me.'
Conclusion
A diagnostic classifier based upon EEG spectral coherence data, previously reported to accurately classify controls and ASD subjects [36], has identified ASP subjects as falling within the ASD population. Thus, there is justification to consider Asperger's Syndrome as broadly belonging within the Autism Spectrum Disorders. However, there is also evidence demonstrating that ASP subjects can be physiologically distinguished from ASD subjects. Just as dyslexia is now recognized as the low end tail of the reading ability distribution curve [66], so Asperger's Syndrome may be similarly and usefully defined as a distinct entity within the higher functioning tail of the autism distribution curve. Larger samples are required to determine whether ASP subjects should be considered as an entity physiologically distinct from the ASD population or whether they form an identifiable population within the higher-functioning tail of ASD.
EEG spectral coherence data, as presented, provide easily obtained, unbiased, quantitative, and replicable measures of brain connectivity differences relevant to these issues.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
Study concept and design, and interpretation of the results was performed by all authors. FHD and AS selected clinical patients to be illustrated. FHD and HA selected specific neurotypical controls. FHD was responsible for the acquisition and preparation of neurophysiologic data. FHD and GBM performed the statistical analyses. FHD had full access to all the data in the study and takes responsibility for all aspects of the study including integrity of data accuracy and data analysis. All authors collaborated in writing and editing the paper and approved the final manuscript.
Authors' information
FHD: Physician, child neurologist, clinical electroencephalographer and neurophysiologist with undergraduate degrees in electrical engineering and mathematics. Current research interests are in neurodevelopmental disorders and epilepsy, including the development and utilization of specialized analytic techniques to support related investigations. AS: Cognitive neuroscientist with specialized interests in the neurophysiological identification of neurodevelopmental disorders, particularly developmental language disorders. GBM: Neuropsychologist and statistician with interests in pediatric neurodevelopment. HA: Developmental and clinical psychologist with research interests in newborn, infant and child neurodevelopment including generation of early predictors of later outcome from behavioral, magnetic resonance imaging and neurophysiologic data. | 2017-07-13T02:13:08.861Z | 2013-07-31T00:00:00.000 | {
"year": 2013,
"sha1": "2174c5c502c3ed689f882baf974dc1ba9f3fd1de",
"oa_license": "CCBY",
"oa_url": "https://bmcmedicine.biomedcentral.com/track/pdf/10.1186/1741-7015-11-175",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0838dd84f46e1076407b2285941eff5c68dcd5a1",
"s2fieldsofstudy": [
"Psychology",
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118567548 | pes2o/s2orc | v3-fos-license | Non-local Plasma Spectrum of Graphene Interacting with a Thick Conductor
Self-consistent field theory is used to obtain the non-local plasmon dispersion relation of monolayer graphene which is Coulomb-coupled to a thick conductor. We calculate numerically the undamped plasmon excitation spectrum for arbitrary wave number. For gapped graphene, both the low-frequency (acoustic) and high frequency (surface) plasmons may lie within an undamped opening in the particle-hole region. Furthermore, we obtain plasmon excitations in a region of frequency-wave vector space which do not exist for free-standing gapped graphene.
I. INTRODUCTION
Recent research on plasmon excitations 1-4 has covered fundamental aspects such as nonlocality 5 , quantum effects in nanoscale structures including fullerene 6-8 , graphene 9,10 , carbon nanotubes 11,12 , silicene 13,14 and metallic dimers 15 , surface plasmon lasing 16 , plasmon-electron interaction 17 and the potential role played by plasmon excitations in electronic sensors 18,19 and radiation degradation of electronic and optoelectronic devices 20 . The surge in activity to understand and discover novel plasmonic materials is stimulated by possible applications such as light concentration for solar energy 21 , devices for telecommunications 22 , and near-field instrumentation 23 . Investigation of the damped terahertz plasmons in graphene, interacting with the surface plasmons of a heavily doped substrate with a large scattering rate, was addressed in Ref. 24 . The authors demonstrated that the field spread of the graphene plasmons into the substrate is suppressed.
In view of the stated importance of achieving a detailed understanding of plasma excitations, we devote this paper to a specific area which has not been adequately covered so far in the literature. It concerns plasmon excitations in monolayer graphene. There are several papers dealing with calculations of the dispersion relation for monolayer graphene that is doped 9,25-27 as well as pristine graphene whose collective charge density oscillations are driven by temperature 28 . The work on gapped graphene 10 was partially motivated by the observation that when monolayer graphene is on a substrate such as boron nitride, an energy gap between the valence and conduction bands is produced, yielding a plasmon and single-particle excitation spectrum which can be drastically different from that of gapless monolayer graphene. In Refs. 9 and 10 , a detailed calculation of the undamped plasmon excitations was carried out for all wavelengths. Although computationally challenging, these calculations proved useful since our goal is to obtain a full understanding of the response properties of nanoscale structures to external probes. In a recent paper 29 , it was demonstrated that the plasmon excitations in graphene have a linear dispersion rather than a square root dependence on the wave vector. This result came as a surprise because theoretical calculations for free-standing graphene clearly do not yield a linear dependence in the long wavelength limit. As a matter of fact, this linear dependence of plasmon frequency on wave vector was initially attributed to local field corrections to the random-phase approximation. Horing 30 showed that when graphene is Coulomb-coupled to a conductor, the surface plasmon causes the low-frequency π-plasmon to have a linear dispersion. In this paper, we calculate the full dispersion relation for undamped plasmons in a hybrid monolayer graphene-conductor structure. We exploit our simulations to consider how the plasmon dispersion is affected when there is an energy gap between the valence and conduction bands, thereby generalizing the results in 10 to the case in which a surface is assumed to play a role.
The longitudinal excitation spectra of allowable modes will be determined from a knowledge of the frequency-dependent non-local dielectric function ε(r, r'; ω) of the composite system, which depends on the position coordinates r, r' and frequency ω. Alternatively, the normal modes correspond to the resonances of the inverse dielectric function K(r, r'; ω) satisfying ∫ dr'' K(r, r''; ω) ε(r'', r'; ω) = δ(r − r'). The significance of K(r, r'; ω) is that it embodies many-body effects 31,32 through screening by the medium of an external potential U(r'; ω) to produce an effective potential V(r; ω) = ∫ dr' K(r, r'; ω) U(r'; ω). In Sec. II, we briefly review the formalism for calculating the inverse dielectric function for a 2D layer interacting with a semi-infinite conductor. Section III is devoted to our numerical results for the dispersion relations at arbitrary wavelength for this hybrid structure. We show explicitly how the gap for monolayer graphene affects both the dispersion relation for the surface plasmon and the low-frequency acoustic mode. Specifically, we demonstrate that the low-frequency plasmon branch may exist in a region of frequency-wave vector space that was not obtained for free-standing gapped graphene. We conclude with a summary of our results in Sec. IV.
II. GENERAL FORMULATION OF THE PROBLEM
In this work, we consider a composite nano-scale system consisting of a 2D layer separated from a thick dielectric material. The 2D layer may be monolayer graphene (or a 2DEG such as a semiconductor inversion layer or HEMT (high electron mobility transistor)). The 2D graphene layer may have a gap, thereby broadening the applicability of the composite system model, which also incorporates a separation layer and a semi-infinite plasma, as depicted in Fig. 1. The excitation spectra of allowable modes will be determined from a knowledge of the non-local dielectric function ε(r, r'; ω), which depends on the position coordinates r, r' and frequency ω, or of its inverse K(r, r'; ω) satisfying ∫ dr'' K(r, r''; ω) ε(r'', r'; ω) = δ(r − r'). The self-consistent field structure for K(r, r'; ω) is determined using the technique of Ref. [30]. In operator notation, the composite dielectric function ε̂ and its inverse, K̂ = ε̂⁻¹, for the 2D layer and semi-infinite substrate are obtained by adding their polarizabilities α̂_2D and α̂_SI, respectively (Eq. (1)). Multiplication of Eq. (1) from the right by K̂ and from the left by K̂_SI yields the basic random-phase approximation (RPA) integral equation for K̂ (Eq. (2)). Here K̂_SI is the inverse dielectric function for the semi-infinite substrate alone, whose surface lies in the z = 0 plane. In explicit integral form, after Fourier transforming with respect to coordinates parallel to the translationally invariant xy-plane and suppressing the in-plane wave number q‖ and frequency ω, we obtain the corresponding integral relation for K(z, z'). Here, the polarization function for the 2D layer is given by Eq. (4), where v is the Coulomb potential energy and the 2D response function's localization to the layer at z = a is expressed through the factor F.
[Figure caption residue: in (a), the two outer solutions are the plasmon branches of graphene at a distance from a conducting surface, whereas the curve in between corresponds to the zeros of 1 + 2πe²/(ε_s q‖) Π⁽⁰⁾_2D(q‖, ω) for free-standing graphene; in (b), only the solutions of S_C(q‖, ω) = 0 are presented. In both (a) and (b), the plasmon energy is scaled with respect to the chemical potential µ, and all plasmon curves are superimposed on a background density plot of Π_2D(q‖, ω) to illustrate the effects due to Landau damping.]
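The two displayed operator equations were lost in extraction; the LaTeX block below is a plausible reconstruction inferred purely from the surrounding prose (add the polarizabilities, then multiply Eq. (1) from the right by K̂ and from the left by K̂_SI), and should be checked against the published version.

```latex
% Eq. (1): composite dielectric function from additive polarizabilities,
% with \hat{K} \equiv \hat{\epsilon}^{-1}
\hat{\epsilon} = \hat{1} + \hat{\alpha}_{2D} + \hat{\alpha}_{SI}
% Eq. (2): the RPA integral equation that follows, where
% \hat{K}_{SI} = (\hat{1} + \hat{\alpha}_{SI})^{-1}
\hat{K} = \hat{K}_{SI} - \hat{K}_{SI}\,\hat{\alpha}_{2D}\,\hat{K}
```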
with Π_2D(q‖, ω) as the 2D ring diagram. Upon substituting this form of the polarization function for the monolayer into the integral equation for the composite inverse dielectric function K, we obtain Eq. (6). We now set z₁ = a in Eq. (6); solving algebraically for K(a, z₂) then yields a closed expression involving the "dispersion function" S_C(q‖, ω), whose zeros determine the plasmon resonances of the composite system. In our numerical calculations, we employ K_SI(z, z') given by Eq. (30) of Ref. [32] for the semi-infinite metallic substrate in the local limit, whence Eqs. (6) through (9) yield 30 the explicit result for K(z₁, z₂) (Eq. (10)), with S_C(q‖, ω) = 1 + 2πe² … (Eq. (11)).
[Figure caption residue: the background density plots show Im Π⁽⁰⁾_2D(q‖, ω + i0⁺), whose values determine Landau damping; the red lines correspond to undamped plasmons, where the magnitude of the plasmon dispersion function S_C(q‖, ω + i0⁺) vanishes. Panels (a) and (b) show the cases ∆ = 0.95 and 0.5; panels (c) and (d) demonstrate the behavior of the plasmon spectra for µ = 1.5µ₀ with ∆ = 0.93µ and ∆ = 0.33µ, respectively. Here, µ₀ = 0.2 eV is the chemical potential used in the calculations of Fig. 3, a value chosen to ensure the applicability of an isotropic energy band structure at low doping 27 .]
Although the principal focus here is to examine the role of 2D graphene plasma nonlocality, embedded in Π⁽⁰⁾_2D(q‖, ω), on the coupled plasmon spectrum of the composite system, we briefly revisit the local results of Ref. [30] to point out their generalization to include gapped graphene along with the previously discussed gapless results. In this regard, the graphene polarizability is also taken in the local limit, Π⁽⁰⁾_2D(q‖, ω) ≈ Cq²/ω², so that Eq. (11) yields the local dispersion relation (Eq. (12)); the inclusion of a gap is described by the coefficient C, where µ is the chemical potential and ∆ is the gap between valence and conduction bands. Consequently, Eq. (12) yields the plasmon frequencies 30 in terms of two wave-number-dependent functions K₁ and K₂. In the low wave number limit q‖ ≪ 1/a these expressions reduce to forms that are both linear in q‖, differing from the q‖^(1/2)-dependence for free-standing graphene or the 2DEG 9,10,33-36 . Nonlocality of the graphene plasma introduces changes in the features of K(z₁, z₂) of Eq. (10) and in its coupled 2D-surface plasmon spectrum in two respects. First, the local coupled-mode spectrum described in the preceding paragraph is modified by nonlocality corrections in Eq. (11) through the use of the polarization function Π⁽⁰⁾_2D(q‖, ω) for all wave numbers, as calculated in Ref. 10 for gapped graphene. Secondly, nonlocality introduces natural damping through the occurrence of regions in which plasmons can decay into electron-hole pairs consistent with energy-momentum conservation. The intersection of the plasmon dispersion curve ω(q‖) with such a particle-hole excitation region (PHER) signals the onset of damping at T = 0 K even where S_C(q‖, ω) = 0; however, it is the undamped coupled plasmons that are of interest here. The features of the interacting graphene-surface plasmon spectrum are analyzed numerically using the real and imaginary parts of the polarizabilities of Wunsch 9 for gapless graphene and Pyatkovskiy 10 for gapped graphene (all at zero temperature).
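Since the explicit closed form of S_C is not reproduced above, the Python sketch below uses a toy local-limit stand-in for it: a 2D pole Cq²/ω² coupled to the conductor through an image factor e^(−2qa) tied to the surface-plasmon pole at ω_p/√2. The functional form, units and parameter values (wp, a, C) are assumptions for illustration only; what the sketch demonstrates is the procedure of locating the coupled plasmon branches as zeros of the dispersion function, bracketing sign changes away from the pole.

```python
import numpy as np
from scipy.optimize import brentq

wp, a, C = 1.0, 1.0, 0.05    # illustrative dimensionless parameters

def S_C(w, q):
    """Toy local-limit dispersion function: a 2D pole C q^2 / w^2 screened
    by the conductor through an image term exp(-2 q a) tied to the
    surface-plasmon pole at wp / sqrt(2). Zeros mimic the coupled modes."""
    surf = 0.5 * wp**2 / (0.5 * wp**2 - w**2)        # -> 1 as w -> 0
    v_eff = (1.0 - np.exp(-2.0 * q * a) * surf) / q  # image-screened Coulomb
    return 1.0 - v_eff * C * q**2 / w**2

def branches(q, wgrid):
    """Bracket sign changes of S_C along wgrid and refine with Brent."""
    vals = np.array([S_C(w, q) for w in wgrid])
    return [brentq(S_C, wgrid[i], wgrid[i + 1], args=(q,))
            for i in np.flatnonzero(vals[:-1] * vals[1:] < 0.0)]

# Scan each side of the surface-plasmon pole at wp/sqrt(2) separately.
seg1 = np.linspace(1e-3, wp / np.sqrt(2) - 1e-3, 400)
seg2 = np.linspace(wp / np.sqrt(2) + 1e-3, 3.0, 400)
for q in (0.05, 0.2, 0.5):
    print(q, np.round(branches(q, seg1) + branches(q, seg2), 4))
```

In this toy model the lower root grows roughly linearly with q at small q, while the upper root emerges from ω_p/√2, qualitatively matching the two-branch structure discussed in Sec. III.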
III. CALCULATED RESULTS AND DISCUSSION
First, we consider graphene with no energy gap and linear energy dispersion for the valence and conduction bands. The boundaries of the particle-hole mode region are linear, enclosing a triangular region where the plasmons are not damped. The plasmons for gapless graphene are shown in Fig. 2. We discern two plasmon branches, one attributed to the surface (the upper branch, originating at the frequency ω_p/√2) and the other to the graphene sheet (starting at the origin). We present results for various values of the distance a between the layer and the surface. When this separation is increased, the two branches evolve into a merged spectral line, similar to the plasmon of extrinsic gapless graphene. The surface plasmon branch tends to be dispersionless and to exist in the long wavelength limit only. For all presented cases, the upper plasmon mode shows a stronger and broader peak. We display the absolute value of the real part of S_C⁻¹(q‖, ω) to emphasize each peak.
We also solve the equation S_C(q‖, ω) = 0 numerically, demonstrating the exact solution for the plasmon dispersion relation for both cases of zero (see Fig. 3) and finite (Fig. 4) energy band gap. These solutions become extremely interesting when the upper branch splits into two parts for the case of a small energy gap. When the gap is zero, once again we see that the upper branch (which we attribute to the presence of a surface) adopts certain features of the plasmon in gapless graphene, mainly because the branch is located in the same {ω, q‖} regions, both inside and outside the PHER. However, according to our analytical results, for long wavelengths both branches possess finite slope, in contrast to the √q‖ behavior in free-standing graphene.
[Figure caption residue: the panels were obtained for various chosen values of the energy gap ∆, the distance a between the surface and the graphene layer, and the chemical potential µ: (a) k_F a = 1 and ∆ = 0.6µ; (b) k_F a = 5 and ∆ = 0.6µ; (c) µ = 1.5µ₀, k_F a = 1 and ∆ = 0.93µ; (d) µ = 1.5µ₀, k_F a = 1 and ∆ = 0.33µ. Here µ₀ is an arbitrary doping parameter in terms of which the chemical potential is measured, and k_F^∆ ≡ √(µ² − ∆²)/(ħ v_F).]
The case of a small energy gap is presented in Fig. 4 for various energy-gap and doping values. Similar to free-standing graphene, the upper branch is extended due to splitting of the PHER. It might also be split into two different branches, as mentioned in Ref. [10]. When the distance a of the 2D layer from the surface is increased, the two plasmon branches merge into a single branch, which is similar to the plasmon dispersion in gapped graphene. The general conclusion is that when one of the factors (energy gap, chemical potential or the separation a) is appreciable, the changes caused by a sizable change in one of the others are not significant.
The role played by the energy band gap is an important part of our investigation. For monolayer graphene, an energy gap leads to an extended region of undamped plasmons 10 . In Fig. 5, we present the regions of the real and imaginary parts of the non-interacting polarization function which have distinct functional forms. We pay particular attention to the regions outside of the single-particle excitation continuum since, as mentioned previously, they encompass plasmon frequencies in the domains of {ω, q‖} where the plasmons are not damped. We denote these planar regions (Ω1, Ω5 and Q4) with reddish colors. The condition Im Π⁽⁰⁾_2D(q‖, ω) = 0 is also satisfied in Q3, but no plasmons are observed in this region. Region Q4, with v_F q‖ > ω, plays a crucial role in our study because this is where the extended undamped lower plasmon branch is located. This is a new situation, which was not encountered in the previous works of Refs. [9,10,25,38], and it is attributed to screening by the carriers in the thick substrate adjoining the 2D layer. Figure 7 exhibits our results for plasmon excitations of a composite system consisting of a layer of gapped graphene and a thick substrate for various values of the energy gap, chemical potential and the distance between the two bodies. The PHER and its boundaries constitute an important factor determining the plasmons. Consequently, the upper branch, located mainly in the Ω1 and Ω5 regions, bears some similarity to the plasmons in free-standing gapped graphene, including its splitting into two parts in the vicinity of the boundary of Ω2. The results for both the lower and upper branches definitely depend on the gap. In the long wavelength limit, we demonstrate that ω₁ ∝ √C and ω₂ ≃ ω_p/√2 + ⋯, where C ∝ 1 − ∆²/µ². The plasmon dispersion relation for a free-standing graphene layer with a finite energy gap is ω ∝ √(C q‖), which differs from our solution and from Ref. [30]. However, there is an interesting similarity in that the plasmon frequency is decreased with increased energy gap. This dependence is observed for increased values of q‖.
The important differences in the plasmon spectra between free-standing graphene and graphene interacting with a half space arise from the lower plasmon branch, which lies on both sides of the straight line ω = v_F q‖ and has a linear dispersion for small q‖. According to previously published results 10 , the size of the Q4 region is determined by doping as well as by the energy gap, and the boundary between Q4 and Q2 (with finite Π⁽⁰⁾_2D(q‖, ω)) follows from these two parameters.
The plasmon dispersion for various doping concentrations is presented in Fig. 7. Increasing both µ and ∆, we find more extended branches where undamped plasmons exist. Figure 7(d) clearly demonstrates anti-crossing and an extended region of undamped plasmons for both branches. In all cases, the lower plasmon branch does not rise above the line ω = ω_p/√2. The curvature of the upper branch is determined by the ratio ∆/µ rather than by the gap itself. For certain values of this ratio, the upper branch consists of two different, separated plasmon branches.
We note that the exact numerical solutions in Fig. 4, corresponding to S_C(q‖, ω) = 0, are in agreement with the data in the density plots of Figs. 6 and 7. The results in these plots confirm the anti-crossing and the extension of the lower plasmon branches with increased doping and energy gap. We also note that for large values of the ratio ∆/µ ≥ 0.9 the lower branch becomes nearly dispersionless.
IV. CONCLUDING REMARKS
In summary, we have calculated the nonlocal plasmon dispersions within the RPA for monolayer graphene interacting with a substrate, for arbitrary wavelength. In this, we investigated numerically the effects of the energy gap for extrinsic graphene, as well as the effects of its distance from the surface, on the plasmon dispersion relation. Our considerations were motivated by recent experimental work showing a linear plasmon dispersion in the long wavelength limit 39 and by the subsequent theoretical work by one of the authors 30 to account for this observation, which is extended here to a fully general numerical description of nonlocal effects in monolayer graphene when the separation a is varied and when the energy gap is increased. Our new results vividly demonstrate that a thorough investigation necessitates incorporating the full nonlocal polarization function into the dispersion equation at shorter wavelengths.
The distance a between monolayer graphene and the surface was varied in our nonlocal numerical calculations. In all cases, there are two plasmon branches; one originating from the surface plasmon and the other from the graphene layer. Both gapless and gapped graphene have been investigated. The most important consequence of introducing the energy gap in graphene is the extended region of undamped plasmons for both branches. Specifically, referring to Fig. 3(a), we note that the upper plasmon dispersion curve enters the gap in the particle-hole spectrum like that for gapped free-standing graphene, and these two curves are close to each other within this gap. In addition, the lower plasmon branch is undamped for a wider range of wave vectors q‖ by entering the gap in the particle-hole region. As revealed in Fig. 4(c), the lower branch may anti-cross with the upper one for sufficiently high doping concentration and large band gap. Both plasmon frequencies decrease with increased energy gap. This is also the behavior for free-standing gapped graphene; however, the exact mathematical dependence is different in each case. Also, either one of the plasmon branches may bifurcate into two branches in the single-particle excitation region, as demonstrated in Fig. 7(b). These new results for the plasmons may potentially lead to a number of applications in electronic devices, since the plasmons play an important role in the response to external electromagnetic fields.
"year": 2014,
"sha1": "c2e36d45d8ceffa31d9aaa5aa93a87993b5697ba",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevB.91.235416",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "c2e36d45d8ceffa31d9aaa5aa93a87993b5697ba",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119192429 | pes2o/s2orc | v3-fos-license | Keck HIRES Spectroscopy of SkyMapper Commissioning Survey Candidate Extremely Metal-Poor Stars
We present results from the analysis of high-resolution spectra obtained with the Keck HIRES spectrograph for a sample of 17 candidate extremely metal-poor (EMP) stars originally selected from commissioning data obtained with the SkyMapper telescope. Fourteen of the stars have not been observed previously at high dispersion. Three have [Fe/H]<=-3.0 while the remainder, with two more metal-rich exceptions, have -3.0<=[Fe/H]<=-2.0 dex. Apart from Fe, we also derive abundances for the elements C, N, Na, Mg, Al, Si, Ca, Sc, Ti, Cr, Mn, Co, Ni, and Zn, and for n-capture elements Sr, Ba, and Eu. None of the current sample of stars is found to be carbon-rich. In general our chemical abundances follow previous trends found in the literature, although we note that two of the most metal-poor stars show very low [Ba/Fe] (~-1.7) coupled with low [Sr/Ba] (~-0.3). Such stars are relatively rare in the Galactic halo. One further star, and possibly two others, meet the criteria for classification as a r-I star. This study, together with that of Jacobson et al. (2015), completes the outcomes of the SkyMapper commissioning data survey for EMP stars.
INTRODUCTION
As reviewed by Frebel & Norris (2015), the detailed study of the most metal-poor stars in the Galaxy can provide vital clues to the processes of star formation and to the synthesis of the chemical elements at the earliest times. Such stars, however, are extremely rare and at the present time only a handful are known with [Fe/H] ≤ −4.5 dex. The importance of these objects has nevertheless prompted a number of previous and on-going searches for such extremely metal-poor (EMP) stars (e.g. the HK survey (Beers et al. 1992), the HES (Christlieb et al. 2008; Frebel et al. 2006), the SDSS (see Aoki et al. 2013), the 'Best and Brightest' survey (Schlaufman & Casey 2014), LAMOST (see Li et al. 2015a), and Pristine (Starkenburg et al. 2017), and references therein). The discovery of such stars is one of the prime science drivers behind the SkyMapper imaging survey of the southern hemisphere sky (Keller et al. 2007; Wolf et al. 2018). The metallicity sensitivity is achieved through the incorporation of a relatively narrow v-filter, whose bandpass includes the Ca ii H and K lines, into the SkyMapper filter set (Bessell et al. 2011). The SkyMapper uvgriz photometric survey of the southern sky is ongoing, but during the commissioning of the telescope a number of vgi images were taken to search for EMP candidates. Despite the suboptimal quality of many of the images, the program, which we will refer to as the "SkyMapper commissioning survey for EMP stars" (to distinguish it from current on-going work), was successful in that it resulted in the discovery of the currently most iron-poor star known, SMSS J031300.36-670839.3 (Keller et al. 2014; Bessell et al. 2015; Nordlander et al. 2017). The analysis of high dispersion spectra of a large sample of additional EMP candidates selected from the commissioning survey was presented in Jacobson et al. (2015). Here we present the final results from that survey: the outcome of high dispersion spectroscopic observations of a further sample of SkyMapper EMP candidates drawn from the commissioning survey photometry. SkyMapper commissioning-era photometry was also employed in the search for EMP stars in the Galactic Bulge (Howes et al. 2015, 2016). The paper is organised as follows. The following section describes the target selection, the observations, and the data reduction process. Section 3 then describes the determination of the atmospheric parameters for the stars and the subsequent analysis to derive the chemical abundances. The abundance results are compared with existing halo EMP-star studies, such as those of Yong et al. (2013), Placco et al. (2014), and Jacobson et al. (2015), in §4. The results are briefly summarised in §5.
TARGET SAMPLE AND OBSERVATIONS
As discussed briefly in Jacobson et al. (2015), the initial sample of EMP candidates was selected on the basis of location in a 2-colour diagram in which a photometric metallicity index m i = (v − g) 0 −1.5(g − i) 0 is plotted against (g − i) 0 (see also Keller et al. 2007). Because of the variable quality of the commissioningepoch data, and because of the calibration approach employed, the photometric candidate list required additional input to identify the best candidates for high dispersion spectroscopic follow-up. This was achieved by obtaining low-resolution (R ≈ 3000) spectra of the candidates with the WiFeS spectrograph (Dopita et al. 2010) on the ANU 2.3m telescope at Siding Spring Observatory. The resulting flux calibrated spectra, which cover the wavelength range ∼350-600 nm, are then compared with a grid of MARCS 1D model atmosphere fluxes and the best-fit determined, as described in Norris et al. (2013). Because the spectra cover the Paschen continuum as well as the Balmer jump and the Balmer lines of hydrogen, the bestfit temperature and gravity are generally well determined. Consistency with the temperature/gravity relation for an old metal-poor isochrone 1 , which is appropriate for halo stars, provides a constraint on the adopted reddening while the strengths of metal-lines such as Ca ii H and K and Mg i b provide the metallicity information for a given temperature and gravity. The outcome of the 2.3m spectroscopy is then a sample of EMP candidates that can be used with some confidence as a basis for follow-up studies at high dispersion.
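A small sketch of the photometric pre-selection implied above: compute m_i from dereddened v, g, i magnitudes and apply a cut in the (g − i)₀ versus m_i plane. The photometry and the numerical form of the cut below are invented placeholders; the real lower boundary follows a [M/H] = −2.0, 12.5 Gyr isochrone.

```python
import numpy as np

def metallicity_index(v0, g0, i0):
    """SkyMapper photometric metallicity index m_i = (v-g)_0 - 1.5 (g-i)_0;
    more negative values at fixed (g-i)_0 suggest weaker Ca II H & K
    absorption and hence lower metallicity."""
    return (v0 - g0) - 1.5 * (g0 - i0)

# Illustrative (made-up) dereddened photometry for three stars.
v0 = np.array([15.80, 16.10, 15.20])
g0 = np.array([15.10, 15.20, 14.60])
i0 = np.array([14.55, 14.40, 14.05])
mi = metallicity_index(v0, g0, i0)
gi = g0 - i0

# Hypothetical stand-in for the EMP selection window (illustrative only).
candidate = mi < (0.20 + 0.10 * gi)
print(np.column_stack([gi, mi]), candidate)
```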
The candidates observed with the HIRES spectrograph (Vogt et al. 1994) at the Keck-I telescope were those in the commissioning survey EMP candidate sample that had not been previously observed at high dispersion (cf. Jacobson et al. 2015), that were accessible from the Keck Observatory on the scheduled date, and that had low-resolution spectroscopic abundance estimates [Fe/H] 2.3m ≤ −2.5 dex, as determined from the 2.3m spectra. In all, HIRES spectra were obtained for 15 candidate EMP stars on the ANU-allocated night of 21 September 2013 (UT), together with spectra of two stars that had also been observed at Magellan with the MIKE spectrograph in the Jacobson et al. (2015) study. One further star, SMSS J221334.13-072604.1, was subsequently found to be included in the sample analysed by Aoki et al. (2013) under the designation SDSS J2213-0726.
Observing conditions were good, with the seeing slowly rising from 0.6″ to 1″ by the end of the night. The spectrograph was configured with the HIRESb cross-disperser and the C1 decker that has a slit width of 0.86″, yielding a resolution R ≈ 50,000. Detector binning was 2 (spatial) × 1 (spectral) and the low-gain setting (∼2e − /DN) was used for the 3 CCDs in the detector mosaic. Details of the observations are given in Table 1. The table lists the SkyMapper survey designations, the positions, and the SkyMapper g, (g−i) 0 and m i photometry taken from the SkyMapper DR1.1 data release, which supersede the original commissioning-era photometry. The reddening corrections follow the procedure outlined in Wolf et al. (2018), while m i is the metallicity index, defined as (v − g) 0 − 1.5(g − i) 0 , for which more negative values at fixed colour indicate potentially lower metallicity (see Keller et al. 2007; Da Costa et al. 2019). Also given are the integration times and the S/N per pixel of the reduced spectra at 450 nm and 600 nm. The median values are 22 pix −1 at 450 nm and 26 pix −1 at 600 nm.
In Fig. 1 we show the location of the observed stars in the SkyMapper metallicity-sensitive diagram based on the DR1.1 photometry. Shown also in the figure is the selection window that is used in defining photometric EMP candidates for the current (post-commissioning) survey, where the lower boundary is set by the location of the [M/H] = −2.0 dex, 12.5 Gyr isochrone in this plane (see Da Costa et al. 2019, in prep. for details). While photometric uncertainties, particularly in the v-magnitudes, introduce scatter in this diagram, it is reassuring that all but one of the 12 candidates found in the analysis here to have [Fe/H] LTE ≤ −2.5 dex (where LTE indicates that the Fe abundance is obtained assuming local thermodynamic equilibrium) lie within the selection window, while there is only one contaminant: a star found here to have [Fe/H] LTE > −2.0 despite lying (just) in the selection region. Although the sample is small, Fig. 1 does verify that the current SkyMapper photometric selection process efficiently finds stars with [Fe/H] LTE ≤ −2.5, with only a very minor degree of contamination. In fact, Da Costa et al. (2019) show that in the current on-going program, ∼85% of the SkyMapper DR1.1 photometric EMP candidates that lie within the selection window shown in Fig. 1, and which also possess metallicity estimates from low-resolution 2.3m spectra, have [Fe/H] 2.3m ≤ −2.0 dex, while ∼40% have [Fe/H] 2.3m ≤ −2.75 dex. The best candidates are then followed up at high dispersion with the MIKE echelle spectrograph on the Magellan 6.5m telescope.
The observed spectra were processed with the standard HIRES reduction pipeline MAKEE to obtain flat-fielded, extracted, wavelength-calibrated, velocity-corrected spectra for each echelle order. For the subsequent analysis the individual spectral orders were merged into a single continuous spectrum for each of the 3 CCD detectors, which was then continuum normalized and wavelength-offset by the observed geocentric velocity.
Radial velocities (RVs) were derived using the IRAF FXCOR task, which cross-correlates the object spectrum with a template spectrum. For the template, we used a synthetic spectrum generated with the June 2014 version of MOOG (Sneden 1973). This spectrum was computed with a stellar model atmosphere interpolated from the Castelli & Kurucz (2004) grid, adopting parameters (effective temperature T eff , surface gravity log g, microturbulence ξ t , metallicity [M/H]) = (4800 K, 1.5, 2.0 km s −1 , −2.50). The errors in the RVs associated with the cross-correlation technique are generally small; in our case they are between ∼0.2 and ∼0.6 km s −1 . As we do not have repeated observations of the same star, we cannot provide more realistic estimates of the internal velocity uncertainties. Independent radial velocity measurements are available for four of our stars: the two in common with Jacobson et al. (2015) (SMSS J010839.58-285701.5 and SMSS J034249.52-284215.8), the star in common with Aoki et al. (2013), and the star SMSS J202059.17-043447.0, which has a radial velocity tabulated in Gaia DR2 (Gaia Collaboration et al. 2018). SMSS J010839.58-285701.5 also has a radial velocity listed in Gaia DR2. Comparison with these independent values reveals that a correction of 84±2 (standard error of the mean) km s −1 to our velocities is required for agreement. We can find no obvious explanation for this velocity offset, but we have verified its existence via an independent reduction of a subset of the observed spectra. With the offset applied, our velocities agree well with the published values, and there is no evidence for any significant velocity variability in these four stars. Table 1 then lists the heliocentric radial velocity for each star in our sample after applying the velocity offset. Many of the stars have large heliocentric velocities, as expected for a sample dominated by Galactic halo stars.
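A toy version of the FXCOR-style measurement: Doppler-shift a template across a velocity grid and take the shift that maximizes the correlation with the observed spectrum. Everything here (the Gaussian 'line', grids, widths) is fabricated for illustration; the sketch recovers the injected +120 km s⁻¹ shift.

```python
import numpy as np

C_KMS = 299792.458

def rv_from_ccf(wave, flux, twave, tflux, v_grid):
    """Toy cross-correlation RV: Doppler-shift the template across a
    velocity grid and return the shift maximizing the correlation with
    the observed spectrum (both continuum-normalized)."""
    obs = flux - flux.mean()
    ccf = []
    for v in v_grid:
        shifted = np.interp(wave, twave * (1.0 + v / C_KMS), tflux)
        ccf.append(np.sum(obs * (shifted - shifted.mean())))
    return v_grid[int(np.argmax(ccf))]

# Illustrative test: a single Gaussian 'line' red-shifted by +120 km/s.
twave = np.linspace(5160.0, 5190.0, 3000)
tflux = 1.0 - 0.6 * np.exp(-0.5 * ((twave - 5172.7) / 0.15) ** 2)
wave = twave.copy()
flux = np.interp(wave, twave * (1.0 + 120.0 / C_KMS), tflux)
v_grid = np.arange(-300.0, 300.5, 0.5)
print(rv_from_ccf(wave, flux, twave, tflux, v_grid))   # ~ +120 km/s
```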
CHEMICAL ABUNDANCE ANALYSIS
Chemical abundances were derived from a local thermodynamic equilibrium (LTE) analysis by using the June 2014 version of the spectral analysis code MOOG (Sneden 1973), together with the alpha-enhanced Kurucz model atmospheres of Castelli & Kurucz (2004), whose parameters have been obtained as described in Sect. 3.1. The reference solar abundances adopted were those of Asplund et al. (2009).
In the following sections we detail the approach employed to derive the adopted atmospheric parameters, and describe the spectral features used to infer the chemical abundances. In general, we follow the procedures outlined in Jacobson et al. (2015) in order to facilitate a direct comparison of the results obtained here with those in that work.
Atmospheric parameters
The atmospheric parameters were derived via a number of different steps. First, as in Jacobson et al. (2015), initial values of T eff and the microturbulence ξ t were determined by imposing excitation potential (E.P.) equilibrium for Fe i, to yield T eff , and by removing any trend between Fe i abundance and the reduced equivalent width (EW) to fix ξ t . For the majority of the stars observed, however, there was a paucity of measureable Fe ii lines in the spectra invalidating the determination of a spectroscopic log g value by matching Fe i and Fe ii abundances. Instead, we derived log g by matching the T eff value with a 12 Gyr Yonsei-Yale isochrone (Demarque et al. 2004) that has [α/Fe]=+0.4 and the appropriate metallicity for the star as derived from the initial analysis. The procedure was then iterated until the T eff , log g and ξ t values did not change appreciably -usually only one iteration was required.
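The excitation-equilibrium iteration can be caricatured as two damped one-dimensional updates: flatten the Fe i abundance trend against E.P. by adjusting T eff, and against reduced EW by adjusting ξ t. The line list and the abundance response below are made-up stand-ins for MOOG output; only the balancing loop reflects the procedure described above.

```python
import numpy as np

def line_abundances(teff, xi, lines):
    """Stand-in for MOOG: returns one Fe I abundance per line with a
    made-up linear response, so the loop below has trends to flatten."""
    ep, rew = lines[:, 0], lines[:, 1]
    return (4.5 + 2e-4 * (teff - 4800.0) * (ep - 2.5)
                - 0.3 * (xi - 2.0) * (rew + 5.0))

rng = np.random.default_rng(7)
lines = np.column_stack([rng.uniform(0.0, 4.5, 80),      # E.P. (eV)
                         rng.uniform(-5.8, -4.5, 80)])   # log(EW / lambda)

teff, xi = 5200.0, 1.5                  # deliberately wrong starting guess
for _ in range(30):
    ab = line_abundances(teff, xi, lines)
    slope_ep = np.polyfit(lines[:, 0], ab, 1)[0]    # abundance vs E.P.
    slope_rew = np.polyfit(lines[:, 1], ab, 1)[0]   # abundance vs red. EW
    teff -= 2e3 * slope_ep    # damped step toward excitation equilibrium
    xi += 2.0 * slope_rew     # damped step toward zero REW trend
print(round(teff), round(xi, 2))        # converges to ~4800 K and ~2.0 km/s
```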
However, as noted by Jacobson et al. (2015, and the references therein), spectroscopic T eff values for metal-poor red giants are generally cooler than those inferred from photometry due to departures from LTE. Jacobson et al. (2015) dealt with this issue by adopting corrections to the spectroscopic effective temperatures as described in Frebel et al. (2013). These corrections shift the spectroscopic temperatures to a scale that is more consistent with photometrically derived temperatures. Such a shift is also supported by the recent detailed 3D non-LTE calculations for Fe (Amarsi et al. 2016) and H (Amarsi et al. 2018). Consequently, in order to allow direct comparison of the abundances derived here with those of Jacobson et al. (2015), we have followed the same approach: the spectroscopic T eff values have been corrected as described in Frebel et al. (2013) leading to updated values of ξ t and the isochrone-based log g values. Again the process was iterated until convergence was achieved, and the resulting values used in the abundance analysis. In the end, the corrections applied to the spectroscopic T eff ranged between ∼150 and ∼220 K, being larger for the cooler stars.
We can verify the suitability of the final adopted stellar parameters by comparing them with the T eff and log g values derived from the spectrophotometric fits to the 2.3m low-resolution spectra. The comparison is shown in the upper panels of Fig. 2. The top panel compares the corrected spectroscopic T eff values with the spectrophotometric determinations: it shows excellent agreement, with the points scattering about the 1:1 line, and the mean difference between the determinations is only 10 K (spectroscopic T eff hotter) with a standard deviation of 150 K. Ascribing equal uncertainties to each method then indicates that the uncertainty in the adopted spectroscopic T eff values is of order 100 K. The largest discrepancy occurs for star SMSS J212217.52-295552.7, where the corrected spectroscopic temperature is ∼450 K hotter than the spectrophotometric determination. There is no straightforward explanation for this difference, although we note that, based on the other stars in the sample, the spectrophotometric temperature for this star is too cool by ∼200 K for its (g−i) 0 colour. We also note that with g ≈ 16.4, this star is fainter than the usual g=16 limit for 2.3m follow-up observations, while the HIRES observations have one of the lowest S/N values in the sample. For consistency of approach we retain the use of the corrected spectroscopic temperature for this star, although the uncertainty is likely larger than the typical ±100 K value.
The middle panel shows the comparison for the log g values. Here the mean difference, in the sense log g 2.3m −log g spec, is +0.05 dex with a standard deviation of 0.35 dex after excluding SMSS J212217.52-295552.7, where the large difference in temperature and our isochrone-based approach to fixing log g spec result in a significant offset from the spectrophotometric value. Again assuming equal uncertainties in each method, this suggests that the uncertainty in the adopted log g values is of order ∼0.25 dex. The adopted atmospheric parameters and the resulting [Fe/H] LTE values are given in Table 2.
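The step from the scatter of the differences to the per-method uncertainty quoted above assumes the two determinations contribute equally:

```latex
\sigma_{\rm diff}^{2} = \sigma_{\rm spec}^{2} + \sigma_{\rm 2.3m}^{2}
  = 2\,\sigma_{\rm single}^{2}
\;\Rightarrow\;
\sigma_{\rm single} = \sigma_{\rm diff}/\sqrt{2}
\approx 150\,{\rm K}/\sqrt{2} \approx 106\,{\rm K},
\qquad
0.35\,{\rm dex}/\sqrt{2} \approx 0.25\,{\rm dex}.
```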
Together with the [Fe/H] LTE values we list the [Fe/H] non−LTE values obtained by applying non-LTE corrections to the Fe i lines as in Lind et al. (2012, 2017). For ease of comparison, in the following we use our [Fe/H] LTE values. An independent check on the adopted atmospheric parameters is provided by a comparison with those adopted in Jacobson et al. (2015) for the two stars in common. For SMSS J010839.58-285701.5 the T eff /log g/[Fe/H] values derived here are very close to those adopted by Jacobson et al. (2015). The differences in the parameters are reassuringly low, giving confidence that the results derived here can be straightforwardly compared with those of Jacobson et al. (2015). We also note that for star SMSS J221334.13-072604.1, Aoki et al. (2013) list parameters of 5150/1.8/−2.55 while we find 4810/1.52/−2.89; the higher abundance given by Aoki et al. (2013) is likely largely a direct consequence of the more than 300 K higher temperature employed in that study. For completeness, we note that Aoki et al. (2013) did not determine spectroscopic temperatures; rather, they used the temperatures determined by the SEGUE Stellar Parameter Pipeline (SSPP) from the SEGUE low-resolution spectra (see Lee et al. 2011, and references therein). For this particular star, however, the temperature estimates given (but not used) by Aoki et al. (2013) from the (V − K) 0 and (g − r) 0 colours, namely 4724 K and 4867 K, are much more consistent with our determination of 4810 K than the SSPP value used by Aoki et al. (2013).
For completeness we also show in the bottom panel of Fig. 2 a comparison between the [Fe/H] 2.3m values estimated from the fits to the low-resolution spectra and the final [Fe/H] LTE values determined from the analysis of the high-resolution Keck spectra. Given that the low-resolution values are quantized at the 0.25 dex level, the agreement is reasonable: the mean difference is 0.33 dex, with the low-resolution estimates being lower, and the standard deviation of the differences is 0.32 dex.
In the following, estimates of the internal uncertainties in the chemical abundances due to the adopted model atmospheres are obtained by varying the stellar parameters, one at a time, as detailed in the Abundance errors subsection below.
Chemical species analysed
A list of the spectral lines used in the abundance analysis, together with the excitation potentials (E.P.), the total oscillator strengths (log gf) employed, and the measured equivalent widths (EWs), is provided in Tab. 3. The atomic data come from Jacobson et al. (2015), with the exception of a few lines highlighted in Tab. 3. In most cases the analysis is based on the measurement of EWs via Gaussian fits to the profiles of well-isolated lines, as described in Marino et al. (2008); exceptions to this approach are discussed below. When required, and when atomic data are available from the literature, hyperfine and/or isotopic splitting was incorporated in the analysis, as indicated in the last column of Table 3. We now comment in detail on the transitions used in the analyses for different element classes, noting that for some species abundances are determined only for a subset of the sample depending on the S/N of the spectrum and the adopted atmospheric parameters.

Light elements

Carbon abundances were derived from spectral synthesis of the CH G-band at ∼4300 Å; examples of the fits are shown in Fig. 3. Similarly, nitrogen abundances come from spectral synthesis of the CN bands B²Σ−X²Σ at ∼3880 Å and ∼4215 Å, using the carbon abundance derived from the G-band fits. Sodium abundances were inferred from the Na resonance doublet at ∼5893 Å. For three stars we were able to estimate Na from the doublet at ∼5685 Å. Sodium abundances were then corrected for NLTE effects, as in Lind et al. (2011), and listed in Tab. 4. For most stars, we were able to infer Al abundances from the spectral synthesis of the lines used also in Jacobson et al. (2015), namely at ∼3961 Å and ∼3944 Å.
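Returning to the EW-based measurements, the relation between a fitted Gaussian profile and the equivalent width can be sketched as follows (a schematic stand-in for the actual fitting code; the flux is assumed to be continuum-normalized):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_line(wl, depth, center, sigma):
    """Continuum-normalized absorption-line profile."""
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def equivalent_width(wl, flux, p0):
    """EW (in the units of wl) from a Gaussian fit to an isolated line:
    the integral of (1 - F/Fc) is depth * sigma * sqrt(2 * pi)."""
    (depth, center, sigma), _ = curve_fit(gaussian_line, wl, flux, p0=p0)
    return depth * abs(sigma) * np.sqrt(2.0 * np.pi)
```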
α-elements
We determined chemical abundances for the α-elements Mg, Si, Ca, and Ti. For magnesium, silicon and titanium the abundances could be determined for all the stars in the observed sample, since between one and three strong lines were available for these elements; a larger number of lines were generally detectable for Ti, particularly for Ti ii. Calcium abundances were inferred from only one or two lines (see Table 5). Note that, given the lack of Fe ii abundances, Fe i has been used in forming the Ti ii abundance ratios relative to Fe.
Iron-peak elements
A few lines were available for each of the iron-peak elements Sc, Cr, Mn, Co, Ni and Zn (see Tab. 6). The abundances for these elements were determined from the measured EWs, except for Mn, for which we synthesised the triplet at ≈4033 Å to take hyperfine structure into account. The number (#) of spectral features analysed for each element and the standard deviations of the abundances (σ) are also reported in Tab. 6.
Neutron-capture elements
We derived abundances for the neutron-capture elements Sr (from the resonance lines at 4078 and 4215 Å), Ba (from the resonance lines at 4554 and 4934 Å and the feature at 5854 Å), and Eu (from the resonance line at 4130 Å). Specifically, we employed a spectrum synthesis approach to the analysis, since hyperfine and/or isotopic splitting and/or blended features needed to be taken into account. For example, the spectral features of Eu ii have both significant hyperfine substructure and isotopic splitting. For this element solar-system isotopic fractions were assumed in the computation. The right panels of Fig. 3 show examples of the synthetic spectrum fits to the strong Ba ii line at 4934.1 Å. Our Ba abundances were computed assuming the McWilliam (1998) r-process isotopic composition and hyperfine splitting. The derived abundances are listed in Tab. 7.
Abundance errors
Estimates of the uncertainties in the chemical abundances due to errors in the atmospheric parameters have been obtained by re-deriving the abundances while varying T eff /log g/[m/H]/ξ t, one at a time, by ±100 K/±0.40/±0.30/±0.40 km s−1, assuming that the errors are symmetric for positive and negative changes. The uncertainties used in T eff, log g and [m/H] are reasonable, as suggested by the comparison with the spectrophotometric fits to the 2.3m low-resolution spectra and the stars in common with Jacobson et al. (2015) (see Sect. 3.1). As the internal error in ξ t, we conservatively adopt ±0.40 km s−1. The variations in chemical abundances for each element are listed in Table 8.
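The one-at-a-time computation amounts to a loop of the following form, where derive_abundances is a hypothetical wrapper around the MOOG analysis rather than an actual API:

```python
# Perturbations matching the text: Teff (K), log g, [m/H], xi_t (km/s).
PERTURBATIONS = {"teff": 100.0, "logg": 0.40, "mh": 0.30, "xi": 0.40}

def sensitivity_table(base_params, lines):
    """Abundance change per element for each perturbed parameter.
    derive_abundances() is a schematic placeholder for the MOOG run;
    it is assumed to return {element: abundance}."""
    base = derive_abundances(base_params, lines)
    deltas = {}
    for name, step in PERTURBATIONS.items():
        perturbed = dict(base_params, **{name: base_params[name] + step})
        new = derive_abundances(perturbed, lines)
        deltas[name] = {el: new[el] - base[el] for el in base}
    return deltas
```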
To obtain total error estimates we combine the sensitivities to the atmospheric parameters with a random error term that reflects the finite number of lines measured (N lines): for each element we adopt max(σ, 0.20)/√N lines, where σ is the line-to-line abundance dispersion and 0.20 dex is a conservative floor based on the dispersion of the Fe i line abundances listed in Table 6. Typical values obtained for each element are listed in column (6) of Table 8. The total error is obtained by quadratically adding this random error to the uncertainties introduced by the atmospheric parameters. For Sr and Ba we conservatively adopt an uncertainty of 0.30 dex, considering that the abundances for these elements mostly come from strong resonance lines. Finally, we note that this 1D LTE analysis is subject to abundance uncertainties from three-dimensional (3D) and non-LTE effects (Asplund 2005).
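In formula form, the total uncertainty for an element X is

```latex
\sigma_{\rm tot}(X)=\sqrt{\sum_{p}\Delta_{p}(X)^{2}
 +\left(\frac{\max(\sigma_{X},\,0.20)}{\sqrt{N_{\rm lines}}}\right)^{2}},
```

where the Δp(X) are the abundance changes listed in Table 8 for each perturbed parameter p and σX is the line-to-line dispersion.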
The distribution of [Fe/H] LTE for the sample of 17 commissioning-era SkyMapper EMP candidates observed at Keck is shown in Fig. 4. As in Jacobson et al. (2015), we caution against using the commissioning-era results to constrain the metallicity distribution function at low abundances, as the selection biases cannot be reliably established. Future papers based on a much larger sample of stars selected from SkyMapper DR1.1 photometry and observed at low resolution, coupled with an extensive follow-up investigation with Magellan, will, however, address this issue.
In the following subsections we consider the abundance trends among and between elements of different nucleosynthetic groups. We use as our comparison samples those of Jacobson et al. (2015) and the giant stars in the compilation of Yong et al. (2013), noting that the parameter determination approaches and the line-lists in those works are not identical to those used here so that the possibility of systematic differences cannot be ruled out. Unless otherwise noted all abundances and abundance ratios are 1D LTE values.
Carbon
As a star ascends the red giant branch, the convective envelope deepens, reaching layers affected by CN-cycling, a consequence of which is a reduction of the carbon abundance in the surface layers (and an increase in the surface abundance of N). Since we are interested in the carbon abundance at the star's birth, the so-called 'natal' abundance, our measured carbon abundances need to be corrected for the effects of this evolutionary mixing. The evolutionary mixing corrections depend on T eff, log g and [Fe/H] and have been discussed in detail in Placco et al. (2014). Dr. V. Placco (Placco, 2018, priv. comm.) kindly generated the appropriate corrections to our observed carbon abundances by assuming a natal [N/Fe] = +0.0 and applying the Placco et al. (2014) procedure. Table 4 lists the observed [C/Fe] values and the correction for evolutionary mixing: the estimated 'natal' [C/Fe] is formed by adding the correction to the observed value.
In Figure 5 we show the natal [C/Fe] values for our sample as a function of [Fe/H] LTE: notably, none of our stars is carbon-enhanced, despite the well-established increase in the frequency of CEMP stars with decreasing [Fe/H]. The most likely explanation lies in the selection of EMP candidates from the SkyMapper photometry. As discussed in Da Costa et al. (2019), the strong CH-bands in the spectrum of a CEMP star can depress the flux in the SkyMapper v-filter sufficiently that the inferred metallicity index mimics a more metal-rich star, and thus decreases the probability it will be selected for low-resolution spectroscopic follow-up. Nevertheless the commissioning survey did result in the discovery of the most iron-poor star currently known, a star that is extremely C-rich (Keller et al. 2014; Bessell et al. 2015; Nordlander et al. 2017). Evidently at sufficiently low overall abundance the contaminating carbon features in the v band weaken enough that selection as a photometric candidate again becomes possible.
Nitrogen, sodium and aluminum
The nitrogen, sodium and aluminum abundance ratios with respect to iron for our sample are shown in Fig. 6 as a function of [Fe/H] LTE. Because of low S/N at the wavelength of the CN-bands in many of the spectra, [N/Fe] values could be determined for only five stars in our sample. The values, which lie between 0.5 and ∼1.0 and are listed in Table 4, are nevertheless consistent with the midpoint of the substantial range of [N/Fe] values found in the sample of Yong et al. (2013).

[Table 8 caption: Sensitivity of derived abundances to the uncertainties in atmospheric parameters, the limited S/N (σ S/N), and the total error due to these contributions (σ tot).]

For sodium, while noting that the NLTE corrections would result in lower abundance ratios, we have plotted the LTE abundance ratios to facilitate comparison with the Yong et al. (2013) sample; the values, including the NLTE corrections, are listed in Table 4.
The aluminum abundance ratios of our sample are comparable to both those of Yong et al. (2013) and Jacobson et al. (2015) (lower panels of Fig. 6). We note that the uncertainties associated with our [Al/Fe] values are large due to the relatively low S/N of our spectra, especially below 4000 Å. As with [Na/Fe], we expect the application of NLTE corrections to generate systematic offsets in the [Al/Fe] LTE values; such corrections can be as large as +0.65 dex (Baumueller & Gehren 1997). As discussed in previous work, such higher NLTE [Al/Fe] abundances would be more consistent with the predictions of chemical evolution models (e.g. Kobayashi et al. 2006).
α-elements
The individual α-element (Mg, Si, Ca, Ti i, Ti ii) abundances for our sample are displayed as a function of [Fe/H] LTE in Fig. 7 and listed in Table 5. With the exception of one star, all our stars are α-enhanced and their location in the [element/Fe] panels is fully consistent with the larger comparison samples of Jacobson et al. (2015) and Yong et al. (2013).
The one star that does not show any α-enhancement is SMSS J034249.52-284215.8, which was identified as an "Fe-enhanced" star in Jacobson et al. (2015, specifically their §5.1). For this star we find α-element abundance ratios ([Mg/Fe], [Si/Fe], [Ca/Fe], [Ti i/Fe], [Ti ii/Fe]) = (−0.24, +0.11, −0.08, −0.28, −0.20), values that are fully consistent with those of Jacobson et al. (2015), which are (−0.17, +0.14, −0.16, −0.37, −0.13). We find also that the other elements analysed in this star generally have sub-solar ratios, again consistent with Jacobson et al. (2015). We note that in Sect. 3.1 we used α-enhanced isochrones for all the stars. A solar-scaled [α/Fe] isochrone, more appropriate for this star, results in a log g lower by ∼0.10 dex, which does not significantly affect the derived abundances relative to Fe (see Table 8).
Discussion of the possible origin(s) of this star is given in Jacobson et al. (2015). We note only that, as mentioned above, the low [Na/Fe] for this star, together with its "alpha-poor" nature, is reminiscent of abundance ratios seen in dSph stars. The kinematics of the star are not unusual in comparison with those for the rest of the sample. This star also has the lowest [Al/Fe] in both our sample and that of Jacobson et al. (2015).
Fe-peak elements
In Figure 8 we show our results for the abundance ratios with respect to iron for the iron-peak elements Sc, Cr i, Cr ii, Mn, Co, Ni and Zn as a function of [Fe/H] LTE . The values are listed in Table 6 along with both the number of spectral features analysed and the standard deviations (σ). Also shown in the panels are the equivalent data, where available, for the stars in the comparison samples of Yong et al. (2013) and Jacobson et al. (2015). Although our sample is not large compared to the others, it is evident from the figure that our results are consistent with the abundance ratio trends seen in the comparison samples. There is, however, a suggestion that the Keck data presented here have some systematic differences relative to the comparison samples.
Strontium and Barium
Among the n-capture elements, those which could be analysed in the spectra of the majority of the stars observed here are Sr and Ba. As regards the s-process, Sr is a first-peak s-process element while Ba belongs to the second s-process peak. Both can be generated by the ²²Ne or the ¹³C neutron source, depending on the neutron exposure. These elements can also have r-process contributions, and thus the relative abundances of these elements in metal-poor stars can provide information on nucleosynthetic processes at early times.
The abundance ratios [Sr/Fe] and [Ba/Fe] for the stars in our sample are shown as a function of [Fe/H] LTE in the upper and middle panels of Fig. 9 and are listed in Table 7. We note first that none of our stars shows high (>1 dex) abundance ratios for these elements, i.e., none can be classified as s-enhanced. This is consistent with the lack of CEMP stars in our sample, as discussed in §4.1. As is also apparent in the panels of Fig. 9, our results are generally consistent with those of Yong et al. (2013), which include CEMP-s stars, and Jacobson et al. (2015). There is some indication that our [Sr/Fe] values are systematically lower than those of Jacobson et al. (2015), by approximately 0.3 dex, which is, however, within our observational uncertainties (see Tab. 8).
It is well-known that as overall abundance decreases, the dispersion in the abundance ratios for the n-capture elements relative to iron increases markedly (e.g. McWilliam et al. 1995; Frebel & Norris 2015, and references therein), undoubtedly reflecting variations in the relative contributions of the numerous nucleosynthetic origins for these elements. This is illustrated in the lower panels of Fig. 9. Detailed abundances for other n-capture elements for these stars would provide important information on n-capture nucleosynthesis processes at early times, e.g., the weak r-process versus the main r-process (e.g. Roederer 2013; Li et al. 2015b).

[Figure 5 caption fragment: comparison sample (…2014) shown as grey and red filled diamonds, with the latter marking CEMP stars. In the right panel we compare our evolutionary-mixing corrected values with those of Jacobson et al. (2015), plotted as grey filled circles, which are also corrected for evolutionary-mixing effects. The two stars indicated with blue open circles are the stars with low neutron-capture elements, as discussed in Section 4.4.1.]
Europium
Europium is predominantly synthesized by the r-process (e.g. Sneden et al. 2008; Barklem et al. 2005). We have measured Eu abundances for as many of the stars in our sample as possible, and derived upper limits for the others. The results are shown in the upper panel of Fig. 10, where we compare our results with those of Jacobson et al. (2015) (we note that Yong et al. (2013) did not determine Eu abundances). The agreement is reasonable. The [Eu/Fe] determinations are also given in Table 7. Overall, the scatter in the [Eu/Fe] values is comparable to that seen in Jacobson et al. (2015) and to that in the literature compilation of Frebel (2010). One star in our sample, SMSS J202400.03-024445.9, is a probable r-I star, i.e., moderately enhanced in the r-process elements.
SUMMARY
We have presented here the results of an analysis of high-resolution spectra, obtained with the Keck telescope and the HIRES spectrograph, of 17 candidate extremely metal-poor stars selected from SkyMapper commissioning-era photometry. Fourteen of the stars had not previously been observed at high dispersion. We find that, as in Jacobson et al. (2015), the candidate selection process, i.e., photometry plus low-resolution spectroscopy, is robust, with almost half of the sample having [Fe/H] LTE ≤ −2.8 and with only one 'false positive', an EMP candidate for which [Fe/H] LTE turned out to exceed −2.0 dex. In general, the distribution of element abundances and abundance ratios for this sample closely mimics the earlier results of Jacobson et al. (2015), which were based on Magellan/MIKE high-dispersion spectroscopy of a large sample of SkyMapper commissioning-era EMP candidates. Specifically, we find that none of the present sample can be classified as CEMP stars. Further, we confirm the result of Jacobson et al. (2015) that the star SMSS J034249.52-284215.8 is an example of the relatively rare class of objects known as "Fe-enhanced" stars, i.e., stars with generally sub-solar abundance ratios, including for the α-elements.

ACKNOWLEDGEMENTS

The national facility capability for SkyMapper has been funded through ARC LIEF grant LE130100104 from the Australian Research Council, awarded to the University of Sydney, the Australian National University, Swinburne University of Technology, the University of Queensland, the University of Western Australia, the University of Melbourne, Curtin University of Technology, Monash University and the Australian Astronomical Observatory. SkyMapper is owned and operated by The Australian National University's Research School of Astronomy and Astrophysics. The survey data were processed and provided by the SkyMapper Team at ANU. The SkyMapper node of the All-Sky Virtual Observatory (ASVO) is hosted at the National Computational Infrastructure (NCI). Development and support of the SkyMapper node of the ASVO have been funded in part by Astronomy Australia Limited (AAL) and the Australian Government through the Commonwealth's Education Investment Fund (EIF) and National Collaborative Research Infrastructure Strategy (NCRIS), particularly the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service Projects (ANDS).
The high-dispersion spectra presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
We also acknowledge the traditional owners of the land on which the SkyMapper telescope stands, the Gamilaraay people, and pay our respects to elders past and present.

[Figure 10, lower panel caption fragment: stars in the Jacobson et al. (2015) sample shown as open symbols are those with upper limits in both Ba and Eu. One star in our sample also has only upper limits and is represented with a small star-like symbol, without an upper-limit arrow. The black dashed line and the grey dot-dashed line are the mean [Ba/Eu] abundances in our sample and in Jacobson et al. (2015), respectively. The range of the y axis has been kept the same as in the upper panel.]
"year": 2019,
"sha1": "5580497a3b907d7f2a197ee1f310c7e2d84a7cd1",
"oa_license": "CCBYNCSA",
"oa_url": "https://dspace.mit.edu/bitstream/1721.1/128724/2/1902.10611.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5580497a3b907d7f2a197ee1f310c7e2d84a7cd1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Intonational variation and incrementality in listener judgments of ethnicity
The current study examines how listeners make gradient and variable ethnolinguistic judgments in an experimental context where the speaker's identity is well-known. It features an open-guise experiment (Soukup, 2013) that assessed whether sociolinguistic judgments are subject to incrementality, with judgments increasing in magnitude as variable stimuli demonstrate more extreme differences. In particular, this task tested whether judgments of President Barack Obama as sounding 'more' or 'less' black (e.g., Alim & Smitherman, 2012) are sensitive to differences in intonation. Half of the critical stimuli featured an L+H* pitch accent, which occurs more frequently in African American Language than in Mainstream U.S. English (Holliday, 2016). Four stimuli apiece were created from these phrases by making each pitch accent more extreme in semitone-based F0 steps. Seventy-nine listeners rated these stimuli via the question, "How black does Obama sound here?" Mixed-effects modeling indicated that listeners rated more phonetically extreme L+H* stimuli as sounding blacker, regardless of listener identity. A post-hoc analysis found that listeners attended to different voice quality features in L+H* stimuli. We discuss implications for research in intonation, ethnic identification, incrementality, language attitudes, and sociolinguistic awareness.
Introduction
Recent research in perceptual sociolinguistics has investigated a host of phonetic and phonological variables-primarily segmental-to assess the extent to which social meanings are constructed in perception, similar to the way they are constructed in ongoing production. Despite production research in sociolinguistics demonstrating how speakers use intonational variation to index various ethnic identities and social stances (Burdin, 2015;Holliday, 2016;Reed, 2016), there has been a general lack of perceptual research on the social meanings of intonational variables. In addition, while decades of research have demonstrated U.S. listeners' ability to distinguish African American and white voices (cf. Thomas & Reaser, 2004), these studies have also revealed challenges inherent in isolating speaker-specific variables that drive ethnic identification (Holliday & Jaggers, 2015;Purnell, Idsardi, & Baugh, 1999); indeed, there has been little research on prosody more generally in ethnolinguistic and regional varieties of English (Burdin, Holliday, & Reed, 2018). In the present study, we address these gaps in research by investigating the extent to which listeners perceive specific aspects of intonational variation as indexes of ethnic identity.
In addition, research in perceptual sociolinguistics has rarely confronted the issue of whether social meanings are incremental-that is, how the social meanings of gradient features are affected by these features' phonetic shape. Put differently, does a more phonetically extreme token of a socially marked variable correspond to a stronger social meaning? This gap is partially due to the common practice of treating continuous socially marked variables as categorical, such as /ɹ/ vocalization and /aɪ/ monophthongization (e.g., Labov, Ash, & Boberg, 2004). Even when investigating inherently continuous variables such as vowel quality, research on social meanings also tends to bin variables into discrete categories (Villarreal, 2018). In the present study, we address these gaps by investigating whether listeners' judgments of aspects of intonation are sensitive to the strength of the variable of interest in the phonetic signal.
We pursued these questions about intonational variation and social meaning via a task in which listeners rated samples of President Barack Obama's speech on the degree of 'sounding black.' 1 Critical stimuli contained either one or more L+H* pitch accents or no L+H* pitch accents. The L+H* pitch accent has been shown in production studies to be a resource for performance of African American identity (Holliday, 2016;McLarty, 2018). Pitch accents in critical stimuli also varied according to degree of phonetic extremeness (i.e., the magnitude of F0 excursions). Listeners perceived stimuli with at least one L+H* token as sounding more black than those without, but only for stimuli with more phonetically extreme L+H* realizations (i.e., those with a larger difference between F0 maximum and minimum). These findings contribute to our understanding of how listeners make ethnic judgments based on intonational variation, and how listeners assign social meaning to gradient phonetic variation. 2
Ethnic identification in the U.S.
A body of linguistic research on ethnic identification dating back nearly 70 years has found that U.S. listeners are generally rather accurate (70-100%) at distinguishing black speakers from white speakers (cf. Thomas & Reaser, 2004). Recent studies have attempted to unpack the role of suprasegmentals in ethnic identification. Thomas and Reaser (2004) found that listeners were equally accurate at ethnic identification for monotonized and unmodified stimuli, suggesting that listeners do not rely solely on F0 cues in ethnic identification. They also discovered that some cues relevant to pitch accents are recoverable even from monotonized stimuli (i.e., amplitude, duration, and segmental qualities), so it is conceivable that pitch accents may aid identification even in monotonized stimuli. Holliday and Jaggers (2015) examined listeners' ability to identify the ethnicity of U.S. politicians based on single-word stimuli, in order to assess the effects of voice quality on listener judgments. Building on some of the earlier findings of Purnell et al. (1999), Holliday and Jaggers found that several suprasegmental variables, including jitter and harmonics-to-noise ratio, influenced ethnic identification, though they note that a combination of multiple speakers and contexts may cause challenges in isolating speaker-specific variables influencing ethnic identification. For this reason, in the present study, we attempt to control for the effect of speaker-specific voice quality variation and more carefully isolate the prosodic variables that may affect ethnic identification by employing stimuli from a single speaker.

1 Listeners were intentionally not provided guidance on how to interpret this question, because earlier ethnic identification studies allowed for speakers to answer with their own conceptualizations of race and ethnicity (cf. Thomas & Reaser, 2004). Since one of the aims of the current study was to test for incrementality in ethnic judgments, it was important that listeners' judgments were shaped by their own state of knowledge about ethnolinguistic patterning of intonational variation.

2 Portions of this data appeared in print in the University of Pennsylvania Working Papers, Selected Papers from NWAV46, as "How black does Obama sound now?: Testing listener judgments of intonation in incrementally manipulated speech."
Intonational variation: Pitch accents
This study focuses on one particular type of intonational variable as a starting point for understanding how listeners may react to ethnically-linked suprasegmental features, using methods based in the autosegmental/metrical (AM) intonational framework (Pierrehumbert, 1980). Essential to the AM theory is the idea that movements in fundamental frequency (F0), the main correlate of what we perceive as pitch, result from an underlying sequence of tones that determines their structure. In the AM theory, these tones are either low or high, and all movements of the pitch contour are composed of a series of low and high sequences. The labeling system for intonational phenomena based on the AM theory is the Tones and Break Indices (ToBI) system. Each language, and indeed a number of dialects and varieties, has a distinct ToBI system that reflects the variety's intonational specifications (Beckman & Ayers-Elam, 1997). The ToBI system for Mainstream American English (MAE), originally developed by Beckman and Ayers-Elam (1997) and based on the findings of Pierrehumbert (1980), is the only ToBI system generally in use for examining variation within American English. MAE-ToBI has previously been used for descriptions of Jewish English (Burdin, 2015), Appalachian English (Reed, 2016), as well as African American Language (AAL) (Holliday, 2016; Jun & Foreman, 1996; McLarty, 2018). MAE-ToBI contains two types of pitch movements: pitch accents, which occur on some stressed syllables, and edge tones, which occur at phrase boundaries. The current study focuses only on the movement of pitch accents, though it is important to note that we also tested for the perceptual effects of edge tones. This study focuses on the difference between two types of pitch accents in MAE: a simple high tone, labeled as H*, and a fall-rise, labeled as L+H*. Though other types of pitch accents exist, H* and L+H* are by far the most common pitch accents in most varieties of U.S. English, including AAL (Burdin et al., 2018).
Earlier studies have shown that pitch accents are perceptually salient for listeners and that naïve listeners can be trained to identify them quickly (McLarty, Vaughn, & Kendall, 2017; Thomas, 2011). Especially relevant to the current study, studies such as Loman (1975), Holliday (2016), and McLarty (2018) have found that MAE and AAL exhibit different rates and contexts of use for H* versus L+H*. In particular, Loman (1975) and McLarty (2018) each found that L+H* pitch accents are more common in some varieties of AAL.
Recent work by Holliday (2016), Burdin (2015), and Reed (2016), inter alia, has also found that a greater rate of use of the L+H* pitch accent may be a resource in production for the performance of different types of ethnic identity. For example, Holliday (2016) recorded 25 men (age 18-32) with one black parent and one white parent in Washington, DC to examine their rates of use of different types of pitch accents in ethnic identity performance. The participants were recorded in casual peer dyad conversations, and the analysis of their intonational patterns was taken from these recordings. A sociolinguistic interview also elicited ideologies about race and self-identifications. The participants who identified more as black, as opposed to multiracial or mixed, were more likely to use a greater quantity of L+H* accents than H* accents. This finding supports Loman's (1975) and McLarty's (2018) findings that L+H* is more prevalent in AAL than in MAE; also relevant for the current study, this finding demonstrates that speakers' production of intonational variation is gradient in terms of frequency.
Incrementality in intonation and perception
This study's focus on intonational and suprasegmental variation presents an opportunity to address questions about phonetic detail and social meaning. One of the most significant recent advances in sociolinguistic theory has been the advent of sociophonetics (e.g., Foulkes & Docherty, 2006), with the notion that paying attention to phonetic detail can enrich our understanding of sociolinguistic variation-especially for variables that have traditionally been considered binary or categorical (Labov, Ash, & Boberg, 2006).
Although the binary treatment of phonetic variables reveals structure in sociolinguistic variation, a sociophonetically informed approach recognizes that the distribution of these variables' continuous acoustic correlates is not always compatible with discrete categorization. For example, Jacewicz and Fox (2018) use a continuous measure of /aɪ/ monophthongization (trajectory length) to analyze preadolescent Appalachian English speakers. They find that these preadolescents produce variants that are more diphthongal than Appalachian adults but less diphthongal than central Ohio adults. The authors' continuous approach pays off, in other words, by revealing finer-grained phonetic variation than is suggested by the monophthong/diphthong binary.
At the same time as research on production in sociolinguistics has increasingly turned to phonetic detail, the role of such detail remains under-theorized and under-investigated in the study of social meaning. To that end, Podesva (2011) proposes a framework for salience in sociolinguistic variation that reconciles the roles of frequency and phonetic detail. He hypothesizes that salience takes one of two linguistic forms: 'categorial salience' (frequent productions of a marked feature are salient) and 'phonetic salience' (more extreme productions are salient). In particular, with respect to phonetic salience, Podesva argues that a more extreme production signals a stronger social meaning: "If an axis of phonetic variation indexes a particular social meaning, then outliers on that axis can be understood as the strongest indicators of meaning" (p. 254, emphasis added).
These predictions about categorial and phonetic salience have been supported by a handful of findings on the distribution and social meaning of intonational variation in production. For example, Podesva (2011) found that one speaker constructed a 'life of the party' persona by using acoustically extreme falling contours to imbue partying-related narrative elements with extra emphasis. Burdin et al.'s (2018) comparison of L+H* pitch accents in Jewish English, AAL, and Appalachian English showed that both categorical and continuous properties of pitch accents are sites for sociolinguistic differentiation. The authors found that, across communities, L+H* pitch accents differed in both rates of use and acoustic properties (e.g., peak F0, peak offset).
As far as we are aware, only a handful of perceptual studies have investigated how social meanings are affected by phonetic detail. Plichta and Preston (2005) presented U.S. listeners with a synthesized continuum from monophthongal to diphthongal /aɪ/ and asked listeners to identify the speaker's geographic origin along an axis running from the U.S. north to the U.S. south. Listeners not only associated monophthongal /aɪ/ with the south and diphthongal /aɪ/ with the north, they also placed successive continuum steps linearly along the north-south axis. D'Onofrio (2018) found that labeling a speaker as a 'Business Professional' or 'Valley Girl' cued U.S. listeners to classify more ambiguous [ae~ɑ] tokens as /ae/, with 'Valley Girl' being especially associated with backer /ae/, compared to a 'Chicago Bears Fan' label or no label at all. In an experiment with Californian listeners, Villarreal (2016) found significant correlations between speakers' raising of /ae/ in bad and glass and listeners' ratings on the scales 'accented,' 'doesn't speak like me,' 'unfamiliar,' and 'not Californian.' Foulkes, Docherty, Khattab, and Yaeger-Dror (2010) found that listeners' identification of Tyneside children's gender was affected by two continuous measures (amplitude and F0) as well as several categorical measures; however, the authors also report significant correlations between amplitude and F0 in stimuli, suggesting potential issues with collinearity in the modeling procedure. In terms of voice quality, Szakay (2012) found that in New Zealand, ethnic identification was affected by several continuous voice quality measures; speakers with higher mean H1-H2 (a measure of creakiness) were likelier to be identified as Māori.
The present study seeks to expand our understanding of the relationship between phonetic detail and social meaning by investigating this relationship through the lens of intonational variation. Building on Podesva (2011), we hypothesize that the social meanings of continuous variables will exhibit what we call incrementality: a monotonic relationship between the variable's phonetic extremeness and the strength of the social meaning it elicits in perceivers. We focus on pitch accents, which are ideally suited to this question as they vary both in category (e.g., H* versus L+H*) and phonetic shape (e.g., peak offset, rise slope).
Methods
This study was designed to address three central research questions:

1. How do pitch accents affect listener judgments of ethnic identity? In particular, does the L+H* pitch accent carry a social meaning of blackness in perception, as it does in production?
2. To what extent are the ethnicity-based social meanings of these pitch accents mediated by incremental phonetic differences?
3. What other aspects of voice quality affect listener judgments of ethnicity?
These questions were investigated via a perceptual task in which listeners rated 120 samples of President Barack Obama's speech with respect to how much they thought he 'sounded black' in each particular sample.
Open-guise versus matched-guise technique
This task used the 'open-guise technique' (OGT) (Soukup, 2013); as in the more common matched-guise technique (MGT), OGTs offer insight into the social meanings of a focal feature, variety, or language, by comparing listeners' reactions to stimuli differing only by the focal linguistic structure (e.g., Campbell-Kibler, 2009). Unlike the OGT, the MGT axiomatically hinges on listeners' belief that they are listening to different speakers (Giles & Billings, 2004;Purnell et al., 1999); otherwise, it is assumed that listeners will not differentiate guises on personal characteristics that are considered intrapersonally stable qualities (e.g., intelligence). In OGTs, by contrast, listeners are openly informed that they are hearing the same speaker in different guises. Soukup (2013) shows that listeners responded differently to standard versus dialectal Austrian German guises in both OGT and MGT settings (with the OGT actually yielding stronger effects for some scales), undermining MGTs' key assumption about different speakers.
In the present study, we assumed that listeners (all from the United States) were highly likely to recognize our stimulus speaker, President Barack Obama, necessitating an OGT rather than MGT approach. We openly informed our listeners, "This study is designed to test how people respond to different speech excerpts from the same speaker." In so doing, we rejected the type of instrumental task framing often used in MGTs, such as evaluating prospective radio newsreaders (Labov et al., 2011;Villarreal, 2018). By contrast, Obama represented an ideal stimulus speaker to test our hypotheses, as his ability to command both AAL and MAE is well-known by the general public (Alim & Smitherman, 2012); our use of the OGT took advantage of this awareness. In the discussion, we make recommendations about the appropriateness of OGT versus MGT.
Stimulus creation
The 120 stimuli were based on excerpts of President Barack Obama's spontaneous speech from two different 2016 television interviews with Gayle King, a black broadcast journalist who co-anchors the CBS This Morning news program (Kaplan, 2016). Each stimulus excerpt was based on a single Intonational Phrase (IP) unit, ranging from 0.4 to 2.3 seconds in duration (median 0.9 seconds). Following Pierrehumbert and Hirschberg (1990), as well as subsequent works utilizing their methods, we identified IPs by looking for pausing and phrase-final lengthening, as well as the presence of characteristic boundary tones and smaller intermediate phrase units contained within the IPs. We attempted to select short phrases that were fairly semantically bland to avoid overly tilting responses in one direction, though it is impossible to completely control for content in listening tasks.
Sixty excerpts were selected: 20 critical excerpts and 40 filler excerpts. Ten critical excerpts were 'H* phrases,' which contained 1-3 H* pitch accents and no L+H* accents; ten were 'L+H* phrases,' which contained 1-3 L+H* pitch accents and 0-2 H* accents. This imbalanced definition of H* versus L+H* phrases was necessary since L+H* pitch accents are relatively rarer, even in AAL (Burdin, Holliday, & Reed, 2018), so it was not possible to find enough excerpts that contained only L+H* accents. Filler excerpts contained 1-3 H* pitch accents and no L+H* accents.
In choosing excerpts, we intentionally sacrificed a degree of experimental control for the sake of presenting listeners with natural, spontaneously produced stimuli rather than unnatural, lab-like speech. The benefit of using spontaneous stimuli is that it more closely models real-world perception conditions, as listeners perceive spontaneous and read speech (including oratory) differently (Campbell-Kibler, 2009; Holliday & Jaggers, 2015). The drawback is that the distribution of H* versus L+H* pitch accents across stimuli prevented us from addressing Podesva's (2011) hypothesis about categorial salience; at the same time, total experimental control over stimuli is impossible to obtain, as features co-occurring in the stimuli can always shape interpretation of features of interest (Leach, Watson, & Gnevsheva, 2016), including propositional content (Campbell-Kibler, 2009). (We explore this issue further in a post hoc analysis of L+H* phrases.) The critical stimuli were created by manipulating critical excerpts to four manipulation steps, with the original excerpt as Step 1. Steps 2, 3, and 4 were created by making pitch accents' F0 minima and maxima successively more extreme. With each manipulation step, H* and L+H* maxima were increased by a semitone, and L+H* minima were decreased by a half-semitone. For example, the H* pitch accent in the top panel of Figure 1 has an F0 maximum at 118.3 Hz in step 2 and 125.2 Hz in step 3, a one-semitone difference; the L+H* pitch accent in the bottom panel has an F0 minimum at 101.6 Hz in step 1 and 99.1 Hz in step 2, a half-semitone difference. (In some cases, it was not possible to make the manipulations exactly one or one-half semitone.) We based F0 manipulations on semitones rather than constant magnitudes because semitones are psychoacoustically comparable regardless of the pitch accent's initial F0 (e.g., the difference between 100 and 105 Hz sounds much larger than the difference between 200 and 205 Hz). The first author created stimuli by hand using the Manipulation utility in Praat (Boersma & Weenink, 2015). Both authors listened to all manipulated critical stimuli and confirmed that they sounded natural.
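The semitone scaling has a simple closed form, f' = f · 2^(n/12); a minimal sketch reproducing the example targets above (the actual manipulations were done by hand in Praat's Manipulation utility):

```python
def shift_semitones(f0_hz, n_semitones):
    """Scale an F0 value by a signed number of semitones."""
    return f0_hz * 2.0 ** (n_semitones / 12.0)

# Step-2 H* maximum raised by one semitone for step 3:
print(shift_semitones(118.3, 1))     # ~125.3 Hz (cf. 125.2 Hz in the text)

# Step-1 L+H* minimum lowered by a half-semitone for step 2:
print(shift_semitones(101.6, -0.5))  # ~98.7 Hz (cf. 99.1 Hz in the text)
```

As the comments note, the realized values differ slightly from the exact targets, matching the caveat above.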
Filler stimuli were created by modifying the final syllable of filler excerpts to include percepts of creaky voice: low F0 and damped pulses (Keating, Garellek, & Kreiman, 2015). A Praat script modified alternating cycles of the final syllable by lengthening their duration and lowering their amplitude. As with critical stimuli, both authors listened to all manipulated filler stimuli and confirmed that they sounded natural.
Task design
The task was administered via an online survey hosted by Qualtrics. In each of 120 randomly ordered trials, listeners heard a single stimulus auto-play twice and responded to the question "How black or white does Obama sound here?" on a continuous unit-less slider bar with "very black" and "very white" on opposite poles. As explained above, the recognizability of President Obama's voice would likely have rendered ineffective the type of instrumental task framing often used in MGTs (e.g., rating prospective radio newsreaders, as in Labov et al., 2011), so we instead informed listeners, "This study is designed to test how people respond to different speech excerpts from the same speaker." Listeners then completed a demographic questionnaire and were invited to comment on the task (see Appendix A).
The survey was distributed via social network sampling in May 2017, with a raffle incentive for one randomly selected listener to win an Amazon.com gift card. The listener sample for analysis contains 79 American English-speaking listeners who self-identified as black and/or white. Of these listeners, 24% self-identified as black and 77% as white (one listener identified as both); 65% identified as female and 35% as male. The majority of listeners also self-identified as politically liberal and indicated that they overwhelmingly approved of Obama's presidency; in particular, on a 1-7 scale (where 7 indicated "very liberal" and "strongly approve of Obama"), the median rating was 6 on both scales, and 91% of listeners rated 5 or above on both scales. In this respect, the listener sample is not representative of the United States voting population; however, our intent was not to survey a sample spanning the political spectrum but rather to determine how some listeners judge ethnicity based on intonational and voice quality variation (we return to this point in the Discussion).
As mentioned above, both authors listened to all stimuli and confirmed that they sounded natural. As a further check on stimulus naturalness, we coded listeners' responses to the final two questionnaire items: "How did the clips sound to you?" and "Do you have any other comments on the clips or on the survey?" Based on listeners' responses to these questions, the second author developed eight true-or-false codes that described sentiments listeners expressed in their responses and coded responses accordingly (with a single response capable of being coded "true" in multiple categories). For example, 21% of listeners reported something amiss with the quality of the clips (although numerous listeners commented positively about the clips' quality). More information about these codes, including examples, can be found in Appendix B. As we discuss below, however, none of these codes significantly improved our model of intonation results, so we did not find evidence that they impacted listeners' perceptions of the speaker's blackness. Slider-bar positions were converted to real numbers between 0 ("very white") and 100 ("very black") and standardized by listener to control for variable usage of the continuous slider bar. All results are reported in unit-less standard deviations (i.e., z-scores); the average listener's standard deviation was 16.6, so a difference of 1 standard deviation can be interpreted as a difference of roughly one-sixth of the length of the slider bar for the average listener.
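The by-listener standardization can be sketched as follows; the data frame and column names are assumptions for illustration:

```python
import pandas as pd

# 'ratings' is assumed to have one row per trial, with a 'listener' ID
# and a 0-100 'rating' from the slider bar.
ratings["rating_z"] = (
    ratings.groupby("listener")["rating"]
           .transform(lambda r: (r - r.mean()) / r.std())
)
```

With the average listener's raw standard deviation at 16.6 slider units, one standardized unit corresponds to roughly one-sixth of the bar, as noted above.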
Our task was specifically designed to address the first two research questions, about the role of pitch accents and phonetic incrementality in affecting listener judgments of ethnicity; we first present the analysis of intonation features. We then describe a post hoc analysis of voice quality characteristics that addressed the third research question, about the role of other voice quality features in affecting listener judgments of ethnicity.
Intonation analysis
We compared linear mixed-effects models of standardized ratings to find the predictor structure that best modeled the data in critical trials, via the lmerTest package for R (Kuznetsova, Brockhoff, & Christensen, 2016; R Core Team, 2018). The predictors that we tested were phrase type (H* versus L+H*), manipulation step, edge tone, nuclear pitch accent, stimulus duration, and numerous listener effects (race, gender, political ideology, approval of Obama's presidency, education, use of desktop versus mobile to complete survey, hometown, geographic mobility, experience with linguistics, musical experience, and qualitative questionnaire codes). Unfortunately, the distribution of H* and L+H* tokens in stimuli precluded predictors for the number of H* and number of L+H* pitch accents in critical trials. We also included random intercepts for excerpts as nested within phrase type, as each excerpt exclusively belonged to one of the two phrase types. Since ratings were standardized by listener, by-listener random intercepts would be redundant.

Table 1 presents a summary of the best model for listener ratings of blackness, which included predictors of phrase type, manipulation step, and their interactions. As is evident from this model, listener ratings of blackness tended to increase with the more extreme step manipulations, though this is only statistically significant for L+H* phrases. Also notable is that the model revealed no significant listener effects for gender, race, region, education, or political affiliation, indicating that listeners were remarkably similar in their ratings regardless of a number of potentially influential demographic factors. While previous studies have generally found that in-group community members may perform better in ethnic identification tasks (cf. Thomas & Reaser, 2004), there were no such effects observed here. In addition, none of the qualitative questionnaire codes significantly improved the model; this means that, for example, although some listeners commented negatively on the quality of the stimuli, we have no evidence that whether or not listeners commented on stimulus quality affected listener perceptions of blackness.
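A minimal sketch of the structure of this best-fitting model, using Python's statsmodels as a stand-in for the lmerTest fit reported here (the toy data and column names are assumptions; the R formula would be roughly rating_z ~ PhrType * Step + (1 | Excerpt)):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Toy stand-in for the critical trials: 20 excerpts, 10 per phrase type.
trials = pd.DataFrame({"excerpt": rng.integers(0, 20, 2000),
                       "step": rng.integers(1, 5, 2000)})
trials["phrase_type"] = np.where(trials["excerpt"] < 10, "H*", "L+H*")
trials["rating_z"] = rng.normal(size=len(trials))

# Random intercept per excerpt; since each excerpt belongs to exactly one
# phrase type, grouping by excerpt captures the nesting described above.
md = smf.mixedlm("rating_z ~ phrase_type * C(step)",
                 data=trials, groups="excerpt")
print(md.fit().summary())
```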
These results must be interpreted with caution, however, in light of their small effect size. The sole significant term in Table 1, PhrTypeL+H*:Step3 (which has a p value just under the predetermined α level of 0.05), differs from the intercept by about 0.17 standard deviations, or about 3 'notches' on the 0-100 slider bar (with the average listener's standard deviation being 16.6). Indeed, an R² calculation using the R package piecewiseSEM (Lefcheck, 2016) revealed that the model's fixed-effects predictor structure accounted for less than 1% of the variance in ratings, while random effects (the effect of individual excerpts) accounted for 11.3% of the variance. 5 With that caveat in mind, we proceed to discuss what these results mean.
Results by phrase type
The model indicated no main effect of phrase type on listener ratings of blackness, indicating that pitch accent type alone did not trigger different blackness ratings. Figure 2 shows this result, with results for H* stimuli in the left panel and L+H* stimuli in the right panel. As is evident in this figure, listener ratings of blackness were remarkably similar for the H* phrases at each step, though the L+H* phrases showed greater differences between manipulation steps. There was also no main effect of manipulation step on listener ratings of blackness.

5 An anonymous reviewer expresses doubt that this significant effect "would reliably reappear as an important factor" in a replication of the present study; we agree that this is an empirical question.
Results by manipulation step
Though the main effect of phrase type failed to reach significance, the model indicated a significant interaction between phrase type and manipulation step, with more extreme L+H* phrases rated as sounding blacker than less extreme L+H* phrases, and no perceived blackness difference for H* phrases regardless of step. Figure 3 presents these results, with each panel representing a manipulation step. This figure indicates that listeners appear to interpret the more phonetically extreme L+H* realizations (greater difference between F0 minimum and F0 maximum within an L+H* pitch accent) as blacker, but this is not the case for the more extreme H* realizations (which only had higher F0 maxima). This model also implies that listener judgments of blackness are affected by more than just pitch accent type and phonetic shape. As mentioned above, these results must be interpreted with caution, especially in light of the fact that the model's fixed-effects predictor structure accounted for less than 1% of the variance in ratings, while random effects (the effect of individual excerpts) accounted for 11.3% of the variance. In other words, listeners were much more attuned to features varying by excerpt, such as segmental, semantic, pragmatic, or voice quality characteristics, than to the type and phonetic shape of pitch accents. However, this small effect size may represent an inherent challenge to studies of prosody, since the highly nested nature of such variables causes them to be difficult to isolate from one another. Despite this challenge, the finding of a significant difference here may be a step in the direction of discovering how these variables may operate both independently and together. The small effect size of this intonation effect motivated the post hoc analysis of voice quality features.
Voice quality analysis
Our perceptual experiment was specifically designed to test predictions about how listener judgments of ethnicity are influenced by the type and phonetic shape of pitch accents; however, sociophoneticians have long suspected that voice quality characteristics may also influence listener judgments of ethnicity (e.g., Holliday & Jaggers, 2015; Purnell et al., 1999). In line with Purnell et al. (1999), we conducted a post hoc analysis of perceived blackness ratings to determine if and how a number of voice quality measures were influential in shaping listener judgments. In particular, the results of their study indicate dialect-level differences in both harmonics-to-noise ratio (HNR) and peak pitch ratio, so we hypothesized that these same variables may also be of interest in the current study.
We ran a Praat script on critical stimuli to extract several measures that, according to previous studies, may pattern differently in AAL versus MAE: phrase speech rate, pitch ratio (Holliday & Jaggers, 2015), peak delay (Holliday, 2016; Reed, 2016), jitter (Holliday & Jaggers, 2015), shimmer (ibid.), HNR (Purnell et al., 1999), and intensity average (ibid.). 6 Phrase speech rate was calculated as the stimulus's duration divided by the number of syllables. Pitch ratio was calculated as the stimulus's maximum F0 (in Hz) divided by its minimum F0. The remaining measures were calculated for each pitch accent in each stimulus; since listeners reacted not to individual PAs but to whole stimuli, for stimuli with multiple PAs we treated the mean of each PA's measurement as the measurement for that stimulus (e.g., we defined the jitter measurement for a stimulus with three PAs as the mean of the PAs' jitter measurements). Peak delay was calculated as the time difference between nucleus onset and hand-annotated pitch accent time. Jitter (relative average perturbation), shimmer (local amplitude perturbation), HNR, and intensity average (mean dB) were all calculated for the nucleus. An F0 floor of 75 Hz was used for all relevant measures in order to avoid erroneous measurements of non-periodic speech; we otherwise used Praat's default settings for all measurement functions.
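To make these derived measures concrete, the sketch below recomputes the stimulus-level summaries from per-stimulus and per-pitch-accent measurements. This is an illustration with hypothetical values and column names, not the authors' actual Praat script.

```r
# Hypothetical per-pitch-accent (PA) and per-stimulus measurements
pa <- data.frame(
  stimulus   = c("s1", "s1", "s1", "s2"),
  peak_delay = c(0.08, 0.11, 0.09, 0.12),    # s from nucleus onset to PA peak
  jitter     = c(0.011, 0.013, 0.010, 0.015) # relative average perturbation
)
stim <- data.frame(
  stimulus    = c("s1", "s2"),
  duration    = c(2.4, 1.8),   # s
  n_syllables = c(12, 9),
  f0_max      = c(210, 190),   # Hz
  f0_min      = c(95, 100)     # Hz
)

# Phrase speech rate: duration divided by the number of syllables
stim$speech_rate <- stim$duration / stim$n_syllables
# Pitch ratio: maximum F0 divided by minimum F0
stim$pitch_ratio <- stim$f0_max / stim$f0_min
# Per-PA measures are averaged within each stimulus, so a stimulus with
# three PAs gets the mean of its PAs' measurements
agg  <- aggregate(cbind(peak_delay, jitter) ~ stimulus, data = pa, FUN = mean)
stim <- merge(stim, agg, by = "stimulus")
```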
As with the intonation analysis, we modeled standardized ratings via linear mixed-effects models. Because the intonation analysis revealed differences in patterning of responses to H* versus L+H* stimuli, we fit separate models to H* versus L+H* critical trials. We included manipulation step in these models to determine whether the intonation analysis's findings about the role of manipulation step (significantly affecting listener ratings of blackness in L+H* stimuli but not H* stimuli) remained after considering voice quality features. These models also included random intercepts for excerpts and random by-excerpt slopes for the manipulation step factor. Voice quality measures were normalized (z-scored) to account for widely differing measurement scales.
To account for likely collinearity of voice quality measures (e.g., jitter and pitch ratio both index changes in fundamental frequency), we adopted a model-comparison strategy that iteratively added interaction terms to the models based on correlations between measures. We first ran baseline models that included all voice quality measures as main effect predictors with no interactions. (Again, these models also included random intercepts for excerpts and random by-excerpt slopes for the manipulation step factor.) We then checked these baseline models for correlations between voice quality measures; any correlations with an absolute value correlation coefficient greater than 0.4 in either model were added as interaction terms into both models. After running these models, we again added interaction terms (including three-way interactions) based on correlations between voice quality terms. The resulting models included the following interactions: phrase speech rate × peak delay × HNR, shimmer × jitter × HNR, and pitch ratio × intensity average. For both the H* and L+H* models, each successive model represented a significant improvement in model fit at an α = .05 significance threshold.
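The following sketch shows what this iterative model-building might look like in lme4 syntax. The data frame, variable names, and simulated values are placeholders (the paper does not publish its code), and the screening step is shown for one set of trials only.

```r
library(lme4)

# Simulated stand-in for the H* critical trials (all measures z-scored)
set.seed(2)
n <- 480
h_star <- data.frame(
  excerpt = factor(sample(1:8, n, replace = TRUE)),
  step    = sample(1:4, n, replace = TRUE),
  speech_rate = rnorm(n), pitch_ratio = rnorm(n), peak_delay = rnorm(n),
  jitter = rnorm(n), shimmer = rnorm(n), hnr = rnorm(n), intensity = rnorm(n),
  rating_z = rnorm(n)
)

# Baseline model: all voice quality measures as main effects, plus
# manipulation step, with by-excerpt random intercepts and step slopes
m0 <- lmer(rating_z ~ step + speech_rate + pitch_ratio + peak_delay + jitter +
             shimmer + hnr + intensity + (1 + step | excerpt), data = h_star)

# Screen for collinear measure pairs (|r| > 0.4 in either model) ...
vq <- c("speech_rate", "pitch_ratio", "peak_delay",
        "jitter", "shimmer", "hnr", "intensity")
round(cor(h_star[, vq]), 2)

# ... then add the corresponding interactions, up to three-way terms
m1 <- update(m0, . ~ . + speech_rate:peak_delay:hnr +
               shimmer:jitter:hnr + pitch_ratio:intensity)

# Each successive model was checked for improved fit at alpha = .05
anova(m0, m1)
```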
Voice quality results
Summaries of fixed effects for the voice quality models are in Appendix C. The voice quality model for H* critical trials revealed that few voice quality measures significantly affected listener perceptions of blackness: phrase speech rate and the interaction of peak delay and HNR. Phrase speech rate (seconds per syllable) had a positive effect on listener perceptions of blackness, with slower phrases rated blacker. While the model returned positive estimates for the effects of peak delay and HNR, neither of these main effects reached significance. Rather, the effect of peak delay on listener perceptions of blackness was constrained by the phrase's HNR. As Figure 4 shows, phrases with longer peak delay were rated blacker, but only if HNR was sufficiently high. This co-patterning of variables suggests that listeners may be attuning to a threshold of combined characteristics in order to make judgments, particularly in the absence of ethnolinguistically salient intonational differences such as the L+H* pitch accent. Together, the roles of peak delay and phrase speech rate suggest a possible salience effect, as both relate to vowel duration; conceivably, longer vowels (which co-pattern with slower speech rates) with longer intonation rises can better carry indexes of social meaning.7
7 Thanks to an editor for pointing this out.
Figure 4: H* model predictions for perceived blackness ratings by peak delay (seconds) and HNR (dB). The five facets display peak delay slopes at the minimum, first quartile, median, third quartile, and maximum values for HNR among H* stimuli.
As with the H* model, few predictors reached significance in the L+H* model, including just one voice quality measure: jitter. Among L+H* stimuli, phrases with less jitter were rated blacker, suggesting that listeners are sensitive to the interaction of F0 movement and local periodic perturbations. Notably, the measures affecting listener perceptions of blackness did not overlap for H* versus L+H* phrases; the jitter term in the H* model, and the phrase speech rate and peak delay × HNR terms in the L+H* model, did not even approach significance. This finding provides additional evidence that listeners may respond to different intonation and voice quality cues in phrases containing L+H* pitch accents than those not containing L+H* pitch accents. As L+H* accents are far less common than H* accents, it is possible that L+H* accents cue listeners to adjust their expectations as to markers of ethnic identification.
In addition, manipulation step was significant in the L+H* voice quality model (manipulation steps 3 and 4 were rated blacker than steps 1 and 2) but not the H* voice quality model. This finding corroborates the generalization that the percept of blackness is subject to phonetic incrementality only with respect to the more socially marked L+H* pitch accent. However, this finding is tempered by the fact that the fixed effects in the L+H* model accounted for less than 3% of the variance, as compared with 10.2% for the fixed effects in the H* model (Table 2); in other words, there remain properties of the stimuli with L+H* accents that listeners are reacting to, above and beyond the intonational and voice quality features that previous studies have suggested are implicated in differentiating AAL from MAE.
In short, the voice quality analysis found that listeners relied on multiple acoustic cues (beyond those pertaining to pitch accents' type or phonetic shape) in making judgments of perceived blackness; crucially, in the presence of an L+H* pitch accent listeners not only relied on different voice quality cues than in the absence of one, but they apparently relied to a much greater degree on cues other than those relating to intonation or voice quality. This finding suggests a fundamental difference in how listeners judge phrases in the presence of an L+H* pitch accent, although this is an open question for future study. More broadly, this finding further supports the claim that understanding the interrelated nature of prosodic variables is a necessary part of their description.
Discussion
To summarize, this study has demonstrated that listeners are sensitive to the details of phonetic realizations of the H* and L+H* pitch accents in declaratives, and that a larger difference between the F0 maximum and minimum within L+H* pitch accents appears to cause listeners to rate a speaker (in this case, President Barack Obama) as sounding blacker. However, the difference between H* and L+H* pitch accent phrases alone is not sufficient to trigger this judgment; it is the actual realization of the pitch accents themselves that listeners seem to attune to. In addition to pitch accent type and phonetic shape, listeners also attend to voice quality cues in judging blackness, though the relevant cues are different for H* versus L+H* phrases: speech rate, peak delay, and harmonics to noise ratio for H* phrases, jitter for L+H* phrases. There is also some evidence that the number of L+H* and H* pitch accents in a phrase affects listener judgments of blackness. We also obtained an unexpected finding with respect to speech rate; among H* stimuli, slower phrases were perceived as blacker than faster phrases, which could possibly indicate that listeners have different expectations related to ethnolinguistic variation and speech rate (Kendall, 2013) or that longer vowels provide a greater site for the apprehension of social meaning. In this section we discuss the implications of these findings in more depth.
Intonation
This study's results show that in a perception task, listeners appear to be sensitive not only to the phonological category of pitch accents, but also to their phonetic realization, responding to increasingly extreme manipulations of F0 within a single pitch accent type. In the traditional AM model of intonational phonology, pitch accents and edge tones have largely been binned into discrete categories, with meaning presumed to be attached to those categories and their combinations (Pierrehumbert & Hirschberg, 1990). The results presented here provide further motivation for considering intonational variation on a phonetic as well as a phonological level. This study also provides further motivation for the development of ethnolinguistic variety-specific ToBI models as well as phonetic methods for studying intonational variation cross-dialectically. While we have employed the MAE-ToBI conventions (Beckman & Ayers-Elam, 1997) in this study, the nature of the intonational system of AAL has not yet been fully described (McLarty, 2018; Thomas, 2015). As the current study's results provide evidence that listeners are sensitive to differences in the realization of F0 and timing of the L+H* pitch accent, future studies should examine whether the tonal inventory of AAL differs from MAE, as this could be one element that triggers the observed differences in listener judgments.
Relatedly, as much of the work on prosody has focused on the meaning of intonational contours in an imagined Standard American English as opposed to in specific varieties, it is clear that much more work is needed on both variation in speaker production and listener perception of contour meaning. Though the current study did not reveal differences in perception of 'sounding black' conditioned by listener demographics, future work should explore how such perceptions could potentially be affected by listeners with different backgrounds and sociolinguistic experiences.
This point about the role of demographics is especially relevant because (as mentioned above) the listener sample was overwhelmingly liberal and approving of Obama's presidency, more so than the US population at large. While this is not an issue for the present study-our aim was not to achieve political representativeness but rather to ascertain how intonational variation affected perceptions of blackness within a population of US listeners-it does contextualize the results. Theoretical frameworks that take as primary the role of experience in forming linguistic representations (e.g., Exemplar Theory, Pierrehumbert, 2016) would take the standpoint that listeners who are more inclined to listen to President Obama would have a greater opportunity to hear him in multiple situations, thus facilitating their awareness of his style-shifting; it is thus conceivable that the small-sized effects uncovered in this study would not reach significance in a sample more representative of the US political spectrum. This is a question open for future work to address.
Ethnic identification
In their 2004 study and summary of the body of research on ethnic identification of white and black speakers in the U.S., Thomas and Reaser reveal gaps in our knowledge about what triggers judgments of speakers as 'black' or 'white.' Most ethnic identification studies have focused on segmental features, at least in part due to the fact that so little is known about how non-standard varieties of American English employ intonational variation, though such studies on prosodic variables have been carried out outside the U.S. (Szakay, 2012; Todd, 2002). While a number of segmental features, such as vowel quality, have been identified as important in triggering listener judgments, researchers still know relatively little about how suprasegmental features may contribute to these judgments. Dating back to the 1970s, researchers such as Tarone (1973) and Loman (1975) have suspected that suprasegmental features played a serious role in triggering these judgments, though few studies have been able to isolate the specific intonational and suprasegmental features involved. The results of the current study, especially those related to the fact that listeners are able to provide consistent judgments of how a speaker whose race is known to them adheres to their ideologies about what it means to 'sound black,' provide evidence that it may be possible to isolate the variables of interest using a single-speaker model. This has the advantage of eliminating other types of variation that are inherent in studies with multiple speakers, for whom it is impossible to control every level of linguistic variation, which may be important especially in light of our findings on the effects of voice quality.
This study also builds on the findings of Purnell et al. (1999), Thomas and Reaser (2004), and Holliday and Jaggers (2015) by providing further evidence that a number of voice quality features, including jitter, HNR, and speech rate, may be involved in triggering ethnicity judgments. The pattern that we observed, wherein there appear to be important interactions of intonational and voice quality features, underscores the need for more controlled studies that simultaneously focus on a number of suprasegmental features. Listeners appear to be sensitive not only to intonational and voice quality features but also to the ways in which they combine to create sociolinguistic meaning.
It is worth reiterating here the small effect size that we found in our intonation model, in which fixed effects accounted for just 1% of the variance in listener ratings of blackness. Some readers may interpret this small effect size and the proximity of the sole significant intonational model term's p value (0.0434) to our predetermined α level (0.05) as casting doubt upon the generality of the result. Although this effect size is modest, it is not without precedent in studies of sociolinguistic perception. Clopper (2010, p. 212), describing Clopper and Pisoni's (2007) study of free classification of regional dialects of American English, notes "grouping accuracy was still rather poor overall, which may indicate attention to talker-specific differences instead of dialect-specific variation." This greater attention to talker-specific differences parallels our finding that random effects accounted for a much greater percentage of the intonation model's variance. Likewise, Villarreal (2018) found that out of 12 rating scales, a vocalic guise manipulation yielded only three significant differences, compared to eight significant differences for both speaker region and speaker gender and eleven significant differences for speaker ethnicity. In other words, while the effect revealed by the intonation model is modest, it is possible that this is a general property of phonetic guise manipulations, as well as an artifact of the interconnected nature of suprasegmental features in general and the resulting challenges in isolating them from one another.
Incrementality
These findings support the notion that listeners attend to phonetic detail in constructing social meanings of sociophonetic variation, given that listener ratings of blackness for L+H* phrases increased stepwise as L+H* pitch accents became more phonetically extreme. In other words, there is some evidence that listeners map continuous social meanings to continuous variation, supporting our incrementality hypothesis; contra Podesva's (2011) phonetic salience hypothesis, these findings suggest that greater social meanings are not only attached to phonetic outliers, but also to phonetically intermediate realizations of L+H* pitch accents. This research also sheds light on how phonetic salience works in context. While the intonation analysis found a jump between manipulation steps 2 and 3 in listener ratings of blackness for L+H* phrases, the analysis of the L+H* voice quality model's random effects found considerable differences in step 1 ratings across L+H* phrases. That is, for some stimuli smaller differences in intonation were sufficient to trigger higher listener ratings of blackness; for others, listener ratings of blackness only increased with larger differences in intonation. Thus, just as context shapes the social meaning of a variant's presence or absence (Campbell-Kibler, 2009; Gumperz, 1982; Leach et al., 2016; Pharao, Maegaard, Møller, & Kristiansen, 2014), context also shapes the way that phonetic detail affects social meanings.
Open-guise versus matched-guise technique
These findings expand our understanding of methods for probing language attitudes, countering the received wisdom in MGT research that these tasks only work if listeners believe they are judging different speakers (Giles & Billings, 2004). This work expands on the findings of Soukup (2013) in demonstrating additional support for the OGT: Listeners were aware that they were hearing the same speaker, but the guise manipulation nevertheless yielded a difference in listener responses. Soukup (2013) finds that the OGT yielded larger effects than the MGT on 'superiority' scales (Zahn & Hopper, 1985), while the MGT yielded larger effects on 'social attractiveness' scales. However, her comparison did not address socio-indexical traits like ethnicity that fall outside the superiority-versus-social-attractiveness rubric, but which nevertheless form an important part of listeners' awareness of language variation (e.g., Hay & Drager, 2010; Koops, Gentry, & Pantos, 2008; Niedzielski, 1999). Although it is impossible to determine how the results of this study would compare to a hypothetical companion MGT (as the MGT simply wouldn't work with such a recognizable stimulus speaker), and although the small effect size we found suggests that a hypothetical companion MGT could yield larger effects, the present study indicates that a socio-indexical trait, ethnicity, can also work in an OGT context.
Moreover, whereas stimulus speakers in typical MGTs are anonymous to listeners, representing blank attitudinal canvases save for small bits of contextual information provided via stimulus text and/or explicit labels, listeners in this study likely had salient prior impressions of President Obama and his racialized speech. The finding that the guise manipulation affected listener perceptions of Obama's blackness is even more persuasive against that backdrop. Indeed, among the qualitative questionnaire codes that failed to significantly improve the model was ObamaIsBlack (see Appendix B); that is, we found no evidence that listener perceptions of blackness were affected by whether listeners found it difficult to rate Obama as 'sounding white.'
Although the OGT worked in the present study, we caution readers against the assumption that the OGT will necessarily apply to any context, feature, or trait. First, while both Soukup's study and the present study intentionally violated the assumption that listeners should believe they are judging different speakers, in both studies listeners were not told which feature was manipulated; we argue that this remains an important element of methodological opacity in speaker evaluation tasks. It is likely that informing listeners of the manipulated feature would produce rather different results, especially for those few sociolinguistic variables that attract public commentary. Indeed, only 20% of the listeners in the current study reported that they could detect the guise manipulation (DetectManip, Appendix B), and this factor failed to significantly improve the model; this is helped by the fact that, aside from high rising terminal (Tyler, 2015), intonational variation is generally not a subject of public commentary in American English.
Second, we argue that there remain contexts in which it is important to conceal the fact that the same speaker is behind both or all guises. While the majority of speaker evaluation tasks involve cognitive and/or affective responses, we predict that tasks involving behavioral responses (e.g., making a hiring decision) are likelier to hinge on listeners believing they are hearing different speakers. For example, if the landlords in Purnell et al. (1999) knew they were hearing John Baugh in multiple guises, they might have been on their 'best behavior' to avoid prosecution under the Fair Housing Act.
Third, we argue that the use of an OGT rather than MGT approach must be justified by a plausible style-shifting context. For example, this task relied on listeners' awareness of President Obama's style-shifting to sound more black in some contexts and less black in others (Alim & Smitherman, 2012); as mentioned above, it is conceivable that listeners' awareness in this respect was facilitated by their generally positive attitude toward Obama's presidency making them more likely to hear Obama's public speaking. In a similar justification of a plausible style-shifting context, Soukup (2013) relied on her observation that speakers routinely shift between standard and dialectal Austrian German in stylistic practice. If a speaker evaluation task involves styles that do not coexist in stylistic practice 'in the wild' (e.g., the same speaker commanding both an L1 and an L2 accent), the OGT is not likely to work.
Caveats about the OGT notwithstanding, it is clear that traditional approaches to linguistic perception do not give listeners enough credit for being aware of style-shifting; indeed, explicit public awareness of style-shifting (e.g., Meraji, 2013) indicates that listeners may be willing to accept reacting to the same speaker using different features, styles, or languages. Future research should explore the extent to which style-shifting itself, not just the individual styles involved in shifting, affects listeners' judgments of speakers.
Conclusion
The current study examined listener ratings of phonetically manipulated speech to test whether listeners were sensitive to such manipulations in the process of making judgments about speaker ethnicity. Regression models indicated that listeners systematically judged a familiar speaker as 'sounding blacker' when exposed to more extreme F0 manipulations of both the peak and valley of L+H* pitch accents. This effect was characterized by incrementality, with more extreme L+H* pitch accents mapping to greater perceptions of blackness, albeit with an effect size that suggests caution in generalizing these results. Results of post hoc testing also reveal that a number of voice quality features appear to be involved in these judgments. In particular, speech rate, peak delay, HNR, and jitter also appear to influence listener judgments, though the salience of voice quality features may be mediated by the presence versus absence of L+H* pitch accents.
These results have important implications for future work examining intonational variation from a formal perspective as well as for sociophonetic studies on ethnic identification. The finding that listeners seem to attune differently to H* versus L+H* pitch accents in ethnicity judgments and that these perceptions are influenced by phonetic factors provides further motivation for studies that examine intonation from both a phonological and a phonetic perspective. Additionally, the finding that listener perceptions of ethnicity may be manipulated by alterations in F0 provides important context for studies that aim to isolate the phonetic features that may trigger listener judgments of ethnicity. This is especially important given the large body of work on linguistic profiling and discrimination and may provide additional resources for linguists who aim to describe and address racial inequality. Finally, these results indicate that listeners' sociolinguistic perceptions are sensitive to the magnitude of the input, a finding that points to promising directions for research in language attitudes and sociolinguistic cognition. | 2020-04-02T09:14:24.159Z | 2020-04-01T00:00:00.000 | {
"year": 2020,
"sha1": "d5a30fef4a17acdcb256c4d3fc9ae8a07e5c6d8a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5334/labphon.229",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d2acfaeeaa1b43877e5938b81d25e88a4ea066ef",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
234033219 | pes2o/s2orc | v3-fos-license | Exploring the role of technology infrastructure capability and intrapreneurship to influence higher education institutions’ performance
The dawn of the Fourth Industrial Revolution (4IR) has shifted the landscape of the private higher education industry in Malaysia. It has become more liberalized and competitive, raising the issue of sustainability among private higher education institutions (PHEIs). Working within the theory of the resource-based view (RBV), PHEIs need to develop their technology infrastructure capability and intrapreneurial skills among their academicians to enhance performance. The purpose of this paper is to develop a better understanding of the claim that technology influences organizational performance, and to investigate the mediating role of intrapreneurship in the higher education industry. Data were collected from 261 respondents from 19 PHEIs in Malaysia and analyzed using SmartPLS 3.0. The results reveal that technology infrastructure capability has a significant impact on PHEIs' performance, and that intrapreneurship mediates the relationship between both variables. This paper also provides valuable insights for PHEIs to focus on enhancing technological infrastructure capability and developing intrapreneurial skills among their academicians. Furthermore, this paper adds value by addressing multiple predictors contributing to PHEIs' performance.
Introduction
Without doubt, many organizations have placed great emphasis on investing in their technological capabilities. In the decade prior to the dawn of the Fourth Industrial Revolution (4IR), technology was regarded as applied knowledge that helps to fulfil market expectations, competition or needs [1]. In the revolutionized industry, 4IR refers to the transformation of manufacturing processes from machine-dominant to digital-oriented, which will eventually lead to significant differences in society, education, economy and trade.
Although technological advances are expected to pose problems such as difficulties in protecting intellectual property, they may enhance innovation capability, improve flexibility and lower operational costs [2]. This premise is critical to the success of innovation and corporate entrepreneurship within a company, supported by the role of information and communication technology.
The same pattern is expected to occur in the higher education industry. According to [4], the advancement of technology will push private higher education institutions (PHEIs) to change the way knowledge is delivered. The crucial roles of PHEIs as producers and disseminators of knowledge are acknowledged by many authors such as [5] and [6]. It is also paramount not only for universities to equip their graduates with value-added knowledge that will enhance their marketability, but also for the institutions themselves to apply technology transfer and become innovative, proactive and ready to take the risks associated with delivering value to their stakeholders.
In recent years, there has been turbulence in the private higher education industry in Malaysia because of rising market competition to support the 4IR. Although the establishment of PHEIs in Malaysia was encouraged in the early 2000s because of their role as "supplementary and complementary" to tertiary education, the overall productivity performance of PHEIs has declined [7]. According to [8], sustainability has become the main problem among PHEIs. This statement was supported by the Minister of Higher Education, Datuk Seri Idris Jusoh, when he announced that 33 PHEIs were closed in 2017 due to failure to manage their finances efficiently [9].
As a way to overcome this issue, [6] posit that universities must prepare to link their education with technology as a way to enhance the skills of faculty members and students. [10] support this premise, recognizing that technology will enable a university's capacity to manage its organizational knowledge and thus meet its goals. Critical factors such as knowledge and technology are also emphasized by [11], who regard them as necessary for PHEIs to become sustainable. In addition, academic leaders must enhance their problem-solving and decision-making skills to improve PHEIs' performance. [12] posit that academic leaders must exhibit a high level of innovativeness, proactiveness and inclination to take risks associated with their business.
By applying the resource-based view theory of the firm, this paper contends that (1) technology infrastructure capability generates better PHEIs' performance; and (2) intrapreneurship influences the relationship between technology infrastructure capability and PHEIs' performance.
PHEIs' performance
The liberalization of higher education in Malaysia has forced universities to assess their performance in a competitive environment. [13] highlighted that performance measurement helps PHEIs to evaluate their progress towards defined goals, recognize their strengths and weaknesses, and establish future plans to improve performance. [14] argues that financial instruments and metrics, such as ROI and cash flow, are critical in measuring business performance. However, the financial approach has limited applicability in measuring PHEIs' performance because of the ambiguity over the profit or non-profit role of universities, owing to the diversity of objectives with which PHEIs were formed [15]. In addition, [16] explained that using financial performance measurement alone may also lead to inaccuracy, for it captures only financial terms, while an organization's value might derive from intangible measures such as intellectual capital.
Due to the abovementioned factors, this study also introduces non-financial approaches to measure university performance. These approaches come in many dimensions, such as the effectiveness and efficiency of university education [17,18], the input-process-output approach [19], and research activities [20][21][22]. [23] identified several critical agenda projects (CAP) that may assist the achievement of Malaysia's National Higher Education Strategic Plan 2007-2020, such as the number of academics with double appointments, the number of expert-based councils established, and the number of joint publications.
Drawing on the literature, this study applies the Balanced Scorecard (BSC) model in measuring perceived PHEIs' performance; the model has been used to measure universities' performance in contexts such as India [31] and Lebanon [32]. As PHEIs are knowledge organizations [33], implementing the BSC model to measure performance, rather than current quality alone, is useful to ensure that organizational knowledge will be preserved [27]. The four constructs of the BSC model are internal business process, customer, learning and growth, and financial.
Technology infrastructure capability
The literature on the application of technology offers differing arguments. However, it is agreed that technology helps to improve organizational performance (OP). According to [34], technology development had a positive impact on organizational strategy and performance. A positive relation between technology, information management and OP was also found in other studies, such as [35], which examined business organizations in Taiwan. In addition, [36] confirmed that IT capability positively influenced OP and was capable of improving profitability [37][38][39]. Therefore, this study asserts that technology infrastructure makes an important contribution to organizations, especially to perceived PHEIs' performance. A hypothesis is derived from these reviews and stated as follows:
H1: Technology infrastructure capability has a positive and significant impact on PHEIs' performance.
Intrapreneurship
The idea of intrapreneurship started to evolve in 1996 with the adoption of the idea of "entrepreneurial orientation" (EO) as a new viewpoint in strategic management [40]. [41] defined intrapreneurship as entrepreneurial activity within an established organization, including the entrepreneurial behaviors and orientations of existing organizations. According to [42], the intrapreneurial trait among employees is among the requirements of future jobs. He claims that entrepreneurs must instil enthusiasm and creativity among their workers. In the context of higher education, intrapreneurship has become an agenda in which universities must engage. [43] considered intrapreneurship a university's third mission, alongside teaching and research. Intrapreneurship is also considered an internal capability that can contribute to enhanced service delivery and PHEIs' performance [41].
It is crucial for a PHEI to nurture intrapreneurship as an internal capability. According to [44], IT resources are capable of developing an intrapreneurship culture. In addition, [45] also emphasize technological skills in promoting intrapreneurship initiatives. Therefore, this study hypothesizes:
H2: Technology infrastructure capability has a positive and significant impact on intrapreneurship.
The dimensions of intrapreneurship differ across the literature. This study adapts three dimensions of intrapreneurship from [46], namely innovativeness, proactiveness and risk-taking behaviour, which are more applicable among academicians in the education industry. Innovativeness is defined as the capability of organizations to manage and deploy their resources to innovate in new ideas or products [47]. [48] assessed the role of organizational innovativeness and discovered that it improves organizational performance. [49] also explained that innovativeness influences Turkish manufacturing firms both financially (e.g., profitability) and non-financially (e.g., customer and employee satisfaction). The second dimension of intrapreneurship is proactiveness, which refers to the degree to which firms seek to lead in an industry [50]. In a study by [51], proactiveness was confirmed as one of the intrapreneurship elements that positively influenced organizational performance. According to [49], proactiveness influences customer satisfaction in a positive and significant way, which contributes to higher organizational performance. The third dimension of intrapreneurship, risk-taking behaviour, is defined as the act of taking prompt action in a risky situation [52]. [51] found that risk taking influences organizational performance. In addition, [53] proved that the risk-taking dimension is positively related to performance. Therefore, the third hypothesis developed is:
H3: Intrapreneurship has a positive and significant influence on PHEIs' performance.
Linking technology infrastructure capability, intrapreneurship and PHEIs' performance
There is a rising need to study the various effects of intrapreneurship as a determinant of organizational capabilities [54]. Previous studies on intrapreneurship confirmed its role in predicting a firm's growth, sales, market share and performance [44,[55][56][57]. However, [40] suggested that intrapreneurship should be tested for moderating, mediating, independent or interaction effects. Testing alternative hypotheses on intrapreneurship is expected to enhance the comprehension of intrapreneurship theory [58]. Therefore, this study treats intrapreneurship as a mediating variable. [59] proved that intrapreneurship acted as a strong mediator between organizational support and performance. Intrapreneurship culture has also been recognized as an organizational capability that mediates between IT resources and corporate performance [44]. [60] proposed that intrapreneurship mediates between technological knowledge sharing and academic leaders' performance. Meanwhile, [57] assert that intrapreneurship mediates the relation between knowledge acquisition and firm performance. Their study validates the proposition of [61] that the entrepreneurial combination of knowledge-based resources, not the knowledge itself, contributes to competitive advantage. Given these previous studies, it is crucial for this study to investigate the impact of intrapreneurship as a mediator between technology infrastructure and PHEIs' performance.
H4: Intrapreneurship significantly mediates the relationship between technology infrastructure capability and PHEIs' performance.
Methodology
A structured questionnaire was used to obtain data through convenience sampling. The respondents were targeted among academicians at management levels who are aware of and able to describe the PHEIs' policies [62][63][64]. Subsequently, 291 responses were received and, after the screening process, only 261 usable responses were further analysed. The sources of the measurement instruments and the number of items are shown in Table 1.
Demographic information
The demographic information of the respondents of this study is shown in Table 2. The table indicates that a typical respondent was a Malay Muslim female, aged between 25 and 44 years, who has been in the higher education industry for more than three years. This demographic information also shows that academicians in Malaysian PHEIs come from diverse ethnicities and religions.
Analysis of multivariate assumptions
Five tests were conducted to fulfil the multivariate analysis assumptions: normality, linearity, homoscedasticity, multicollinearity, and common method bias. The results showed that the data set was satisfactory for further multivariate analysis.
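As an illustration only (the paper does not report its exact procedures, and the data and variable names below are placeholders), such assumption checks might be run in R as follows:

```r
library(car)  # for vif() and ncvTest()

# Placeholder data standing in for the survey constructs
set.seed(3)
df <- data.frame(tech_infra = rnorm(261))
df$intrapreneurship <- 0.6 * df$tech_infra + rnorm(261)
df$performance <- 0.5 * df$intrapreneurship + 0.3 * df$tech_infra + rnorm(261)

fit <- lm(performance ~ tech_infra + intrapreneurship, data = df)

shapiro.test(residuals(fit))  # normality of residuals
plot(fit, which = 1)          # linearity/homoscedasticity: residuals vs. fitted
ncvTest(fit)                  # formal test for non-constant error variance
vif(fit)                      # multicollinearity: VIF values below 5 are typical
# Common method bias is often screened with Harman's single-factor test
# on the full item set (not shown here).
```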
Measurement model analysis
This study analysed the reflective measurement model using SmartPLS 3.0. Four criteria were assessed as proposed by [69], namely internal consistency, indicator reliability, convergent validity, and discriminant validity. As shown in Table 3, the Cronbach's alpha (CA) values and composite reliability (CR) values were greater than 0.70 as suggested by [69], signalling high internal consistency. One indicator (INT9) had a loading value of 0.504, which falls between 0.40 and 0.70; however, as the AVE values were higher than 0.50, INT9 was retained. Therefore, this study achieved construct reliability and convergent validity. For discriminant validity, three approaches were used. First, this study examined the cross-loadings: each indicator's outer loading on its construct was higher than all its cross-loadings with other constructs, as proposed by [69]. Second, the Fornell-Larcker criterion showed that the square root of the AVE of each construct was higher than its highest correlation with any other construct [69]. To address the criticism that the Fornell-Larcker criterion does not reliably detect a lack of discriminant validity [70], this study used a third approach in the form of the heterotrait-monotrait (HTMT) ratio of correlations. The HTMT ratio values shown in Table 4 were all below the cut-off of 0.85 as proposed by [71]. Therefore, discriminant validity was ascertained.
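For readers unfamiliar with these reliability statistics, the standard formulas can be computed directly from standardized outer loadings. The sketch below uses hypothetical loadings (only INT9's 0.504 is taken from the text), since the study itself obtained these values from SmartPLS 3.0.

```r
# Composite reliability: CR = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2))
# Average variance extracted: AVE = mean(l^2)
loadings <- c(0.82, 0.76, 0.71, 0.504)  # hypothetical block including INT9

cr  <- sum(loadings)^2 / (sum(loadings)^2 + sum(1 - loadings^2))
ave <- mean(loadings^2)

# Conventional thresholds: CR (and Cronbach's alpha) > 0.70, AVE > 0.50;
# a loading between 0.40 and 0.70 (like INT9's 0.504) may be retained if
# the construct's AVE still exceeds 0.50, as in this example.
round(c(CR = cr, AVE = ave), 3)
```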
Structural model analysis
To estimate the structural model, this study employed a bootstrapping procedure. Table 5 and Figure 1 show the results of the regression analysis for hypothesis testing. It was found that both the knowledge acquisition and knowledge protection processes have significant relationships with intrapreneurship (β=0.556, p<0.05 and β=0.278, p<0.05 respectively). Therefore, H1 and H2 are supported. Furthermore, a positive and significant relationship was found between intrapreneurship and PHEIs' performance (β=0.255, p<0.05). This confirms H3. The results also reveal that the knowledge acquisition process is a stronger predictor of intrapreneurship than the knowledge protection process. This study employed the bootstrapping method of [72] to test the indirect effect of intrapreneurship between technology infrastructure capability and PHEIs' performance. After the bootstrapping procedure, it was revealed that the indirect effect (β=0.624*0.487=0.304) was significant, with a t-value of 6.995. The mediating indirect effect was confirmed given that the indirect effect of 0.304, 95% Boot CI: [LL=0.218, UL=0.338], does not include zero. Therefore, this study concludes that the mediation effect of intrapreneurship on the relationship between technology infrastructure capability and PHEIs' performance is statistically significant, thus supporting H4.
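A percentile bootstrap of an indirect effect can be illustrated as follows. This is a schematic OLS analogue of the PLS bootstrapping the study performed in SmartPLS, with simulated placeholder data; it does not reproduce the study's estimates.

```r
# Schematic percentile bootstrap of the indirect effect a*b
set.seed(4)
n <- 261
tech_infra <- rnorm(n)
intrapreneurship <- 0.6 * tech_infra + rnorm(n)
performance <- 0.5 * intrapreneurship + 0.3 * tech_infra + rnorm(n)
df <- data.frame(tech_infra, intrapreneurship, performance)

boot_ab <- replicate(2000, {
  d <- df[sample(n, replace = TRUE), ]
  a <- coef(lm(intrapreneurship ~ tech_infra, data = d))["tech_infra"]
  b <- coef(lm(performance ~ intrapreneurship + tech_infra,
               data = d))["intrapreneurship"]
  a * b  # indirect effect for this resample
})

# Mediation is supported if the 95% CI excludes zero
quantile(boot_ab, c(0.025, 0.975))
```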
Discussions
As the 4IR becomes the talk of the town, PHEIs must cope with the advancement of technology. It is pertinent for PHEIs to equip themselves as other business organizations do by providing adequate technology infrastructure to support daily operations. This study confirms that technology infrastructure is able to increase PHEIs' performance. Academicians in PHEIs are obliged to increase their knowledge of academic matters, not only to support their teaching and learning activities, but also in determining PHEIs' strategic moves. This includes the utilization of technology to analyze their competitive industry, which, according to [73], could assist organizations in obtaining competitive advantages by expanding their business reach and achieving their goals.
This finding conforms to the results of previous studies such as [34], which confirmed that technology development resulted in changes in organizational strategy, which in turn enhance organizational performance. Furthermore, the finding parallels the studies by [35] and [36], which concluded that IT capability enhances business performance. Introducing intrapreneurship as a mediator also increases the understanding of technology application in influencing PHEIs' performance. Intrapreneurship is an emerging field of study that has gained interest in recent organizational practice. This paper identifies intrapreneurship as a mediating factor that enables PHEIs to increase their performance. Despite their capability in managing technology infrastructure, PHEIs must strategize to equip their academicians with intrapreneurial skills (i.e., innovativeness, proactiveness and risk-taking behavior). The result of the analysis of intrapreneurship's role as a mediating variable supports previous literature such as [44], [59] and [60]. It also fulfils the suggestion from [74] and [60] that intrapreneurship should be tested as a mediating variable.
The exposition of intrapreneurship's mediating role in this research rests on the fact that organizational infrastructures are important determinants that require direct and indirect control from management to enhance intrapreneurship and organizational performance [75]. In a PHEI, academicians are required to utilize technology infrastructure and increase their organizational knowledge. The result demonstrates that intrapreneurship acts as a conduit: it is not adequate for academicians to apply technology in increasing their knowledge; they should also develop their intrapreneurial traits to ensure that technology becomes meaningful in increasing PHEIs' performance. Academicians must be innovative in exploiting technology. Furthermore, academicians who are proactive and risk takers will be able to add value by exploring, identifying and capturing new opportunities that enhance the value of technology infrastructure.
This study also reveals a link between intrapreneurship and KM capability with respect to PHEIs' performance. To the authors' knowledge, this study is among the earliest to link intrapreneurship with KM technology infrastructure capability empirically in the higher education industry. In this study, intrapreneurship is highlighted and considered as a firm's internal capability that enhances PHEIs' performance. Therefore, integrating intrapreneurship into Gold's KM Capability model will expand the application of RBV theory.
In an increasingly competitive environment, PHEIs that offer Islamic studies as their core products (hereafter known as Islamic PHEIs) must find ways to create sustainable competitive advantage. With 18 Islamic PHEIs competing with the giant players in Malaysia, this paper proposes that adequate technology infrastructure should be installed to increase their academicians' knowledge, while instilling in them intrapreneurial traits. Using technology, the academicians in Islamic PHEIs must also prepare to acquire, convert, apply and protect their organizational knowledge to achieve organizational effectiveness. On top of that, academicians should be exposed to and nurtured in innovativeness, proactiveness and risk-taking behavior. These intrapreneurial traits are expected to mediate between technology infrastructure capability and PHEIs' performance.
Conclusion and future works
This paper presents adequate evidence to support the hypothesis that intrapreneurship mediates the relationship between technology infrastructure and PHEIs' performance. Prior research has also focused on intrapreneurship, but only from a corporate perspective. This paper provides insight into the importance of intrapreneurship's role in enhancing PHEIs' performance in the 4IR era. It points out that PHEIs' managers must instil intrapreneurial traits among their academicians to support knowledge management initiatives.
However, the generalizability of the findings should be interpreted with caution given that this is cross-sectional research. Longitudinal and experimental studies may provide further support for the results. Furthermore, the convenience sampling method was applied in the data collection process, which contributes to the problem of generalizability. As intrapreneurship is treated as a unidimensional | 2021-05-10T00:04:01.860Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "58c92ec652fa38c7e58438c0d3c8632462538594",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1793/1/012012",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "07e57fe3eead9927e8a799f29e320f41eb31ccbf",
"s2fieldsofstudy": [
"Education",
"Business",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Business"
]
} |
227286701 | pes2o/s2orc | v3-fos-license | Diabetes and Atherosclerosis
Diabetes is known to be associated with a marked excess of atherosclerotic vascular diseases, but the underlying mechanisms still remain incompletely understood. Professor Robert W. Stout and his colleagues at the Belfast Medical School have written this book to give clinicians an update of information on the relationship between diabetes and atherosclerosis.
The opening chapter gives a brief summary of the main features of the cell biology of the arterial wall and its alterations in the development of atherosclerotic lesions. The next chapter, on the gastrointestinal regulatory peptide control of insulin secretion and its relevance to diabetes, is interesting but has little relevance to the main topic of the book. The third chapter gives an excellent and concise summary of the current information on insulin resistance, which has become a topic of great interest in research on the relationship between diabetes and atherosclerosis and in atherosclerosis research in general.
The following chapters deal mainly with different clinical and epidemiological aspects of the association between diabetes and atherosclerosis, but these chapters also provide appropriate information about current knowledge on biochemical and physiological links which may mediate and enhance atherosclerosis in diabetes. The chapter entitled 'Diabetes mellitus and atherosclerosis' is the key chapter of the book and gives a good summary of mortality data, autopsy studies, as well as clinical and epidemiological studies providing quantitative information about the magnitude of the excess of different clinical manifestations of atherosclerotic vascular disease in diabetic patients as compared to non-diabetic subjects. The next chapters discuss the effect of diabetes on general cardiovascular risk factors, such as serum lipids and blood pressure, and whether these and other risk factors have the same impact in diabetic patients as in non-diabetic subjects. The book also includes chapters covering some special aspects of the diabetic state itself which may be related to the enhanced development of atherosclerosis or its thrombotic complications. These aspects include hyperinsulinaemia and insulin resistance, glycation of proteins as a consequence of hyperglycaemia, haemostatic disorders, and proteinuria as an indicator of impaired vascular integrity. The well-written chapter on non-ischaemic heart disease in diabetes is relevant in this context, because there is good evidence for the existence of a specific heart muscle disorder associated with diabetes, independent of coronary heart disease and hypertension, with a worse prognosis of acute myocardial infarction and more frequent occurrence of cardiac failure in diabetic patients than in non-diabetic subjects.
The last chapter of the book on experimental atherosclerosis and diabetes is interesting and important, but could have been better placed among the basic chapters in the beginning of the book.
I missed a separate chapter on prevention and treatment of atherosclerotic vascular disease in patients with diabetes. Although there still remain many unanswered questions in this respect, the available evidence so far suggests that the life-style measures for primary and secondary prevention of atherosclerotic vascular disease appropriate for non-diabetic subjects are also appropriate in diabetic patients and should be pursued vigorously. Treatment of dyslipidaemia and hypertension, with particular emphasis on the selection of drugs to be used, is discussed in connection with chapters dealing with these risk factors, but a concise chapter summarising various aspects of preventive practice would have been valuable.
This book is of interest to clinicians of different specialties participating in the management of diabetic patients, but it is also a good reference source for researchers in this field.
KALEVI PYORALA
Professor of Medicine, University of Kuopio, Finland | 2020-12-05T19:05:49.842Z | 1992-10-01T00:00:00.000 | {
"year": 1992,
"sha1": "5667fdc6d2a3d7b2c58df363577abfaabd81313a",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "dc87d1ec2f886458916d9e8f860d338ac732fd13",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246829939 | pes2o/s2orc | v3-fos-license | Stakeholders’ hopes and concerns about the COVID-19 vaccines in Southeastern Nigeria: a qualitative study
Background Equitable access to and high uptake of safe and effective vaccines are critical to ending the COVID-19 pandemic. To ensure the success of these vaccines, particularly in many developing and under-developed parts of the world, the concerns of local communities, including fears, doubts about potency, and levels of acceptance, should be addressed. This study assessed community stakeholders' perceptions in parts of Southeastern Nigeria about the COVID-19 vaccine, with a view to engaging them effectively to ensure the success of the vaccination campaigns. Methods A qualitative study was conducted involving fourteen stakeholders from the Southeastern geo-political zone of Nigeria selected using purposive sampling. In-depth semi-structured individual interviews lasting about 30 min were used to collect data. Data analysis was informed by a general inductive approach. Results Stakeholders hoped that the development and roll out of the vaccines would bring COVID-19 to an end, help to maintain good health and allow people to get back to normal life. On the other hand, stakeholders expressed their concerns and worries about the "speed" with which the vaccines are being produced, the possibility of future adverse effects from vaccination, misinformation, and the level of preparedness in the health system to implement the vaccine campaign. Conclusions This study identified that more needs to be done to improve the perceptions of those who influence health decisions in communities towards COVID-19 vaccines. This includes, firstly, the involvement of community and religious leaders in vaccine promotion. Secondly, it is imperative to develop and disseminate persuasive messaging on vaccine effectiveness and safety, targeted at health professionals, policymakers, and the community, which is culturally sensitive and addresses identified concerns among health workers. Thirdly, the health systems should be strengthened through system-level interventions that directly target one or more of the WHO's six health system building blocks: service delivery, health workforce, health information systems, access to essential medicines, financing, and leadership/governance. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-022-12754-4.
The pandemic has also strained health systems, significantly reduced global Gross Domestic Product (GDP), and plunged many countries into economic recession [2][3][4]. Those identified to be at greater risk of severe illness include the elderly, people with underlying chronic illnesses and immune-suppressed individuals [5][6][7].
Recommended protective strategies against COVID-19 include the use of face masks, physical distancing and restrictions on social gatherings, and constant hand washing [8]. Although these strategies have proved somewhat effective in curbing the spread of the virus, there has been a resurgence of the disease in several countries, emphasizing the need for more innovative interventions such as vaccines. Effective roll out of vaccines globally, alongside recommended protective strategies, is required to help boost immunity against COVID-19 and mitigate the public health and economic impact of the pandemic.
As of December 2020 [9], about nine vaccines, including the Oxford-AstraZeneca, Pfizer-BioNTech, Moderna, Sputnik V, and Sinopharm vaccines, had received emergency use listing (EUL) from the World Health Organization (WHO) [10]. The vaccines were at the time recommended for use among those aged eighteen (18) years and older, particularly the target risk groups, including frontline health workers, those aged sixty (60) years and above, and those with co-morbidities. These developments, alongside coordination from Gavi and COVAX, saw the initiation of a global vaccination campaign against COVID-19 [11], including in several sub-Saharan African (SSA) countries [12]. Nonetheless, challenges, including slow roll out, funding, vaccine safety, and hesitancy among the general population, have been identified, particularly in SSA countries [13].
In Nigeria, the immediate response to the pandemic was the activation of a nationwide lockdown and restrictions on movement, which were eased gradually given the attendant economic consequences for the population. Other measures were soon introduced, including the enforcement of handwashing and sanitization as well as marked social-distancing positions in public spaces. COVID-19 vaccinations in Nigeria began on 5 March 2021 after the country took delivery of about 4 million doses of the Oxford-AstraZeneca vaccine. By this time, about 158,042 cases with 1,954 deaths had already been recorded due to the pandemic in the country [13]. It was expected that the vaccines, administered at primary health care centers and community headquarters, would be made available as soon as possible to the people most at risk (health workers, the elderly and those with co-morbidities). In the first phase of the COVID-19 vaccine roll-out, which mainly targeted frontline health workers and the vulnerable, 98.9% (3,980,600 doses) of the first tranche of AstraZeneca vaccines was administered to more than 2.5 million persons [14]. However, the uptake of the vaccine has been slow and poor, with 95% of the country's population yet to receive their first dose as of 17 October 2021 [14]. A recent study reported issues associated with willingness to receive the COVID-19 vaccine among Nigerians [15]. This is not in the least unexpected. Previous experiences with immunization/vaccination activities in Nigeria indicate widespread vaccine apathy and hesitancy [16,17].
A typical example is the misconception in Northern Nigeria that polio vaccines, and vaccines in general, were part of a Western agenda for fertility and population control. This led to poor vaccine uptake in the region and hampered original plans for the elimination of polio [18,19]. Some of the factors implicated include those related to religion [20,21], culture [22], and health and safety misconceptions [17,19,21,23]. Furthermore, there is generally very low awareness and uptake of adult vaccinations, including those for Hepatitis B, typhoid fever, and yellow fever [22]. The COVID-19 vaccines have also been met with certain controversies, particularly due to the "new technology" used in their development, which involves delivering viral mRNA into host cells to trigger immunity [24,25]. As a result of this perceived mechanism of action, some are concerned that the vaccines could manipulate their genetic makeup, with adverse consequences [25]. Again, the ability of the virus to mutate into different variants is a cause for concern for vaccine development and vaccination activities. These concerns could potentially compromise vaccine confidence, hindering the success of vaccinations and the anticipated herd immunity.
The smooth, successful, and sustained roll-out of the COVID-19 vaccines hinges on the level of preparedness for their active administration, hence the need for effective and continuous sensitization, mobilization, and vaccine advocacy. These important roles are usually played by local health workers, community leaders, and government workers who influence community members on issues bordering on health care [26]. Many health interventions fail simply because these stakeholders are not properly engaged during planning and implementation [27]. Meanwhile, securing their trust, understanding their hopes and concerns, and taking these into consideration while planning and implementing interventions have been shown to contribute significantly to the success of many health interventions [26,28].
This qualitative study was therefore aimed at assessing the hopes and concerns of stakeholders in parts of Southeastern Nigeria about the COVID-19 vaccine. Insights from this study were used to establish a set of recommendations to improve understanding and trust among local health workers and community leaders to support implementation of the vaccine programme in Nigeria.
Study design
This was an exploratory qualitative study which used in-depth interviews to gain insights from stakeholders regarding the COVID-19 vaccines in the study area [29].
The study was carried out between January and April 2021 in Owerri, South Eastern Nigeria.
Study setting
Owerri is the capital of Imo State, South Eastern Nigeria. The town has a diverse demographic population and is densely populated, with a population density of 2,766 persons per km². Owerri, located 401 km from Abuja, the capital of Nigeria, is a popular destination for vacations and business activities. The inhabitants are mainly public servants, businessmen, traders, and artisans. The communal style of settlement, customs, and traditions found in the area are akin to those of most of South Eastern Nigeria, making it a good representation of the wider region. Individuals such as doctors, nurses, pharmacists, drug vendors, religious leaders, community leaders, policy makers, and community health workers are known to influence health decisions, including vaccination of the populace, and are therefore regarded as stakeholders in the area.
Recruitment of study participants
A formal enquiry was made at the State Ministry of Health, Owerri, Nigeria to identify the groups of individuals designated as stakeholders who could influence health opinions in communities in the area. Those identified as stakeholders include the Local Government Area (LGA) department of health resident medical officers, the head of the health department in Owerri Municipal Council, community health extension workers/nurses, traditional rulers, male/female community presidents general and religious leaders in communities of Owerri Municipal Council, policy makers, and the LGA Chairman. Based on this, a purposive sampling strategy was employed to achieve variation across the groups identified as stakeholders that could influence health opinions in Owerri municipality, South Eastern Nigeria. This sampling strategy provided variation across respondents in gender, age, ethnicity, and stakeholder type. It was also cost- and time-effective for this study, as restricted movements were still enforced in the State. Aware that purposive sampling is non-probabilistic and thus prone to researcher bias in sample selection, we added a criterion that the stakeholders must have lived in the area for five or more years and understand the population health dynamics in the area. Of those approached, four could not complete the interviews, leaving fourteen stakeholders (Supplement 1). Informed written consent was sought and obtained from them before the interviews were conducted.
Data collection
In-depth semi-structured individual interviews lasting about 30 min were used to collect data from the respective stakeholders. The interviews were conducted using a pre-developed, tailored topic interview guide developed for the respective stakeholders. Interviews continued until saturation was reached, that is, when the same comments were being made repeatedly without additional new information being provided. The interview guide was developed through an iterative process involving initial inputs by members of the research team (of diverse public health backgrounds), piloting outside Owerri, and a final modification and refinement.
The interviews were conducted by members of the research team, who received training on qualitative data collection from an expert from the Ministry of Health, Owerri. All the interviews were conducted in the English language and were audio recorded with the consent of the participants. Data were collected between January and February 2021. Data quality was assured by good profiling and checks as well as the avoidance of data duplication.
Data analysis
Using NVivo version 12, data were analysed by members of the research team following a general inductive approach [30]. First, all audio files were transcribed verbatim by COE using naturalized transcription. Second, UMC and GNI independently read and coded the transcripts and summarised the data. Third, emerging themes were discussed by UMC, GNI, and UWD until agreement was reached on the themes and sub-themes identified. All audio recordings were deleted once transcribed, in keeping with standard ethical practices.
Characteristics of study participants
Fourteen stakeholders were enrolled in this study (Supplement 1). They comprised 4 males and 10 females between the ages of 25 and 55 years. Amongst them were 2 doctors, 3 nurses, 3 drug vendors, 1 policy maker, 3 community health workers, 1 community individual/leader, and 1 religious leader. Their working experience ranged from 1 to 30 years, and 11 of them had at least 5 years of experience. A total of 11 participants had tertiary education, while the remaining three had secondary education.
Stakeholders' hopes and concerns about the COVID-19 vaccine
This study found three broad themes relating to the hopes and concerns of stakeholders regarding the COVID-19 vaccine in South eastern Nigeria. They include: 1. Stakeholder perceptions of current COVID-19 vaccines; 2. Health system preparedness for the COVID-19 vaccination programme; 3. Determinants of COVID-19 vaccine uptake.
The three themes and their subthemes are presented below, together with illustrative quotes.
Theme 1: Stakeholder perceptions of current COVID-19 vaccines
The unprecedented speed with which the various COVID-19 vaccines were developed generated both hopes and concerns among the stakeholders. While stakeholders held the belief that the vaccines could be effective in conferring immunity against the disease and would have a positive public health impact, there was a clear sense of doubt that effective and safe vaccines could be developed in such a short period of time.
Perceived benefits of a COVID-19 vaccine
There was a consensus among the stakeholders that the development of vaccines would bring COVID-19 to an end, reduce the death rate resulting from the virus, and restore normalcy to daily activities. For some, the vaccine was seen as a means to end face mask mandates, enabling the return to 'normal life':

"I hope the vaccine works and there won't be any need for hand washing, face masks and we can go back to normal life" (RL)

Furthermore, some of the respondents believed that the development of vaccines would improve the utilization of health care services among individuals at the community level, and also improve health service delivery by enabling them to treat their patients without 'fear':

"I expect that it will end the disease, so that people will no longer be scared when they visit drug shops and other health facilities" (DV2)

"I hope that the vaccine ends the disease. I want to treat people without fear" (N1)

In addition, management of new diseases usually poses a heavy economic threat to the affected region(s). Thus, reduction of the economic and health burden was also identified as one of the expected benefits of a COVID-19 vaccine.
"Vaccine is a welcome development and hopefully it will drive away the disease in our society and reduce the burden and cost that comes with the disease" (PM)
Concern over the quick emergence of the current COVID-19 vaccines
Various concerns were raised regarding the development of the COVID-19 vaccines. Some of the concerns raised by stakeholders centered around the accelerated development of vaccines for COVID-19, considering that the mean development time for a new anti-infective vaccine is around 10 years, whereas three COVID-19 vaccines were approved for emergency use within 11 months after the SARS-CoV-2 sequence was published.
Some of the stakeholders were aware of the available evidence from the WHO and other relevant bodies on the effectiveness and safety of the vaccines. Their level of knowledge could be correlated with the experience that goes with their level or cadre in the health system. However, some of these stakeholders still raised concerns about the quick emergence of the vaccines and the fear of long-term adverse effects, including genetic mutation, underpinned by the vaccines' use of mRNA technology. Other concerns, such as uncertainty about the long-term adverse effects and/or complications of the vaccine, inadequate information on its appropriateness for vulnerable groups, and other conspiracies about its safety, were mentioned by the respondents. Of particular concern were pregnant women and children:

"Pregnant women and children should be exempted from vaccination until long term adverse effects (if any) are ascertained. They are the vulnerable groups" (DV3)
Concern over the neglect of other diseases
Some stakeholders conveyed their concerns over the neglect of other diseases endemic in the country, and the factors affecting them, as a result of the priority attention given to COVID-19. They were bothered that the prevalence of diseases such as malaria and cholera was on the increase because attention had shifted from efforts towards their control to COVID-19. They were of the opinion that COVID-19 was not as serious in the country as it was in other parts of the world, and therefore did not deserve the attention it received over other endemic diseases and hunger. They particularly questioned why a vaccine had not yet been developed for malaria, which they perceived as a more serious disease.
"Compared to other countries in the world, we (Nigeria) are not facing worse Covid-19 cases. Other diseases such as malaria still kill more people than Covid-19 does in Nigeria. More people are dying of hunger. While attending to COVID 19, Government should not neglect these other diseases. They should also make the health system functional" (Dr1).
Theme 2: Health system preparedness for the vaccination program
In this theme, the stakeholders referred to methods of vaccine deployment based on previous vaccination programs. These include adequate research and knowledge of the vaccines, human resources, education, and the cold chain. As with the previous theme, there were both hopes and concerns.
Hopes for health worker acceptance of the COVID-19 vaccine
Some stakeholders had high hopes that members of the society would be eager to receive the vaccines, as many would like to see normal life activities restored. Other stakeholders were of the opinion that health workers taking the vaccines first would give others the confidence to do the same.
Some stakeholders expressed high levels of self-preparedness to receive the vaccine and to encourage other members of the society to do so, in order to restore normal life activities in the society.
"I am prepared to receive the vaccine so that those I do attend to in the clinic will have confidence to go for it" (N1)
Concerns over health workers' availability and capacity
Some of the stakeholders who responded expressed concerns over perceived inadequacies in the number of health workers available for the distribution of the vaccines, as well as inadequacies in the training of the available health personnel who would administer the vaccine and manage any adverse reactions from vaccination:

"We need to train more hands to be able to distribute this vaccine" (CHW2)

"We need to be adequately informed about this vaccine and how to manage any side effects" (N2)
Concerns over vaccines supply, storage, and access
It was a concern to some stakeholders that there might not be an adequate supply of the vaccines for all Nigerians, and that even the available vaccines might not be stored properly.
"I don't know if the vials supplied will be sufficient for our population. " (Dr2) "I am worried about how cold chain will be maintained to avoid destroying the potency of the vaccine" (Dr2)
Theme 3: Determinants of COVID-19 vaccine uptake
Stakeholders reported a range of potential enablers and barriers to vaccine uptake, typically drawn from previous experiences in vaccination programs in the region. Some of the stakeholders expressed the need for both verbal and nonverbal persuasion about taking the vaccines. For example, one of the stakeholders shared the perception that people will be convinced to take the vaccine when they see others around them taking it based on their previous vaccination and immunization experiences.
"… you know this is a community. Even if people reject it at first when they see that others who take it are doing well, they will come and take"(CHW1)
Stakeholders were of the opinion that government officials as the leaders should be the first to take the vaccine. This is because they are influential in the communities and thus will motivate others who may be hesitant initially to take the vaccine.
"The government officials should take it first. This will encourage others to do so"(N2)
Perceptions of a vaccine mandate
When asked whether the COVID-19 vaccines should be made mandatory, a majority of the participants were of the opinion that the vaccines should not be forced or made compulsory, as is seen in other countries, but should come as a recommendation.
"We should not be forced to take the vaccine"(Dr1) "COVID 19 vaccination should come as a recommendation"(PM) "I don't think that people should be forced to take the vaccines"(RL) "Vaccination should not be forced on people"(DV1) "Since they did not force other vaccines on people, this one should not be different"(C1) On the contrary, few of the participants suggested that the vaccines should be made compulsory because of the emergency situation posed by COVID-19. They believe that making the vaccines compulsory will help reduce the number of people at risk of getting infected.
"Refusing to take the vaccine will put people at risk. Vaccination should be made compulsory for everybody"(N2)
Concerns over information sources, information dissemination, and conspiracy theories
Stakeholders were also concerned about information fatigue and overload from various sources and their consequent inability to identify correct and relevant information. They desired to receive COVID-19-related information from credible sources such as the World Health Organization (WHO), the Federal Ministry of Health (FMOH), and the Nigerian Centre for Disease Control (NCDC), as this directly or indirectly influenced their decision to take the vaccine.
"I will like to receive information about COVID via emails from credible sources"(Dr2). "Information should come from the right channels. From the federal ministry, then to state, then to the local government and to us" (CHW 3). "The information about the vaccine is not reassuring. We don't know which one is true or false"(RL) " we heard in the media that the vaccine causes blood clots and killing people, and this creates a wrong perception of the vaccine" (DV1) Another area of concern is the existence of several conspiracy theories which may negatively affect vaccine uptake because the public may likely be susceptible to conspiracy theories underpinned by beliefs in Western tyranny. This particular concern is also subtly raised whenever there is a new vaccination programme particularly for adults.
"The conspiracy theory that Africans or blacks are considered inferior to the whites and treated likewise may make the public believe that the vaccines in the Western world could bedifferent from the one sent to African countries. This will affect the vaccines uptake. (Dr1)"
Concerns over accessibility and acquisition of vaccines
Nigeria is one of the countries in the world yet to achieve universal health coverage. Most health care expenses are borne out of pocket, and road access to health facilities is poor in parts of the country; this was raised as an issue among the stakeholders with regard to accessibility and the possible cost of acquiring the vaccines.
"Access roads to health centers are bad. Most health centers are far and makes it hard to access"(CHW2) "Healthcare is expensive to get in this country"(CHW1) "Cost of healthcare in this part is expensive. The patients expect subsidized pay by the government when they visit the health facility. The community people don't have money to pay for health care, but we try to encourage them to pay for the N100 for deliverables such as cotton wool, etc. " (CHW3).
Concern over poor working conditions and welfare of health workers
A lack of motivation amongst stakeholders was reported by the respondents. The reasons given included non-payment of salaries and allowances, non-provision of protective equipment, and unconducive working conditions, among others.
"We are being owed salary for months and we need these things to motivate us to work'' (N1) "……… what did the government give us? Nothing. Just one bottle of hand sanitizers" (N3)
Discussion
The study employed qualitative techniques to uncover stakeholders' hopes and concerns about the COVID-19 vaccine in parts of South Eastern Nigeria. Considering the disruptions in normal daily life activities caused by the pandemic, the stakeholders hoped that the vaccine could help them return to normalcy. Also, the spread of the virus across nations since its outbreak has been unprecedented, causing panic among individuals both in the health system and beyond. COVID-19 vaccine administration is therefore expected to create herd immunity. However, herd immunity can only be achieved when a sufficient number of people have been vaccinated or have been infected and recovered; this will obviously be achieved faster with vaccination. In a similar way, most of the stakeholders who responded expressed hope in seeing the vaccine reduce the mortality rate of the disease as well as reduce its spread across states.
Additionally, SARS-CoV-2, being a novel strain of coronavirus, has posed a challenge to its management and/or treatment since its outbreak in 2019. Consequently, the utilization of health care services has witnessed a significant drop since the onset of the pandemic. Vaccine development and further research on the virus will, however, improve strategies for managing patients with the virus and also improve the use of available health care services. The majority of the stakeholders who responded expressed hope in seeing people use health care services without the fear of being infected by the virus. They also expected the vaccine to make health workers less vulnerable to the virus, thus improving their ability to deliver quality health care services to patients efficiently.
However, while stakeholders expressed their hopes for the vaccine development, several concerns were also raised regarding vaccine effectiveness, accelerated development, possible adverse effects, and willingness to get vaccinated. According to the International Federation of Pharmaceutical Manufacturers & Associations [31], vaccine production usually takes 10-15 years and costs about US$31-68 million. But due to the heavy global disruption caused by the virus and the persistent increase in the number of infections, there was a call to accelerate the production and roll-out of vaccines against COVID-19 [7,32,33]. Despite this obvious need, there are apprehensions in the population surrounding the efficacy, accessibility, and distribution of the vaccine. In a similar way, the prevailing lack of information and misinformation about COVID-19 could have an overbearing influence on what people believe or do not believe regarding the vaccines [34]. Some of the stakeholders revealed their astonishment at the urgency and the magnitude of attention devoted to COVID-19 by the government, insisting that such concerted efforts have not been geared towards solving other prevalent problems in the country. This makes it difficult for them to regard the efforts being made by the government towards COVID-19 vaccination as credible. These concerns are apt in the sense that the government has been viewed as being less concerned with tackling the associated problems of hunger, poverty, insecurity, and other social issues affecting the country.
This study also reveals meaningful insights from stakeholders on preparedness for the COVID-19 vaccine, and the results obtained can be integrated with other findings for successful vaccination. Recommended ways to build public trust and prime the vaccine roll-out for success include human resource development to improve the self-efficacy of those who will administer the vaccines; adequate sensitization on the mechanisms and potential side effects of the vaccines, taking good care to avoid adulteration and loss of integrity; making the vaccine affordable and accessible; and putting in place efficient structures for the preservation of the vaccine to avoid wastage. Oku et al. [26] identified similar factors in their study on factors affecting the implementation of childhood vaccination communication strategies in Nigeria.
Furthermore, some of the stakeholders perceived COVID-19 as being given an exaggerated priority, which they saw as a possible avenue for the misappropriation of funds. This finding is quite similar to responses in earlier studies [35,36]. Moreover, tackling the social determinants of health is important for improving responses from the population concerning COVID-19 testing and preventive techniques [37], and could also be a prerequisite for vaccine success. Another major concern raised is the mistrust and confusion surrounding the long-term effects of the vaccines. A prevailing misconception is that the vaccines could be a means of population control; similar conspiracy theories have also trailed the origins of COVID-19. The mistrust may result from the limited efforts made to dispel misconceptions and instill the necessary confidence, particularly among stakeholders. Similar to the findings of another study [36], some of the stakeholders in the present study were concerned that the vaccines shipped to Africa could be for human experiments. It has been reported that COVID-19 conspiracy theories negatively influence the adoption of protective measures against the pandemic and the willingness to get vaccinated against the disease [38].
Conclusion
In conclusion, while stakeholders have high expectations that the development and roll-out of COVID-19 vaccines could reduce the disease and restore normal life activities, some were reserved in their opinions and did not openly express their preparedness to get vaccinated against COVID-19. This study identified that more needs to be done to improve health worker and community perceptions towards COVID-19 vaccines to ensure the success of the vaccination campaign. This includes, firstly, the involvement of community and religious leaders in vaccine promotion [19,39]. Secondly, it is imperative to develop and disseminate persuasive messaging on vaccine effectiveness and safety, targeted at health professionals, policy makers, and the community, that is culturally sensitive and addresses identified concerns among health workers. Thirdly, the health systems should be strengthened, and system-level interventions developed that directly target one or more of the WHO's six health system building blocks: service delivery, health workforce, health information systems, access to essential medicines, financing, and leadership/governance. Disease-specific interventions that have important system-wide effects to support vaccine roll-out should be put in place. Fourthly, relevant stakeholders are not effectively engaged in vaccination activities in Nigeria, as was the case with the COVID-19 vaccines in the study area. It is therefore important that relevant stakeholders, who are able to influence public opinions and behaviours regarding vaccinations, are engaged from the planning stages through to the implementation and follow-up stages to ensure effective coverage and sustainable vaccination programmes.
"year": 2022,
"sha1": "903eff0aafef440821b5a3e4c9d918a167f1a816",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "903eff0aafef440821b5a3e4c9d918a167f1a816",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Measurements of Branching Fractions and CP Asymmetries in B → ηh Decays
We report measurements of B to pseudoscalar-pseudoscalar decays with at least one η meson in the final state, using 140 fb⁻¹ of data collected by the Belle detector at the KEKB e⁺e⁻ collider. We observe the decay B⁺ → ηπ⁺ and find evidence for B⁺ → ηK⁺; the measured branching fractions are B(B⁺ → ηπ⁺) = (4.8 +0.8/−0.7 (stat) ± 0.3 (syst)) × 10⁻⁶ and B(B⁺ → ηK⁺) = (2.1 ± 0.6 (stat) ± 0.2 (syst)) × 10⁻⁶. The corresponding CP-violating asymmetries are measured to be 0.07 ± 0.15 (stat) ± 0.03 (syst) for ηπ± and −0.49 ± 0.31 (stat) ± 0.07 (syst) for ηK±. No significant signals are found for neutral B → ηh decays. We report the following upper limits on branching fractions at the 90% confidence level: B(B⁰ → ηK⁰) < 2.0 × 10⁻⁶, B(B⁰ → ηπ⁰) < 2.5 × 10⁻⁶, and B(B⁰ → ηη) < 2.0 × 10⁻⁶.
PACS numbers: 13.25.Hw, 12.15.Hh, 11.30.Er

Charmless B decays provide a rich sample to understand B decay dynamics and to search for CP violation. An unexpectedly large B → η′K branching fraction [1,2] has stimulated much theoretical interest. It was suggested even before the η′K measurement that two b → s penguin amplitudes are constructive in B → η′K decays but destructive in B → ηK [3]. The situation is reversed for B → η′K* and B → ηK* decays. Experimental results have more or less confirmed this picture; however, precise measurements of branching fractions are needed to quantitatively understand the contribution of each diagram. It was also pointed out that in the ηK mode the suppressed penguin amplitudes may interfere with the CKM-suppressed b → u (tree) amplitude and result in direct CP violation [4]. The penguin-tree interference may also be large in B⁺ → η′π⁺ [5] and B⁺ → ηπ⁺ decays; however, theoretical expectations for the partial rate asymmetry (A_CP) can be either positive or negative [4,6]. Recently, the BaBar Collaboration has reported large negative A_CP values in both ηK⁺ and ηπ⁺, which are ∼2σ away from zero [7]. However, more data are needed to verify these large CP-violating asymmetries. Furthermore, branching fractions and partial rate asymmetries in charmless B decays can be used to understand the tree and penguin contributions and provide constraints on the third unitarity triangle angle φ₃ [8].
In this paper, we report measurements of branching fractions and partial rate asymmetries for B → ηh decays, where h can be a K, π, or η meson. The partial rate asymmetry is measured for the charged B decays and defined to be

A_CP = [N(B⁻) − N(B⁺)] / [N(B⁻) + N(B⁺)],

where N(B⁻) is the yield for the B⁻ → ηh⁻ decay and N(B⁺) denotes that of the charge-conjugate mode. The data sample consists of 152 million BB̄ pairs (140 fb⁻¹) collected with the Belle detector at the KEKB e⁺e⁻ asymmetric-energy (3.5 on 8 GeV) collider [9] operating at the Υ(4S) resonance. The Belle detector is a large-solid-angle magnetic spectrometer that consists of a three-layer silicon vertex detector (SVD), a 50-layer central drift chamber (CDC), an array of aerogel threshold Čerenkov counters (ACC), a barrel-like arrangement of time-of-flight scintillation counters (TOF), and an electromagnetic calorimeter (ECL) comprised of CsI(Tl) crystals, located inside a superconducting solenoid coil that provides a 1.5 T magnetic field. An iron flux-return located outside of the coil is instrumented to detect K⁰_L mesons and to identify muons (KLM). The detector is described in detail elsewhere [10].
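As a simple numerical illustration of the partial-rate asymmetry defined above, the following Python sketch (not the collaboration's code) computes A_CP and its statistical error from two yields; the yields shown are hypothetical placeholders, and the error formula assumes independent Poisson fluctuations of N(B⁻) and N(B⁺).

import math

def acp(n_minus, n_plus):
    """A_CP = (N(B-) - N(B+)) / (N(B-) + N(B+)) with Poisson error propagation."""
    total = n_minus + n_plus
    asym = (n_minus - n_plus) / total
    # sigma_A = 2 * sqrt(N- * N+ / (N- + N+)) / (N- + N+)
    err = 2.0 * math.sqrt(n_minus * n_plus / total) / total
    return asym, err

a, sigma = acp(n_minus=60.0, n_plus=52.0)  # hypothetical yields
print(f"A_CP = {a:+.3f} +- {sigma:.3f}")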
Two η decay channels are considered in this analysis: η → γγ (η_γγ) and η → π⁺π⁻π⁰ (η_3π). In the η_γγ reconstruction, each photon is required to have a minimum laboratory energy of 50 MeV, and the energy asymmetry, defined as the absolute value of the energy difference in the laboratory frame between the two photons divided by their energy sum, must be less than 0.9. Furthermore, we remove η candidates if either one of the daughter photons can pair with any other photon to form a π⁰ candidate. Candidate η_3π mesons are reconstructed by combining a π⁰ with a pair of oppositely charged tracks that originate from the interaction point (IP). We make the following requirements on the invariant mass of the η candidates: 516 MeV/c² < M_γγ < 569 MeV/c² for η_γγ and 539 MeV/c² < M_3π < 556 MeV/c² for η_3π. After the selection of each candidate, an η mass constraint is implemented by readjusting the momenta of the daughter particles.
Candidate neutral pions are selected by requiring the two-photon invariant mass to be in the mass window between 115 MeV/c² and 152 MeV/c². The momentum of each photon is then readjusted to constrain the mass of the photon pair to the nominal π⁰ mass. To reduce the low-energy photon background, each photon is required to have a minimum energy of 50 MeV, and the π⁰ momentum must be above 250 MeV/c in the laboratory frame. Charged tracks are required to come from the IP. Charged kaons and pions that form B candidates with η mesons are identified by combining information from the CDC (dE/dx), the TOF, and the ACC to form a K(π) likelihood L_K (L_π). Discrimination between kaons and pions is achieved through the likelihood ratio L_K/(L_π + L_K). Charged tracks with likelihood ratios greater than 0.6 are regarded as kaons, and those with ratios less than 0.4 as pions. The typical kaon and pion identification efficiencies for 2.5 GeV/c tracks are (85.0 ± 0.2)% and (89.3 ± 0.2)%, respectively. For the same track momentum, the rate for pions to be misidentified as kaons is (7.3 ± 0.2)%, while the rate for kaons to be misidentified as pions is (10.6 ± 0.2)%. Furthermore, charged tracks that are positively identified as electrons or muons are rejected. K⁰_S candidates are reconstructed from pairs of oppositely charged tracks with invariant mass (M_ππ) between 480 and 516 MeV/c². Each candidate must have a displaced vertex with a flight direction consistent with a K⁰_S originating from the IP. Candidate B mesons are identified using the beam-constrained mass, M_bc = √(E²_beam − P²_B), and the energy difference, ΔE = E_B − E_beam, where E_beam is the run-dependent beam energy in the Υ(4S) rest frame, determined from B → D(*)π events, and P_B and E_B are the momentum and energy of the B candidate in the Υ(4S) rest frame. The resolutions in M_bc and ΔE are around 3 MeV/c² and ∼20-30 MeV, respectively. Events with M_bc > 5.2 GeV/c² and |ΔE| < 0.3 GeV are selected for the analysis.
The dominant background comes from the e⁺e⁻ → qq̄ continuum, where q = u, d, s, or c. To distinguish signal from the jet-like continuum background, event-shape variables and B-flavor-tagging information are employed. We form a Fisher discriminant [11] from seven variables that quantify the event topology. The Fisher variables include the angle θ_T between the thrust axis [12] of the B candidate and the thrust axis of the rest of the event, five modified Fox-Wolfram moments [13], and a measure of the momentum transverse to the event thrust axis (S⊥) [14]. The probability density functions (PDFs) for this discriminant and for cos θ_B, where θ_B is the angle between the B flight direction and the beam direction in the Υ(4S) rest frame, are obtained using events in signal Monte Carlo (MC) for signal and data with M_bc < 5.26 GeV/c² for the qq̄ background. These two variables are then combined to form a likelihood ratio R = L_s/(L_s + L_qq̄), where L_s (L_qq̄) is the product of the signal (qq̄) probability densities.
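The following is a schematic Python sketch of such a likelihood ratio. The Gaussian Fisher-discriminant PDFs and the sin²-like cos θ_B signal shape are stand-in assumptions for illustration only; the analysis itself uses PDFs derived from signal MC and from the M_bc sideband data.

from scipy.stats import norm

def likelihood_ratio(fisher, cos_theta_b):
    """R = L_s / (L_s + L_qq) built from two event variables."""
    # Signal: Fisher peaked at positive values; cos(theta_B) follows the
    # sin^2 shape expected for Y(4S) -> BB, p(c) = 0.75 * (1 - c^2) on [-1, 1].
    l_s = norm.pdf(fisher, loc=0.5, scale=0.6) * 0.75 * (1.0 - cos_theta_b**2)
    # Continuum: Fisher peaked at negative values; cos(theta_B) roughly flat (1/2).
    l_qq = norm.pdf(fisher, loc=-0.5, scale=0.6) * 0.5
    return l_s / (l_s + l_qq)

print(likelihood_ratio(fisher=0.8, cos_theta_b=0.1))  # close to 1: signal-like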
Additional background discrimination is provided by the quality of the B-flavor tagging of the accompanying B meson. We use the standard Belle B-tagging package [15], which gives two outputs: a discrete variable (q) indicating the B flavor and a dilution factor (r) ranging from zero, for no flavor information, to unity, for unambiguous flavor assignment. We divide the data into six r regions. Continuum suppression is achieved by applying a mode-dependent requirement on R for events in each r region, based on the figure of merit N_s^exp/√(N_s^exp + N_qq̄^exp), where N_s^exp is the expected signal from MC and N_qq̄^exp denotes the number of background events estimated from data. This R requirement retains 58-86% of the signal while removing 96-82% of the background. From MC, all other backgrounds are found to be negligible except for the ηK⁺ ↔ ηπ⁺ reflection, due to K⁺ ↔ π⁺ misidentification, and the ηK*(892) (ηρ(770)) feed-down to the ηK (ηπ) modes. We include these two components in the fit used to extract the signal.
The signal yields and branching fractions are obtained using an extended unbinned maximum-likelihood (ML) fit with input variables M_bc and ΔE. The likelihood is defined as

L = [exp(−Σ_j N_j) / N!] × Π_{i=1}^{N} [ Σ_j N_j P_j(M_bc,i, ΔE_i) ],

where N_j is the yield of category j (signal, continuum background, reflection, ηK*/ηρ), P_j(M_bc,i, ΔE_i) is the probability density for the ith event, and N is the total number of events. The PDFs for the signal, the reflection background, and the ηK*/ηρ feed-down are modeled with two-dimensional M_bc-ΔE smooth functions obtained using MC. The signal peak positions and resolutions in M_bc and ΔE are adjusted according to the data-MC differences using large control samples of B → Dπ and D⁰ → K⁺π⁻π⁰/π⁰π⁰ decays. The continuum background in ΔE is described by a first- or second-order polynomial, while the M_bc distribution is parameterized by an ARGUS function, f(x) ∝ x√(1 − x²) exp[−ξ(1 − x²)], where x is M_bc divided by half of the total center-of-mass energy [16]. Thus the continuum PDF is the product of an ARGUS function and a polynomial, where ξ and the coefficients of the polynomial are free parameters. Since the B → ηK* branching fractions are well measured (∼20 × 10⁻⁶) [1,17], their feed-down to the ηK modes is fixed from MC in the likelihood fit. Since the decay B⁺ → ηρ⁺ is experimentally poorly constrained, the amount of this background in the ηπ modes is allowed to float in the fit. In the charged B modes, the normalizations of the reflections are fixed to expectations based on the B⁺ → ηK⁺ and B⁺ → ηπ⁺ branching fractions and the K⁺ ↔ π⁺ fake rates, measured using D⁰ → K⁺π⁻ data. The reflection yield is first estimated with the assumed ηK⁺ and ηπ⁺ branching fractions and is then recalculated according to our measured branching fractions. No BB̄ contributions are included for the B⁰ → ηη mode.
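To make the fit machinery concrete, here is a toy Python sketch (not the collaboration's fitter) of an extended unbinned ML fit in the M_bc projection alone, using a Gaussian signal plus an ARGUS background; all shape parameters, yields, and the generated toy data are illustrative assumptions.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize
from scipy.stats import norm

E_HALF = 5.29           # roughly half the CM energy in GeV/c^2 (endpoint)
LO, HI = 5.20, E_HALF   # M_bc fit window

def argus_shape(m, xi):
    """Unnormalized ARGUS: x*sqrt(1-x^2)*exp(-xi*(1-x^2)), with x = m / E_HALF."""
    x = np.asarray(m, dtype=float) / E_HALF
    z = np.clip(1.0 - x * x, 0.0, None)
    return np.asarray(m, dtype=float) * np.sqrt(z) * np.exp(-xi * z)

def argus_pdf(m, xi):
    const, _ = quad(lambda t: float(argus_shape(t, xi)), LO, HI)
    return argus_shape(m, xi) / const

def extended_nll(params, data):
    """-ln L = (N_s + N_b) - sum_i ln[N_s P_s(m_i) + N_b P_b(m_i)], up to a constant."""
    n_sig, n_bkg, xi = params
    dens = (n_sig * norm.pdf(data, loc=5.279, scale=0.003)
            + n_bkg * argus_pdf(data, xi))
    return (n_sig + n_bkg) - np.sum(np.log(dens))

# Toy data: Gaussian signal plus accept-reject sampled ARGUS background.
rng = np.random.default_rng(1)
sig = rng.normal(5.279, 0.003, size=60)
fmax = argus_shape(np.linspace(LO, HI, 1001), 30.0).max()
bkg = []
while len(bkg) < 400:
    m = rng.uniform(LO, HI)
    if rng.uniform(0.0, fmax) < float(argus_shape(m, 30.0)):
        bkg.append(m)
data = np.concatenate([sig, np.array(bkg)])

res = minimize(extended_nll, x0=[50.0, 400.0, 25.0], args=(data,),
               method="L-BFGS-B",
               bounds=[(1.0, None), (1.0, None), (1.0, 100.0)])
print("fitted (N_sig, N_bkg, xi):", res.x)

In the real analysis the fit is two-dimensional in (M_bc, ΔE) and includes the reflection and ηK*/ηρ feed-down as additional fit categories.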
In Table I we show the measured branching fractions for each decay mode as well as other quantities associated with the measurements. The efficiency for each mode is determined using MC simulation and corrected for the discrepancy between data and MC using the control samples. The only discrepancy we find is in the performance of the particle identification, which results in a 4.3% correction for the ηπ⁺ mode and 1.7% for B⁺ → ηK⁺. The combined branching fraction for the two η decay modes is obtained from a simultaneous likelihood fit to all the sub-samples with a common branching fraction. Systematic uncertainties in the fit due to the uncertainties in the signal PDFs are estimated by performing the fit after varying their peak positions and resolutions by one standard deviation. In the ηK modes, we also vary the expected ηK* feed-down by one standard deviation to check the yield difference. The quadratic sum of the deviations from the central value gives the systematic uncertainty in the fit, which ranges from 3% to 6%. For each systematic check, the statistical significance is taken as the square root of the difference between the value of −2 ln L for zero signal yield and the best-fit value. We regard the smallest value as our significance including the systematic uncertainty. The numbers of B⁺B⁻ and B⁰B̄⁰ pairs are assumed to be equal.
The performance of the R requirement is studied by checking the data-MC efficiency ratio using the B⁺ → D⁰π⁺ control sample. The resulting error is 2.4-3.5%. The systematic errors on the charged-track reconstruction are estimated to be around 1% per track using partially reconstructed D* events, and verified by comparing the ratio of η → π⁺π⁻π⁰ to η → γγ in data with MC expectations. The π⁰ and η_γγ reconstruction efficiency is verified by comparing the π⁰ decay angular distribution with the MC prediction, and by measuring the ratio of the branching fractions for the two η decay channels: η → γγ and η → π⁰π⁰π⁰. We assign a 3.5% error for the π⁰ and η_γγ reconstruction. The K⁰_S reconstruction is verified by comparing the ratio of D⁺ → K⁰_S π⁺ and D⁺ → K⁻π⁺π⁺ yields. The resulting K⁰_S detection systematic error is 4.4%. The uncertainty in the number of BB̄ events is 1%. The final systematic error is obtained by first summing all correlated errors linearly and then quadratically summing the uncorrelated errors. Figure 1 shows the M_bc and ΔE projections after requiring events to satisfy −0.10 GeV < ΔE < 0.08 GeV (−0.15 GeV < ΔE < 0.10 GeV for the η_γγ and ηπ⁰ modes) and M_bc > 5.27 GeV/c², respectively. No significant signals are observed for the neutral B meson modes; for these modes we set branching fraction upper limits at the 90% confidence level. The upper limit for each mode is determined using the combined likelihood for the two η decay channels with the reconstruction efficiency reduced by 1σ. We vary the signal PDF and the expected ηK* feed-down in the ηK⁰ mode to compute the likelihood as a function of the branching fraction; the largest branching fraction that covers 90% of the likelihood area is chosen as the upper limit.
Significant signals are observed for the charged B decays. We investigate their partial rate asymmetries by extracting signal yields separately from the B⁺ and B⁻ samples. Unbinned maximum-likelihood fits are performed independently for the two η decay modes in order to reduce the systematic uncertainties. The same signal and background PDFs as used in the branching fraction measurement are applied. The parameters of the continuum PDF are fixed according to the branching fraction results. Contributions from BB̄ backgrounds are required to be equal for the B⁺ and B⁻ samples. Figure 2 shows the M_bc and ΔE projections. The A_CP results for the two η decay modes are combined assuming that the errors are Gaussian. Systematic errors due to uncertainties in the signal PDF are estimated by varying the peak positions and resolutions. We also check the A_CP values after varying the amount of the expected ηK* feed-down and of the reflection background. The BB̄ contributions are allowed to differ between the two samples to obtain the systematic error. The largest uncertainty is the asymmetry of the reflection. A possible detector bias in A_CP is studied using B → Dπ⁺ decays. The obtained uncertainty is 0.5%. Each A_CP deviation is added in quadrature to provide the total systematic uncertainty.
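Since the two η-mode A_CP results are combined assuming Gaussian errors, a variance-weighted average is the natural reading of that step; a minimal Python sketch, with hypothetical per-mode values, is:

def combine(measurements):
    """Variance-weighted average of (value, error) pairs,
    assuming Gaussian, uncorrelated errors."""
    weights = [1.0 / err**2 for _, err in measurements]
    mean = sum(v * w for (v, _), w in zip(measurements, weights)) / sum(weights)
    err = (1.0 / sum(weights)) ** 0.5
    return mean, err

# Hypothetical (value, statistical error) for the two eta decay channels:
print(combine([(0.05, 0.20), (0.10, 0.24)]))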
In summary, we have observed B⁺ → ηπ⁺ and found evidence for B⁺ → ηK⁺; the measured branching fractions and partial rate asymmetries are summarized in Table I. We conclude that the B⁺ → ηπ⁺ branching fraction is larger than that of B⁺ → ηK⁺. The measured B⁺ → ηπ⁺ branching fraction is consistent with an earlier result published by the BaBar Collaboration; however, unlike the large negative A_CP measured by BaBar, the central value in this analysis is small and positive, and is consistent with no asymmetry. For the decay B⁺ → ηK⁺, the measured branching fraction is 40% lower than the published result of the BaBar experiment, corresponding to a 1.3σ deviation. It is interesting to note that, although the errors are still large, both experiments suggest a large negative A_CP value for B⁺ → ηK⁺, which is anticipated by some theories [18]. No significant signals are found in neutral B → ηh decays, and upper limits at the 90% confidence level are given.
"year": 2004,
"sha1": "e3348a8b7cd6f0f9989a0cc284ae13db455d5260",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ex/0412043",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e3348a8b7cd6f0f9989a0cc284ae13db455d5260",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Laboring Alone: Perinatal Outcomes during Childbirth without a Birth Partner or Other Companion during the COVID-19 Pandemic
During the first wave of the COVID-19 pandemic in the spring of 2020, the government of the Czech Republic issued a nationwide ban on visitors to maternity wards. We studied whether the absence of a close person during labor due to this ban impacted perinatal indicators. This study was performed using an administrative observational questionnaire, focused on absolute frequencies of events, sent to maternity facilities across the Czech Republic. Completed answers were received from 33 facilities, covering 4805 births during the study period in 2019 and 4514 births in 2020. The differences in individual parameters were tested using Pearson's chi-squared homogeneity test. There were no significant differences between the two periods in spontaneous pre-term births (p = 0.522) or in the number of cesarean sections (p = 0.536). No significant changes were seen in either local or systemic analgesia. Data showed a significantly shorter (p = 0.026) first stage of labor in 2020 compared to 2019, while there was no significant difference (p = 0.673) in the second stage of labor. There was no statistically significant difference found for newborn perinatal adaptation. There were also no significant differences in intrapartum maternal injuries. Overall, we found no significant differences in basic perinatal indicators during the first wave of COVID-19 in 2020 compared to 2019. Although the absence of a close person may cause stress for laboring women, it does not impair objective clinical outcomes.
Introduction
Childbirth in modern hospital settings is a highly standardized process, with stringent requirements placed on the quality of obstetric care, and with health care quality also emphasizing non-medical issues. In many countries, it is common for the child's father or another close person to be present during labor and delivery in any maternity facility [1]. Due to ethical considerations, it is normally impossible to perform studies on the contribution of a partner's presence during childbirth. However, a significant benefit to expectant mothers' well-being has been repeatedly mentioned [2,3]. In the Czech Republic, fathers benefit from the right to two weeks of paid paternity leave based on their sickness insurance; they can start their paternity leave on any day they choose within six weeks of the child's birth. This measure has also contributed to developing an environment where the father's presence at childbirth is considered an important contribution to supporting family bonds between the mother, father, and child [4,5]. Our literature search identified several studies on the presence of a close person at birth; however, none of the good-quality studies dealt explicitly with the absence of a companion at birth, as it has probably not been possible to construct such a study design for ethical reasons.
In the spring of 2020, strict anti-COVID-19 measures were adopted in the Czech Republic that included a ban on the presence of any person, including fathers, in maternity facilities, which allowed us to study the impact of the father's absence during childbirth on obstetric care outcomes. The authors stress the importance of the World Health Organization (WHO, Geneva, Switzerland) recommendations on intrapartum care for a positive childbirth experience, where the practical and emotional support from a birth companion(s) and kind, technically competent clinical staff are highlighted, among others such as respectful maternity care, effective communication, and continuity of care [6].
The worldwide epidemic of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) caused COVID-19 to spread across Europe at the beginning of 2020 and led to a completely unprecedented response in health and societal systems. The first case of COVID-19 in the Czech Republic was identified at the beginning of March 2020, and the disease began to spread. As a result, the Czech government announced a state of emergency on 12 March 2020 for 30 days [7], fundamentally limiting the social and economic life of the country. The state of emergency was repeatedly prolonged until 17 May 2020 [8]. As part of this situation, the Ministry of Health of the Czech Republic prohibited visits to inpatient facilities in the interest of public health [9]. This extraordinary measure also covered the presence of a partner or a close person during labor and birth. As a result, from 18 March to 16 April 2020, women in the Czech Republic gave birth without the presence of the child's father or another partner/person. This was due to concerns about the transmission of infection from persons accompanying the woman at delivery to the staff of the birthing facilities. Thus, the right of the father to be present at the birth of his child was temporarily overridden by the protection of public health [10]. This was a unique situation in the Czech Republic, unprecedented for many decades. Pregnant women form a significant and relatively large part of the population that is more vulnerable to mental health changes than the general population, and thus they were at risk of becoming victims of measures that were epidemiologically justified but restrictive and highly stressful in their consequences.
Research to date has shown that this period was associated with negative outcomes on physical and psychological health, as well as changes in the use of health care, including fear of visiting medical facilities among the population [11]. A review published by Connor et al. [12] found that during the COVID-19 pandemic, women, in general, were exposed to a significant worsening of multifactorial stress. Pregnant women, whose changes in behavior and mental health have often been described, were monitored in particular during this period of increased psychological strain [13]. Caparros-Gonzalez et al. [14] stated that stress and depression in connection with pregnancy and labor during the COVID-19 pandemic were found worldwide, regardless of geographic or cultural factors.
Preis et al. [15] studied two types of stress situations associated with labor during the first few months of the pandemic. They distinguished the stress that is typically associated with the process of labor, so-called preparedness stress, and the stress from the fear of possible infection during the COVID-19 pandemic, so-called perinatal infection stress. Based on a questionnaire, the authors found that almost 18% of women reported a high level of both types of stress and that the perinatal infection stress was higher than the preparedness stress. In another questionnaire study of 336 pregnant women in Israel in March 2020, Taubman-Ben-Ari et al. [16] described a high level of stress in all areas of life, including worry about modes of travel, time spent in public places, fear of infecting family or the fetus, as well as fear of antenatal examinations and the birth itself. Karavadra et al. [17] used a thematic qualitative study to assess the experiences of pregnant women in association with COVID-19 and access to health care in Great Britain. The women reported fears about changes in the availability of services, including supportive care and the lack of the father during birth. The authors found that the worry of study participants about the lack of a partner during labor and birth included fears of being alone during birth if complications should arise. Surprisingly, the women were not worried about becoming infected and developing COVID-19 themselves but were worried about possible infections in their children. A high level of stress among pregnant women was also found in an Italian study [18]. Other studies [19] found that there were changes in the care provided to pregnant women during the COVID-19 pandemic. Such changes included both the presence of a partner during the birth itself (i.e., during the birth and one hour after), as well as limited rights of women for longer-term accompaniment, for example, during examinations, as was the case in Great Britain, France, and The Netherlands [20]. A more recent study on pregnant women during the COVID-19 pandemic in Denmark [21] showed that the greatest concern expressed in the survey was the risk of giving birth without the partner being present due to restrictions from the authorities.
Changes to the care provided, including limiting the presence of a partner during birth, are assumed to be an important stress factor for pregnant women, but robust evidence is lacking on how this influenced the health of pregnant women and their newborns. Most of the literature has focused on mothers who were ill with COVID-19, healthy newborns who were COVID-19 positive, and comparisons of births during the COVID-19 pandemic and previous epidemics such as SARS, which occurred under markedly different conditions. Considering the urgency and unexpectedness of this issue, most published information has significant limitations, but it is, nevertheless, clear that limiting or completely preventing the presence of a partner during birth is considered a stress factor that negatively influences the psychological and physical health of pregnant women, even though information on the impact of this situation on the health of mothers and their newborns is largely lacking.
Although psychological stress is considered a causative factor in pre-term births, another Danish study based on a national register found a decline in the incidence of extremely pre-term births (<28 completed weeks of gestation) during the first wave of lockdowns. Such a lowering of extremely pre-term births is highly positive, since it decreases both perinatal and postnatal mortality and morbidity. The authors of that study speculated that the reasons for this decline might have been the dramatic lifestyle changes brought by the lockdown, including lower physical activity in pregnant women, as well as the influence of hygienic practices leading to lower exposure to infectious agents in general, since infections are among the major factors inducing pre-term births [22].
The aim of this study is to assess the influence of the extraordinary and unique short-term nationwide ban on the presence of a partner or close person during labor, in the context of the societal stress caused by the COVID-19 pandemic, on perinatal indicators in the Czech Republic from March to April 2020.
Data Collection
To obtain detailed information on possible perinatal complications caused by the ban on the presence of a partner during labor, we designed an observational study targeting all maternity facilities in the Czech Republic. Four weeks after the Ministry of Health lifted the ban and the situation was partially normalized, the researchers sent to all 88 maternity facilities in the country a questionnaire structured to assess basic perinatal indicators (binary or categorical absolute frequency data describing deliveries and their clinical characteristics) during the period from 18 March to 16 April 2020. For control, the same indicators were also collected for the equivalent period during 2019 (with no restrictions due to COVID-19). Replies (the head doctor or the head nurse of each maternity facility was responsible for data collection) were received from 34 facilities, but one was excluded from further analysis because its data were provided only for 2020. Thus, a total of 33 facilities were included in the study (a reply rate of 37.5%), covering 4805 births during the study period in 2019 and 4514 births in 2020. These numbers represented 51% of the total number of births in the Czech Republic during the periods studied. The issue of non-response bias is discussed in the limitations section.
Dissemination Strategy
The questionnaire was sent in electronic form with an accompanying letter by email on 15 May 2020 to all regional perinatologists in the Czech Republic (a traditional communication channel for the Czech Gynecological and Obstetrical Society, Prague, Czech Republic), who then forwarded the questionnaire to the heads of maternity facilities in their region. The cut-off date for returning the questionnaires was 31 May 2020.
Indicators Analyzed
The questionnaire was designed to monitor the total number of births, the incidence of spontaneous pre-term births (<37 + 0 completed weeks of gestation), the incidence of emergency and planned cesarean sections, the number of inductions of labor, the use of analgesia during labor (including methods of systemic and/or regional analgesia), augmentation of labor with synthetic oxytocin, the average length of the first and second stages of labor, the number of instrumental deliveries (forceps and/or vacuum extraction), the incidence of episiotomies (sorted by parity), the incidence of serious injuries during birth, and surgical procedures during the third stage of labor. Attention was given to the postnatal adaptation of the newborn, monitoring the number of newborns with a 5-min Apgar score < 7 and an umbilical cord blood pH < 7.10. To obtain data on maternal blood loss due to peripartum bleeding, the use of supplementary blood products (packed red blood cells (PRBC) and fresh frozen plasma (FFP)) and fibrinogen substitution was monitored. In order to be comprehensive, we also included questions on the incidence of eclampsia, hysterectomies associated with birth, and maternal death.
Statistical Analysis
After receiving completed questionnaires from the individual facilities, a data matrix was assembled. Data on the basic indicators were available from all facilities. For other indicators, data were included only for those facilities that provided the information (sporadically, some data were missing as they are not routinely recorded in the respective facilities). Out of the 51 indicators followed in total, data on 20 were provided by all 33 facilities, whereas data on only four indicators were provided by fewer than 29 facilities (non-pharmacological analgesia: 25; average length of the second stage of labor: 28; number of newborns with umbilical cord blood pH < 7.10: 27; the use of fresh frozen plasma: 27). All indicators, along with the number of facilities providing the respective data, are shown in Table 1. Pearson's χ² homogeneity test in 2 × 2 contingency tables was used to assess changes in individual indicator incidences between 2019 and 2020.
Incidence data were analyzed for the sum of all facilities in the Czech Republic (the nationwide perspective, see Table 1) as well as for each facility separately (discussed below where the results differed from the nationwide ones). For indicators with low numbers of incidence, only combined values for the whole country were used. Tests were performed using Pearson's χ²-test, which compared the incidence for the periods from 18 March to 1 April in 2019 and 2020. Each indicator was coded as present or absent in a two-by-two contingency table. On one occasion, a larger table was used to test more options (the four types of birth).
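For illustration, the sketch below runs the Pearson χ² homogeneity test described above on a hypothetical 2 × 2 table using SciPy; the counts are invented placeholders, not values from the study, and a 2 × k table (e.g., for the four types of birth) is handled the same way.

```python
# Illustrative Pearson chi-squared homogeneity test on a 2x2 table.
# The counts below are hypothetical, not taken from the study data.
from scipy.stats import chi2_contingency

# Rows: 2019, 2020; columns: indicator present, indicator absent.
table = [[460, 4345],
         [430, 4084]]

# correction=False yields the plain Pearson statistic (SciPy otherwise
# applies Yates' continuity correction to 2x2 tables by default).
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
```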
Results
In April 2019, altogether, 9292 children were born in the Czech Republic, while 8859 were born in April 2020. This difference is in line with the inter-year variability (from 2011-2020, the average number of infants born in April is 8999 (95% CI: 8814, 9185)) [23]. The number of children born in the studied periods at individual facilities is shown in Table 1. Comparing the number of cesarean sections with the total number of births for the whole Czech Republic, the difference between 2019 and 2020 was not statistically significant (p = 0.536). This was also the case in most individual facilities, with the exception of an increase in cesarean section rate at Hořovice (p = 0.005), Jilemnice (p = 0.037), and Ústí nad Labem (p = 0.007), and a decrease in cesarean section rate at Přerov (p = 0.008). For a more detailed picture, we tested the changes in the mean of four values: vaginal full-term births, vaginal pre-term births, emergency cesarean sections, and planned cesarean sections (at facilities with a low number of pre-term births, only three values were tested, combining full-term and pre-term vaginal births; for three facilities, Hořovice, Prachatice, and Ústí nad Orlicí, numbers of pre-term vaginal births and planned cesarean sections were also too low to analyze them separately). From the national perspective, there was a statistically significant change (p = 0.030), caused mainly by the decline in emergency cesarean sections and an increase in planned cesarean sections. At individual facilities, statistically significant changes were found at Hořovice (p = 0.019; an increase in emergency and planned cesarean sections and an associated decline in vaginal births); Pardubice (p = 0.025; an increase in pre-term vaginal births); Přerov (p = 0.007; a decline in emergency cesarean sections); Třinec (p = 0.026; an increase in planned cesarean sections); and Ústí nad Labem (p = 0.002; a decline in pre-term vaginal births and an increase in emergency cesarean sections).
To compare the course of labor and delivery, we analyzed the incidence of extraction methods (using forceps or vacuum extraction) compared to the total number of vaginal births. Due to low numbers of such methods at individual facilities, only values from all facilities combined were analyzed. There was a statistically significant increase in the use of these methods (p = 0.001). This was primarily due to an increase in the use of vacuum extraction, from 104 to 150 cases, accompanied by the decline in the total number of births (with the largest changes at the facilities of České Budějovice, Plzeň, and Olomouc). We also analyzed the incidence of induction of labor compared to the number of vaginal births and emergency cesarean sections. For all facilities combined, there was a statistically significant decline in the incidence of inductions (p = 0.016), mainly due to declines at the facilities at Karlovy Vary (p < 0.001) and Ústí nad Orlicí (p = 0.040); at other facilities, there were no significant changes (p > 0.05) or the numbers of inductions were too low to analyze (Jihlava, Prachatice).
As for the use of analgesia, there were highly significant differences among individual facilities. The results show that each obstetric facility used quite different procedures. The number of each type of analgesia was compared to the total number of vaginal births. For all facilities combined, no significant difference was found (non-pharmacological analgesia p = 0.595; systemic analgesia p = 0.964; regional analgesia p = 0.677; no analgesia p = 0.934). However, this was caused by the averaging of different results across facilities: at individual facilities, there were both significant increases and significant decreases in the types of analgesia used, and for the categories "Non-pharmacological analgesia" and "No analgesia", the use of these methods ranged from "Always" to "Never".
The incidence of the use of episiotomy was compared with total vaginal births. Data were available for all participating facilities except for Plzeň, which provided only total values without specifying the parity of mothers (primiparous and multiparous mothers). Nationwide, there was a statistically significant decline (p = 0.022), which was caused by values for multiparous mothers (p < 0.001) rather than for primiparous mothers (p = 0.122). For individual facilities, there was a significant increase in the number of episiotomies found at Kolín (p = 0.05; i.e., at the border of significance), Písek-a significant increase for multiparous mothers (p = 0.047), and VFN Prague-a significant increase for primiparous mothers (p = 0.006), but a significant decline for multiparous mothers (p < 0.001) as well as in the total incidence (p = 0.001).
Analyzing the incidence of surgical procedures during the third stage of labor compared to the total number of vaginal births, there was no statistically significant difference between 2019 and 2020; the same held for the incidence of third- and fourth-degree perineal tears compared to the total number of vaginal births. There was no statistically significant difference found in the number of newborns with a 5-min Apgar score < 7, the number of newborns with umbilical cord blood pH < 7.10, the use of supplementary blood products (PRBC, FFP), or the use of supplementary fibrinogen; all these indicators were compared to the total number of births. Furthermore, there was no incidence of eclampsia reported by the hospitals for either 2019 or 2020 (data from Plzeň was lacking). Hysterectomy associated with birth was reported only in rare cases (2019: Olomouc, Pardubice, Třinec (3 cases in total); 2020: Šternberk, 2 × UPMD, VFN Prague (4 cases in total)). Two cases of peripartum maternal deaths were reported in 2020 (at Jihlava and VFN Prague), while there was no case in 2019.
Discussion
During the first wave of the COVID-19 pandemic in the Czech Republic, a nationwide ban was introduced that included a ban on the presence of a partner or other close person in maternity wards. This provided the opportunity to compare basic perinatological parameters during two comparable periods differing in the presence of a birthing partner during labor and delivery. Given the size of the research sample, the results of the study can be considered nationally representative; 8859 children were born in the Czech Republic during the period studied in 2020 compared to 9292 a year earlier. This difference corresponded to the long-term year-to-year fluctuation of the birthrate. There was no statistically significant change (p = 0.536) in the number of cesarean sections over the periods studied. Regarding extraction methods, there was a statistically significant increase in the frequency of these methods (p = 0.001); however, the change was due to the situation in two facilities only, and the reason was not specified by these facilities. As for analgesia, there was no statistically significant change during the observation periods. Overall, there was a statistically significant reduction in the length of the first stage of labor and a decrease in the incidence of episiotomies (p = 0.022), which was due to values in multiparous women (p < 0.001), while there was no statistically significant change in primiparous women (p = 0.122). The explanation for the discrepancy in the incidence of episiotomies between primiparous and multiparous women could not be sufficiently clarified within the study design, nor was it a primary objective. There was no statistically significant difference in the frequency of surgical intervention in the third stage of labor analyzed in relation to the total number of vaginal deliveries. Except for the increase in vacuum extraction, no difference suggesting poorer outcomes was found in any of the perinatological parameters studied.
The impact of stress in the context of the COVID-19 pandemic on mothers and births has been discussed by Matvienko-Sikar et al. [24], who described the impact of stress on mothers and called for essential psychological support for mothers by healthcare professionals. However, they did not report a worsening in maternal outcomes due to COVID-19, in agreement with the conclusions of Elshafeey et al. [25]. Gausman and Langer [26] highlighted the disproportionate impact of the pandemic on women in the context of COVID-19, citing, among other factors, the fact that women give birth in pandemic settings without social support. A more recent study by Harrison et al. [27] found negative relationships between perceived social support and depression and anxiety in a sample of women who were pregnant during the COVID-19 pandemic, indicating that women with lower levels of perceived support experienced more depression and anxiety symptoms, in alignment with research conducted prior to the pandemic. Mollard and Wittmaack [28] suggested that pandemic-related changes to maternity care practices may have impacted birthing women's perceptions of safety and support in the hospital environment and affected symptoms of stress. On the one hand, the available literature shows that there is no evidence of poorer obstetric outcomes during the COVID-19 pandemic; on the other hand, it also shows that maternal stress has been substantial during the pandemic and that a responsive and understanding approach by health care professionals is needed.
In the Czech Republic, the ban on a partner during childbirth provoked a strong wave of emotions and negative reactions across the country. The presence or absence of the father during the labor and delivery of their children became a frequent topic in both traditional and new types of media. All this multiplied the antenatal stress levels in women who were due to give birth during this period. Even the Cochrane review [3] was used as an argument; however, the most recent review on continuous support for women during childbirth, from 2017, did not take into consideration issues such as the COVID-19 pandemic [29], and the importance of continuous support has never been neglected. Despite the epidemiologically caused absence of a close person at birth, continuous midwifery support was provided to every woman under all circumstances in all facilities. Nevertheless, the hostility and even panic induced in society created an emotionally strained atmosphere in the maternity sector, which may have led to fears of vulnerability and distrust in the practical and psychological support of obstetric teams. After the stabilization of the epidemiological situation in the Czech Republic and after the expansion of expert knowledge on the biological effects of SARS-CoV-2, the strict ban on the presence of the father or another non-hospital birthing partner began to be abolished after four weeks [10].
The number of completed questionnaires returned (34/88) clearly demonstrates that healthcare professionals put a high emphasis on the issue of partners' participation in childbirth. However, the perinatological results obtained in the questionnaire study are, in some respects, rather surprising. Apart from the increase in the frequency of extraction methods, no perinatological results were observed that would demonstrate a negative impact on perinatological outcomes in terms of the absence of a partner or other close person during childbirth, particularly taking into consideration the impact of ante/intrapartum stress, which was extreme due to the measures mentioned.
The results of this study are unique because conducting a randomized control trial on the matter of the absence of a partner at birth is otherwise impossible for ethical reasons, and a recurrence of the national ban on the presence of a partner or close person at birth is very unlikely to occur in the foreseeable future.
Limitations of the Study
The first limitation of our work is that our study did not address the possible multifactorial subjective psychological impact on labor; we focused primarily on objective clinical indicators.
Another limitation is the non-response bias. The data cover the biggest maternity hospitals in the country but exclude data from some facilities. However, data were provided by facilities from 12 out of 14 administrative regions of the Czech Republic (the two missing are the Hradec Kralove Region and Zlín Region) and include all types of maternity hospitals ranging from university hospitals to regional hospitals to municipal or private maternity facilities. We conducted our analysis not only based on nationwide data but also for each participating maternity facility individually, and if the results differed, we discussed them in the text. Thus, all trends could be captured, and the results are quite representative of the Czech Republic.
The time period in which the data were obtained was also relatively short. In addition, 8 of the 12 perinatology intensive care centers and 9 of the 13 perinatology intermediate care facilities returned completed questionnaires.
A final limitation is an inability to assess the impact of this measure in comparison with other epidemiological measures in the context of the first wave of the COVID-19 pandemic in the Czech Republic.
The impact of the absence of a close person at birth in other countries where their presence is not part of the standard approach to childbirth, whether for cultural, religious, economic, or other reasons, is not reflected in our work and requires a considered approach.
Conclusions
This study offers first insights into a situation where the presence of a person other than healthcare professionals was not possible during labor and at birth for a limited period. Women giving birth alone (i.e., without a non-hospital birthing partner) due to the first wave of the COVID-19 pandemic did not show different perinatological outcomes compared to the same period in 2019. During the period of specific government-mandated anti-epidemic measures, we did not observe changes in the incidences of cesarean sections or other types of operative deliveries, or increases in pre-term births, the duration of the second stage of labor, the use of synthetic oxytocin for augmentation, the need for the administration of analgesics, the incidence of fetal hypoxia, the incidence of episiotomies or other birth injuries, blood loss, or neonatal complications requiring special care. There was only a local increase in the number of vacuum extractions. There were significant decreases in the number of inductions of labor, the duration of the first stage of labor, and episiotomies in multiparous women. In terms of the perinatological parameters assessed, the absence of a partner or other close person when giving birth during the COVID-19 pandemic did not cause a deterioration in the quality of perinatal care provided.
Informed Consent Statement:
Patient consent was waived due to the anonymous character of the collected data, which are routinely collected within the Czech Republic for the needs of national registers.
Data Availability Statement:
The dataset used and analyzed during the current study is available from the corresponding author upon making an official request addressed to the Department of Obstetrics and Gynecology, First Faculty of Medicine, Charles University and General University Hospital in Prague.
"year": 2023,
"sha1": "582144f1e73d30a3da92fb748b73d108f8a3f1c6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ijerph20032614",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f6733da2acfc7437e0e9416bb6e2830b7d9e2cb5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Posterior interosseous nerve syndrome due to intramuscular lipoma
Lipomas are extremely common benign soft tissue tumors that are usually subcutaneous and asymptomatic. However, an intramuscular lipoma occurring adjacent to the proximal radius may easily cause paralysis of the posterior interosseous nerve because of the specific anatomical relationship of these structures in that area. In this report, we describe an unusual case of a 48-year-old woman with posterior interosseous nerve syndrome due to an intramuscular lipoma. The patient had a good recovery after surgery and rehabilitation physiotherapy.
Introduction
Lipomas are the most common benign soft tissue tumors and usually occur in the subcutaneous tissue. Rarely, lipomas present in deep soft tissue sites such as intermuscular, intramuscular, and parosteal locations. They tend to be indolent, and symptoms caused by nerve compression are unusual. However, an intramuscular lipoma occurring in the proximal forearm may cause paralysis of the posterior interosseous nerve (PIN) because of its anatomical relationship in that area. Below the elbow, the PIN passes beneath the extensor carpi radialis brevis muscle and then continues between the superficial and deep layers of the supinator muscle. The proximal edge of the supinator muscle forms an arch, the arcade of Frohse [1]. The PIN is vulnerable in this region [2]. We report a new case of an intramuscular lipoma with compression of the PIN.
Case report
A 48-year-old woman presented with a spontaneous inability to extend the fingers of the left hand. She noticed a gradually increasing inability to extend the fingers over 6 months. She was unable to perform her domestic activities because of weakness and paresis of the hand; she denied any trauma or other diseases. Examination revealed a swelling of 5 × 3 cm in the anterolateral aspect of the left forearm in the region of the brachioradialis muscle just below the elbow. The swelling was firm in consistency, immobile, and not fixed to the skin; there were no dilated veins over the swelling or signs of inflammation. The elbow function was normal, as was the flexor strength of the wrist and fingers; however, there was a decrease in the extension strength of the wrist and metacarpophalangeal joints of the fingers and thumb (power 3/5). There was no sensory deficit. Magnetic resonance imaging of the elbow revealed an intramuscular mass with an 8-cm long axis arising from the supinator muscle, pathognomonic of a lipoma (Figs. 1 and 2). Surgical excision was recommended. General anesthesia was administered, and an incision was made over the mass. The incision of the supinator muscle exposed a fatty encapsulated mass (Fig. 3). The dissection of the mass revealed that it was constricting the PIN (Figs. 4 and 5). The nerves were carefully released, and the lipoma was removed. The PIN was preserved, without damage (Fig. 6).
Histological examination of the tumor confirmed it to be a benign lipoma (Figs. 7 and 8). In the early postoperative period, the radial nerve recovered its function. A physiotherapy program was started 2 weeks after surgery. The patient recovered well, and 6 weeks after surgery, she resumed her activities. No local recurrence was detected at 18 months after surgery.
Discussion
Symptomatic radial nerve compression is relatively uncommon, and when it is caused by a lipoma, it commonly occurs at the elbow level, compromising the posterior interosseous branch [3,4]. Lipomas are benign tumors composed of mature adipocytes, and they represent one of the most prevalent tumors of mesenchymal origin [4][5][6]. Lipomas and other tumors over the radial nerve are rare causes of chronic entrapment of the PIN, but they can produce a classic picture of PIN syndrome (PINS). There are some reports of compression neuropathies [7][8][9][10] of the upper limb caused by this kind of tumor. Other causes of PIN compression have been described: rheumatoid synovial cysts [11], ganglion [12][13][14], myxoma [15], pseudogout [16], and chondroma [17], among others. Intramuscular lipoma shows an infiltrative nature to the surrounding striated muscle, and the lesion is usually not encapsulated [18], although the pathogenesis of an intramuscular lipoma remains obscure [18].
The diagnosis of PINS is based on clinical history and physical examination and is confirmed by electrophysiological studies. Classically, this syndrome involves neither pain nor other sensory symptoms, but there are cases of forearm pain and paralysis of the extensor muscles of the forearm. If there is any suspicion, based on clinical examination, of a mass as the causative factor of PINS, MRI scan is the imaging method of choice for evaluating its presence and extent. In this clinical case, the lipoma was located intramuscularly, and there was a palpable mass on the forearm.
Surgical excision of an intramuscular lipoma is recommended to prevent involvement of the PIN or to ensure optimal recovery when the nerve is already compressed by the tumor [19]. The recovery of the neurological deficit relates to the duration of symptoms, the longest reported duration of symptoms with full recovery postoperatively being 18 months [20]. The prognosis after excision of a lipoma is excellent, with only one recurrence described in the literature. Malignant transformation has not been reported.
Jürgens and Hampt, in a study of 20 patients with PIN paralysis, concluded that the result of the operation depended on the duration of the symptoms, so that long-lasting paralysis made reinnervation less likely to occur [21]. According to the study by De-song et al., early diagnosis and surgery are very important in the treatment of PINS [22]. Prompt diagnosis and early removal of the compressing mass facilitate quick neurological recovery.
Conclusion
The prognosis of PINS depends on an early diagnosis, followed by an immediate surgical excision based on MRI findings. The paralysis of the PIN is clinically evident; electrophysiological exploration would confirm the diagnosis and the site of entrapment. After surgical excision of the mass, rehabilitation is needed for rapid functional recovery of the upper extremity.
"year": 2013,
"sha1": "86694e113df4005f5b00dab1f93aaf61be7e50a0",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12570-013-0203-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "86694e113df4005f5b00dab1f93aaf61be7e50a0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Expression of epidermal growth factor in transgenic mice causes growth retardation.
The epidermal growth factor (EGF) family of peptides signals through the erbB family of receptor tyrosine kinases and plays important roles in development and tumorigenesis. Both EGF and transforming growth factor (TGF)-alpha only bind to erbB1 and activate it. The precursor of EGF is distinct from that of TGF-alpha in having eight additional EGF-like repeats. We have recently shown that the EGF precursor without these repeats is biologically active and leads to hypospermatogenesis in transgenic mice. Here we present evidence that the growth of transgenic mice widely expressing this engineered EGF precursor is also stunted. These mice were consistently born at half the normal weight and reached almost 80% of normal weight at adulthood. The mechanism involved a reduction of serum insulin-like growth factor-binding protein-3. Chondrocyte development in the growth plate was affected, and osteoblasts accumulated in the endosteum and periosteum. Besides these novel findings on the in vivo effects of EGF on bone development, we observed no sign of tumor formation in our transgenic animals. In contrast to previous reports on TGF-alpha transgenic mice, we show that the biological functions of EGF and TGF-alpha are clearly distinct.
Epidermal growth factor (EGF) 1 was initially identified from mouse submaxillary gland extract as a stimulator of eyelid opening and incisor eruption when injected into newborn mice and rats (1). Mature human EGF is composed of 53 amino acids but is derived from a much larger transmembrane precursor of 1207 amino acids (2). It belongs to the EGF family of peptides that signals through the erbB receptors, with EGF receptor being the prototype (3). EGF is released from its precursor by a specific arginine estero-peptidase that, in many cells, appears to be limiting (4). However, processing occurs in granular convoluted tubules of the submandibular gland, and EGF is released mainly into saliva (5).
Transforming growth factor (TGF)-α binds to the EGF receptor with an affinity similar to that of EGF, and the two share many biological effects. TGF-α is a 50-amino acid polypeptide derived from a 160-amino acid membrane-bound precursor. It was initially isolated as one of the transforming peptides from sarcoma virus-transformed fibroblasts (6). EGF, TGF-α, and amphiregulin only bind and activate EGF receptors (also called erbB1 and HER1) (7), and they are referred to as group one of the EGF family. In recent years, information on the EGF family and erbB receptor family has expanded rapidly. In in vitro studies on cells expressing multiple erbB family members, signal specificity was shown to be controlled by ligand specificity as well as receptor homo- and heterodimerization (8).
Important information on the in vivo functions of erbB signaling has been gained from transgenic mice that overexpress the ligand as well as loss-of-function mutants (9). Mice deficient in one or all three of EGF, TGF-α, and amphiregulin revealed their distinct role in mammary gland development (10). Mice without TGF-α or with a mutant EGF receptor showed an identical phenotype of affected hair and eyelid development (11)(12)(13). In addition to these defects, mice with a null mutation in EGF receptor died at peri-implantation, midgestation, or shortly after birth, depending on their genetic background (14-16). In cancer tissues, overexpression of EGF receptor, TGF-α, and amphiregulin, but not EGF, is frequently found (7). In agreement with this observation, transgenic mice overexpressing TGF-α showed epithelial hyperplasia of several organs, pancreatic metaplasia, and breast carcinoma (17,18). Amphiregulin was found to be a preneoplastic tumor marker in transgenic models of mammary tumors, including transgenic mice of TGF-α and erbB2 (19). Expression of amphiregulin in basal keratinocytes induced a psoriasis-like phenotype in transgenic mice (20). To provide further information on the physiological and pathological roles of EGF and to distinguish its in vivo effects from those of other EGF receptor ligands, we have generated transgenic mice widely expressing a shortened human EGF precursor (hEGF). The eight EGF-like repeats were deleted, leaving the active EGF domain in the transmembrane form. This would release the effect of the EGF-like repeats, if any, on the exposure of the EGF domain and allows direct comparison of its effects with TGF-α. Our previous study has shown that hEGF, like the full-length precursor, is biologically active in transforming NIH3T3 (21).
Various in vitro studies have shown that EGF reduces synthesis of insulin-like growth factor (IGF) and IGF-binding protein-3 (22,23). In vivo, IGF action is influenced by the IGF-binding proteins (IGFBPs). Six IGFBPs have been found that differ in their influence on IGF activity. Besides increasing the half-life of IGFs in circulation, IGFBPs can potentiate activities of IGFs on cell proliferation. In addition, IGF-independent regulatory mechanisms of IGFBPs have been described. IGF-independent growth inhibition by IGFBP-3 is believed to occur through IGFBP-3-specific cell surface association proteins or receptors and involves nuclear translocation (24). Several transgenic mouse models overexpressing IGFBP-1, -2, -3, or -4 have been developed over the past few years (25). The overexpression of IGFBP-3 under the control of a ubiquitous promoter resulted in selective organomegaly (26). Recent data indicate that low levels of IGFBP-3 are associated with stunted growth and an increased risk of at least several types of carcinoma that are common in economically developed countries (24,27). Additional studies are required to determine the clinical relevance of these findings.
To elucidate the role of EGF in vivo, we have recently overexpressed hEGF in transgenic mice. The two major phenotypes were infertility and stunted growth (28). Here we investigated the possible mechanisms leading to the growth problem and the relationship between EGF and IGFBP-3.
EXPERIMENTAL PROCEDURES
Generation of Transgenic Mice-The procedures for microinjection have been described previously (29). The DNA construct consisted of hEGF with the β-actin promoter to give widespread expression in transgenic animals. The eight EGF-like repeats in the extracellular domain of hEGF were removed as described previously (21). Transgenic mice were characterized by Southern analysis, immunoblotting, and immunohistochemistry of hEGF (28,30).
Radioimmunoassay of Serum IGFBP-3-Blood was collected by cardiac puncture immediately after the animal was sacrificed by cervical dislocation. The blood samples were allowed to clot for 15 min on ice, and then serum was collected by centrifugation. Aliquots were stored at −20°C. Serum IGFBP-3 was measured using undiluted serum with the immunoradiometric assay kit from Diagnostic Systems Laboratories, Inc. Controls were age- and strain-matched normal mice including nontransgenic littermates. The statistical difference between the transgenic and control groups was analyzed using the Mann-Whitney test.
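For illustration only, a minimal sketch of the Mann-Whitney group comparison described above; the serum values below are invented placeholders, not the measured data.

```python
# Illustrative Mann-Whitney U test comparing two small groups of
# serum IGFBP-3 values (ng/ml). All numbers are hypothetical.
from scipy.stats import mannwhitneyu

igfbp3_transgenic = [95.0, 140.2, 210.7, 349.0]                 # n = 4 (illustrative)
igfbp3_controls = [512.3, 430.1, 389.6, 401.8, 455.0, 377.2,
                   498.4, 360.9, 422.7, 471.5, 338.8, 443.0]    # n = 12 (illustrative)

u_stat, p_value = mannwhitneyu(igfbp3_transgenic, igfbp3_controls,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```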
Histology of Long Bone in Postnatal Mice-The hind limb was dissected away from tendon and muscle and then fixed in 4% paraformaldehyde overnight at 4°C. Bone was decalcified using a procedure described in Ref. 31 modified as follows: bone was washed four times with water for 15 min each and then immersed in 20% EDTA in water and kept at 4°C. EDTA solution was changed every second day during the first week and every third day during the second week or a longer period. Finally, bone was washed for a total of at least 6 h with five changes of water before embedding in fibrowax (Gurr, BDH). Sections were cut at 6-μm thickness.
Immunohistochemistry-Antigen detection was based on the streptavidin-biotinylated peroxidase system (Dako). The procedures have been described in detail previously (32). To detect human but not mouse EGF protein, the polyclonal antibody Ab-3 (Calbiochem) was used at a dilution of 1:1000 for bone sections. Sections from nontransgenic mice were used as negative controls. Endogenous EGF expression was identified using a polyclonal anti-mouse EGF antibody (Serotec) at a dilution of 1:500 and 1:1000. To confirm the specificity of signals obtained with anti-mouse EGF antibody, the diluted antibody was preincubated with 10 mM murine natural EGF (Life Technologies, Inc.) overnight at 4°C before use.
Reverse Transcription-PCR-Expression of EGF was studied in a chondrocyte cell line, MCT, which was derived from mouse rib primary chondrocytes immortalized with temperature-sensitive large T antigen. At a nonpermissive temperature of 37°C, these cells stop growing and acquire characteristics of hypertrophic chondrocytes (33). Total RNA was extracted, DNase-digested, quantified by absorbance at 260 nm, reverse-transcribed with oligo(dT), and PCR-amplified as described previously (34). EGF was amplified with primers 5′-GAGAATTCCGCCTGCACCAAC and 5′-TCGTCGCCCCGTCGACACCTAGG. After 30 cycles of 94°C for 30 s, 57°C for 30 s, and 72°C for 1 min, half of the product was electrophoresed. cDNA samples from mouse adult kidney and embryos at day 17.5 were used as positive controls. Amplification with primers for hypoxanthine phosphoribosyl transferase (hprt), 5′-CCTGCTGGATTACATTAAAGCACTG and 5′-GTCAAGGGCCATATCCAACAACAAC, served as a PCR control. After 30 cycles (94°C for 30 s, 55°C for 30 s, and 72°C for 1 min), one-eighth of the product was electrophoresed.
RESULTS AND DISCUSSION
We have recently reported the generation of EGF transgenic mice. They all expressed human EGF protein at high levels in various organs, and their fertility problem has been reported previously (28). Growth rate and body weight were compared with those of nontransgenic littermates. All transgenic animals were born at only half the weight of their normal littermates. They caught up by day 20 and reached 78% of the weight of nontransgenic littermates at adulthood (Fig. 1A). The transgenic mice appeared to be proportionate dwarfs. In the current study, we focused on investigating the mechanism leading to stunted growth.
EGF Reduced Serum IGFBP-3-IGF-I is known to be a mediator of growth hormone action in pubertal growth (35). It also acts from gestation day 13.5 onward in prenatal mice in a growth hormone-independent manner, whereas IGF-II controls growth earlier in gestation (36). In humans, IGF-I, but not IGF-II, has also been shown to be involved in the control of fetal size during the later months of intrauterine life (37). The molar concentration of serum IGFBP-3 roughly equals the sum of the IGF-I and IGF-II molar concentrations (38). We speculated that EGF exerted its effect on growth through the IGF system, and we measured the concentration of serum IGFBP-3. Serum IGF-I could not be reliably quantified with the system we have been using for measuring human IGF-I. The mean IGFBP-3 level of transgenic mice (182.5 ± 94.4 ng/ml; n = 4 founders; 2-9 months old) was significantly lower (p = 0.0011) than that of normal adult mice (425.1 ± 74.6 ng/ml; n = 12; 8-9 months old; Fig. 1B). One female transgenic founder was sacrificed at 2 months in the pilot study to detect transgene expression and found to have embryos at around day 7.5 of gestation. Serum IGFBP-3 level increased with pregnancy (39). Still, its value (349 ng/ml) was relatively lower than that of nonpregnant controls. The results suggest that the action of EGF on growth was mediated at least in part through decreasing serum IGFBP-3. All transgenic mice expressed hEGF in various organs to a similar level as judged by Western analysis (28). Our data suggested that EGF may change the production/secretion of IGFBP-3 in liver and kidney. Transgenic mice overexpressing different IGFBPs have been very useful for addressing the specific functions of IGFBPs (25). Overexpression of IGFBP-3 resulted in selective organomegaly that differed from the major sites of transgene expression (26). We believe that in our transgenic mice, reduced serum IGFBP-3 is the result of EGF overexpression rather than a secondary effect of growth retardation. In a recent study (40), EGF administered for 7 days to young adult rats was shown to significantly lower IGFBP-3 levels to 44% of control values without affecting the body weight, whereas circulating IGFBP-1 and -2 levels were unaffected. It has also been shown by Frystyk et al. (41) that injection of EGF for 4 weeks into adult rats decreased serum IGF-I and IGFBP-3. The authors discussed that most in vitro studies, including those on hepatocytes, reported an increase in IGF-I after EGF stimulation. The discrepancies between in vivo and in vitro studies may be explained by changes in IGFBPs. In both situations, EGF reduced IGFBP-3. In vitro, reduced IGFBP-3 would increase free IGF-I. In vivo, reduced IGFBP-3 would decrease circulating IGF-I because most IGF-I is bound to IGFBP-3 (41). In transgenic mice overexpressing interleukin-6, growth impairment was also correlated with reduced IGF-I (42). In IGF-I null mutants, the mice were smaller from embryonic day 12.5 (36). In our case, EGF also acted prenatally because we noticed that all transgenic mice identified at weaning were small from the day of birth. Our data are in agreement with the hypothesis that EGF affects the production/secretion of IGFBP-3, hence decreasing the availability of IGFs and resulting in slower growth before and after birth.
FIG. 1. Reduced body weight and IGFBP-3 in transgenic mice. A, the ratio of mean weight of transgenic mice (n = 4 founders) to that of wild type littermates (n = 4) at various time points after birth. B, serum IGFBP-3 levels of the four above-mentioned founders as determined by radioimmunoassay. The wild type value was obtained from 12 mice including their 4 littermates. Values shown are the mean ± 1 S.D.
Abnormal Proliferation of Osteoblasts-To gain further insights into the effects of EGF overexpression on bone development, we investigated the histology of long bones of transgenic mice. In wild type mice, osteoblasts were found as an even lining along the bone cortex both on the outer surface (periosteum) and inner surface along the marrow cavity (endosteum). In transgenic mice, hEGF immunostaining was found in both the periosteum (Fig. 2A) and the endosteum (Fig. 2B). In addition, abnormal accumulation of osteoblasts in the periosteum and/or endosteum was found in some areas (Fig. 2D). This imbalance in bone remodeling, however, did not result in thickening of the cortical bone. In contrast, we found that the thickness of the cortical bone in transgenic mice was reduced compared with that of normal mice (data not shown). It has been shown that in cultured fetal rat long bone, EGF stimulated thymidine incorporation at a low concentration, whereas it stimulated bone resorption at a higher concentration (43). The long bone has also been shown to harbor EGF receptors in osteoblast-like cells (44). Our data raised the possibility that EGF overexpression increased osteoblast proliferation in vivo.
Endogenous EGF Is Expressed Mainly in Hypertrophic Chondrocytes-Unlike normal mice at 6 months of age (Fig. 3A), the growth plate of our transgenic animals still contained columns of chondrocytes consisting of a considerable number of prehypertrophic chondrocytes (Fig. 3B). However, the signal of hEGF immunostaining in the growth plate of our transgenic animals was too weak to be detected. Ideally, the growth plate of younger transgenic animals should be studied. To gain insight into the normal role of EGF in bone development, we studied endogenous EGF expression in the growth plate of fetal (day 14.5-17.5), 2-day-old, 2-week-old, and 4-week-old mice. EGF was strongly expressed in some proliferating and all hypertrophic chondrocytes at all stages studied (Fig. 4, A and B). The specificity of immunostaining was shown by the fact that it could be blocked by preabsorbing the antibody with 10 mM EGF. Similar results were obtained by Tajima et al. (45), who reported staining in resting, proliferating, and hypertrophic zones of the adult mouse femur epiphyseal plate. We further substantiated our findings by studying the expression of EGF in a mouse chondrocyte cell line, MCT. At a nonpermissive temperature of 37°C, the cells stop growing and express molecular markers of hypertrophic chondrocytes such as type X collagen and osteopontin (33). By reverse transcription-PCR, we found EGF expression only when MCT cells differentiated to hypertrophic chondrocytes at 37°C (Fig. 4C). Although TGF-α expression has been reported in a number of cell lines, to our knowledge, cell lines expressing EGF are rare. We suggest a specific role for EGF in the last stages of chondrocyte differentiation. The MCT cell line will allow us to study the regulation of EGF production and its role in chondrogenesis.
FIG. 4. Endogenous EGF expression in hypertrophic chondrocytes. A, EGF immunostaining (brown) was found in some proliferating chondrocytes (arrow) but mainly in hypertrophic chondrocytes (arrowhead). The hind limb of a 14.5-day embryo is shown. B, immunostaining of hypertrophic chondrocytes shown at a higher magnification. The pattern of EGF expression was found to be the same at 2 days, 2 weeks, and 4 weeks after birth (data not shown).
Comparison with TGF-α Transgenic Mice: A Role for EGF in Tumorigenesis?-Because both EGF and TGF-α signal through the same receptor (46-48), we compared the phenotype of our mice with that reported for transgenic mice overexpressing TGF-α or the TGF-α precursor (17,18). Transgenic mice overexpressing TGF-α weighed approximately 10% less than the control mice (49). None of the neoplastic changes reported in liver, coagulation gland, and pancreas of TGF-α mice was observed in our mice at a gross or histological level, despite the expression of hEGF in these organs as detected by immunohistochemistry and/or Western blotting. Indeed, we observed patch necrosis in the liver of all of our transgenic animals (Fig. 5). This was in sharp contrast to liver enlargement and increased proliferation in TGF-α mice (49). These data suggested an important functional difference between EGF and TGF-α.
Of the four features originally observed when EGF was injected into newborn animals, accelerated eyelid opening and incisor eruption were most striking. In addition, abnormal skin structure and stunted growth occurred at high doses of EGF (1). In our transgenic mice, only growth retardation was remarkable and would be attributed to the decrease of IGFBP-3. Because other phenotypes encountered in the previous study were not observed in our transgenic mice, the mechanism of action of EGF on eyelid opening and incisor eruption might be different from that on growth. To our knowledge, this is also the first report on the in vivo effects of EGF on chondrocyte and osteoblast proliferation. During bone development, EGF may play a role in chondrocyte hypertrophy. We also provide in vivo evidence that EGF overexpression did not lead to tumorigenesis in our transgenic animals. Additional studies to reveal the distinct biological effects of EGF and TGF-α in vivo are under way in our laboratory. We are also generating transgenic mice expressing EGF in a tissue-specific manner to distinguish the systemic versus paracrine effects of EGF.
"year": 2000,
"sha1": "91fdf5f82955a6bbc24555ba070752cb857f7475",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/275/49/38693.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "0baff0305597d26b429a99e847aaf5b2b9b87cc5",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Decreased Intrinsic Functional Connectivity of the Salience Network in Drug-Naïve Patients With Obsessive-Compulsive Disorder
Obsessive-compulsive disorder (OCD) patients have difficulty in switching between obsessive thought and compulsive behavior, which may be related to dysfunction of the salience network (SN). However, little is known about changes in the intra- and inter-network intrinsic functional connectivity (iFC) of the SN in patients with OCD. In this study, we parceled the SN into 19 subregions and investigated iFC changes for each of these subregions in 40 drug-naïve patients with OCD and 40 healthy controls (HCs) using seed-based functional connectivity analysis of resting-state functional magnetic resonance imaging (rs-fMRI). We found that patients with OCD exhibited decreased iFC strength between subregions of the SN, as well as decreased inter-network connectivity between the SN and the default mode network (DMN) and executive control network (ECN). These findings highlight a specific alteration in iFC patterns associated with the SN in patients with OCD and provide new insights into the dysfunctional brain organization of the SN in patients with OCD.
INTRODUCTION
Obsessive-compulsive disorder (OCD) is a psychiatric disorder characterized by two symptoms: intrusive, recurrent, distressing thoughts (obsessions) and/or repetitive behaviors (compulsions), with a lifetime prevalence of 2-3% (Ruscio et al., 2010). Although the pathophysiology of OCD remains unclear, neuroimaging studies have provided important insights into the neurobiological models of OCD. Many structural and functional magnetic resonance imaging (fMRI) studies reported abnormalities in several cortical and subcortical regions including the orbitofrontal cortex (OFC), anterior cingulate cortex (ACC), striatum, and thalamus, which are part of the pathophysiological model of cortico-striato-thalamo-cortical (CSTC) circuitry for OCD (Menzies et al., 2008; Harrison et al., 2009; Del Casale et al., 2011). The salience network (SN), composed of the dorsal anterior cingulate and anterior insular cortices and several subcortical brain areas, has been shown to have deteriorated connectivity with CSTC circuit activity in patients with OCD, which suggests that the SN may also be involved in the broader pathophysiology of obsessive-compulsive phenomena (Harrison et al., 2013; Zhu et al., 2016).
The recently proposed "triple-network" model emphasized the aberrant intrinsic functional connectivity (iFC) patterns within and between the default mode network (DMN), executive control network (ECN), and SN as core features of psychiatric disorders (Menon, 2011). Altered iFC within and between the DMN, ECN, and SN have been reported in patients with OCD (Stern et al., 2012;Posner et al., 2016;Fan et al., 2017a). As a core brain network, the SN is involved in detecting and filtering internal and external salient information (Sridharan et al., 2008;Menon, 2011). In addition to the intra-network function, SN also plays an important role in monitoring interactions between ECN (task-positive network) and DMN (task-negative network). It is thought the SN initiates transient control signals that engage the ECN to mediate cognitive control processes while disengaging the DMN when a salient external stimulus is detected (Menon, 2011;Fan et al., 2017b). Patients with OCD have difficulty in switching between obsessive thought and/or compulsive behavior, which may be related to a dysfunction of SN in engaging task-positive ECN and disengaging task-negative DMN (Gürsel et al., 2018). IFC analyses within and between brain networks have shown to provide important insights into the neural deficits of psychiatric disorders (Shin et al., 2014). However, little is known about the changes in intra-and inter-iFC of the SN in patients with OCD.
Previous resting-state fMRI studies have indicated abnormal iFC within the SN and between the SN and other networks. However, the results have been somewhat inconsistent. For example, within the SN, Fan et al. (2017a) demonstrated that patients with OCD exhibited greater iFC in the bilateral ACC within the SN using independent component analysis (ICA), and they also found elevated right insula-left dorsal ACC connectivity within the SN in patients with OCD with preserved insight into their symptoms (Fan et al., 2017b). For inter-network connectivity with the SN, Posner et al. (2016) and Wang et al. (2018) both found increased iFC between the SN and the DMN in patients with OCD, as well as between the SN and ECN (Fan et al., 2017a). However, other studies revealed decreased iFC between the SN and ECN, extending to the DMN, in patients with OCD (Stern et al., 2012; Gürsel et al., 2018). The differences among these results in OCD may, at least in part, be attributed to different methods and parameters used for seed definition (Stern et al., 2012; Posner et al., 2016). Furthermore, functional connectivity based on assumptive and differing seed definitions may lead to different result patterns and is limited in exploring the functional connectivity of possible sub-networks within a larger brain network (Stern et al., 2012; Posner et al., 2016). Alternatively, a model- and seed-free approach such as ICA does not allow for exploration of relationships among subregions within a brain network (Fan et al., 2017a). Thus, in the current study, we systematically investigated whole-brain iFC changes by first parceling the SN into 19 subregions according to a publicly available atlas, and then performing seed-based functional connectivity analyses using each of the 19 subregions as a seed region. This method allows us to investigate the iFC between subregions within the SN as well as between the SN and other parts of the brain. Furthermore, correlating the changes of iFC with the severity of clinical symptoms in patients with OCD can help to elucidate brain-behavior relationships.
In the present study, we aimed to compare iFC changes between all SN subregions and whole-brain voxels in drug-naïve patients with OCD and healthy controls (HCs) using resting-state fMRI. Changes in iFC strength within the SN and between the SN and other functional networks were investigated. Based on previous findings, it was hypothesized that the OCD group would exhibit abnormal iFC strength within the subregions of the SN and between the SN subregions and other brain networks. We also hypothesized that these changes would correlate with the clinical symptoms of OCD.
Participants
Forty-three medication-free patients with OCD were recruited from outpatient and inpatient clinics at the Qiqihar Mental Health Center and the Fourth Affiliated Hospital of Qiqihar Medical University, Heilongjiang, China. Diagnoses were established using the Structured Clinical Interview for DSM-IV. The severity of OCD and of depressive and anxiety symptoms was assessed with the Yale-Brown Obsessive Compulsive Scale (Y-BOCS), the 17-item Hamilton Rating Scale for Depression (HAMD), and the Hamilton Anxiety Rating Scale (HAMA), respectively. Only patients with a total score of 16 or higher on the Y-BOCS and a score less than 18 on the HAMD were included in the present study (Gottlich et al., 2015; Yang et al., 2015). All patients fulfilled the criteria for OCD, were right-handed, and were 18-60 years old. Exclusion criteria were the presence of neurological and other major psychiatric disorders other than OCD. At the time of the study, all patients with OCD had not taken any kind of psychotropic medication for at least 4 weeks. Fourteen patients with OCD did have a history of antiobsessive or antidepressant medication, such as selective serotonin reuptake inhibitors (SSRIs), serotonin and norepinephrine reuptake inhibitors (SNRIs), and clomipramine; eight patients had a history of antipsychotic medication; eighteen patients were drug-naïve. In addition, forty matched HCs were recruited using the Structured Clinical Interview for DSM-IV Axis I Disorders-Non-patient Edition. None of the HC subjects reported any history of neurological or psychiatric disorders.
This study was approved by the Research Ethics Committee at Qiqihar Medical University. All participants provided written informed consent.
Image Acquisition and Preprocessing
RS-fMRI images were acquired with a 3.0-Tesla GE 750 Signa-HDX scanner (General Electric Healthcare, Waukesha, WI, United States) at the Third Affiliated Hospital of Qiqihar Medical University, Heilongjiang, China. Subjects were instructed to relax and lie as still as possible with their eyes closed, without falling asleep or thinking of anything in particular. The RS-fMRI scans were obtained using an echo-planar imaging (EPI) sequence with the following parameters: 33 axial slices, TR = 2000 ms, TE = 30 ms, FA = 90°, thickness/gap = 3.5/0.6 mm, FOV = 200 × 200 mm, in-plane resolution = 64 × 64. A total of 240 volumes were collected (8 min). None of the participants exhibited any clinically significant structural abnormalities upon visual inspection by two independent radiologists.
Resting-state functional images were analyzed using Data Processing & Analysis for Brain Imaging (DPABI) software (Yan et al., 2016). The first 10 volumes were discarded to ensure scanner equilibration. The preprocessing procedure included slice timing and motion correction, which were then followed by normalization to a standard echo-planar image template in MNI space and resampling to an isotropic voxel size of 3 mm. The resulting images were then smoothed with a 4-mm full-width half-maximum Gaussian kernel, linearly detrended, band-pass filtered at 0.01-0.08 Hz, and scrubbed with a framewise displacement (FD) measure (with a threshold of 0.5, together with one preceding and two subsequent volumes) (Power et al., 2012; Liu et al., 2015; Han et al., 2016). Three patients with OCD were excluded because more than 33% of their volumes were removed. The fMRI data of the remaining 40 patients were used for the iFC analysis. Nuisance covariates, including 24 head motion parameters, the white matter time course, and the cerebrospinal fluid time course, were modeled and regressed out using a general linear model. We did not regress out the global mean time course, because doing so may cause artificial negative correlations in iFC analysis (Nalci et al., 2017). We calculated the mean FD for each participant, and there was no difference between patients with OCD and HCs (Table 1).
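As a hypothetical sketch of the FD-based scrubbing step just described (the exact DPABI implementation may differ), assuming six rigid-body motion parameters per volume (three translations in mm, three rotations in radians):

```python
# Sketch of framewise-displacement (FD) scrubbing in the style of
# Power et al. (2012): flag volumes with FD > 0.5 mm together with
# one preceding and two subsequent volumes.
import numpy as np

def power_fd(motion, head_radius=50.0):
    """FD = sum of absolute backward differences of the 6 motion
    parameters, with rotations converted to arc length (mm) on a
    sphere of radius head_radius."""
    diffs = np.abs(np.diff(motion, axis=0))
    diffs[:, 3:] *= head_radius              # radians -> mm
    return np.concatenate([[0.0], diffs.sum(axis=1)])

def scrub_mask(fd, threshold=0.5, before=1, after=2):
    """Boolean mask of volumes to KEEP after scrubbing."""
    bad = fd > threshold
    keep = np.ones_like(bad)
    for i in np.flatnonzero(bad):
        keep[max(0, i - before): i + after + 1] = False
    return keep

motion = np.random.randn(230, 6) * 0.05      # toy motion parameters
fd = power_fd(motion)
keep = scrub_mask(fd)
print(f"mean FD = {fd.mean():.3f} mm; keeping {keep.sum()} of {len(keep)} volumes")
```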
Analysis on Functional Connectivity
The SN was identified with a publicly available atlas of functionally defined regions of interest (ROIs) developed by the Functional Imaging in Neuropsychiatric Disorders (FIND) lab at Stanford University (http://findlab.stanford.edu/functional_ROIs.html). This includes 19 subregions, from anterior region 1 (A1) to anterior region 7 (A7) and posterior region 1 (P1) to posterior region 12 (P12), mainly covering the medial frontal gyrus (medial FG), insula, dorsal ACC (dACC), middle cingulate cortex (MCC), the parietal cortex, and cerebellar regions (see Supplementary Table S1 and Supplementary Figure S1).
The nineteen subregions of the SN were used as ROIs for iFC analyses between each seed region and all voxels in the whole brain using DPABI, to examine whether the functional connectivity of the SN was altered in OCD. The mean time series of each ROI was obtained and correlated with the time series of all the voxels in the whole brain. This resulted in 19 functional connectivity maps for each group. The correlation coefficients were transformed to standard z-values to achieve normality using Fisher's r-to-z transformation. Two-sample t-tests were used to identify any brain regions that showed a significant iFC difference between patients with OCD and HCs. Bonferroni corrections were used for multiple comparisons. Given the number of seeds used, the corrected p-value was set at p < 0.05/19 = 0.00263 using the Gaussian random field (GRF) method (a voxel p-value < 0.001 and a cluster p-value < 0.00263).
The DMN, ECN, and SN templates identified by the FIND lab were used to examine whether the iFC results belonged to specific brain networks. The principal regions of the DMN are the medial prefrontal cortex, ACC, posterior cingulate cortex (PCC)/precuneus, parietal cortex, and medial temporal regions (i.e., hippocampal and parahippocampal gyri) (Li et al., 2017). The ECN mainly includes the parietal cortex, the dorsolateral prefrontal cortex (DLPFC), the angular gyrus, and cerebellar regions (Krmpotich et al., 2013) (see Supplementary Figure S2). The images were visualized with BrainNet Viewer (Xia et al., 2013).
To test whether iFC differences were correlated with clinical presentation in patients with OCD, we correlated the connectivity strength in the areas showing significant group differences with the Y-BOCS total score, the obsessive thinking score, and the compulsive behavior score, respectively, with HAMD score, HAMA score, and FD values included as nuisance covariates. We used a Bonferroni-corrected threshold of p < 0.05/(3 × 7) ≈ 0.002 to control for multiple comparisons.

[Table 1 note: Data are presented as mean ± standard deviation or frequency. Y-BOCS, Yale-Brown Obsessive-Compulsive Scale; HAMD, 17-item Hamilton Depression Rating Scale; HAMA, Hamilton Anxiety Rating Scale; FD, framewise displacement. Age, education, Y-BOCS total and subscale scores, HAMD, HAMA, and FD were compared with two-sample t-tests (t); categorical variables such as gender were tested with chi-squared tests (χ²).]
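A minimal sketch of this brain-behavior analysis, assuming one connectivity value per subject for each significant cluster: both the clinical score and the FC values are residualized against the nuisance covariates and then correlated, which is equivalent to a partial correlation. All variable names and data below are hypothetical.

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covariates):
    """Partial Pearson correlation of x and y, controlling for covariates
    by correlating the residuals of each after regressing on the covariates.
    x, y: (n_subjects,) arrays; covariates: (n_subjects, n_covariates)."""
    design = np.column_stack([np.ones(len(x)), covariates])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)

# Hypothetical inputs: 40 patients, 7 significant clusters, 3 covariates
rng = np.random.default_rng(0)
fc = rng.normal(size=(40, 7))            # connectivity values per cluster
ybocs = rng.normal(size=40)              # a clinical score, e.g., Y-BOCS total
nuisance = rng.normal(size=(40, 3))      # HAMD, HAMA, mean FD
alpha = 0.05 / (3 * 7)                   # Bonferroni threshold (~0.002)
for j in range(fc.shape[1]):
    r, p = partial_corr(ybocs, fc[:, j], nuisance)
    print(f"cluster {j}: r = {r:+.2f}, significant = {p < alpha}")
```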
Clinical Characteristics
Clinical characteristics of patients with OCD and HCs are displayed in Table 1. There was no significant difference between the OCD and HC groups in age, gender, education, or FD values (all p > 0.05). There were significant group differences in Y-BOCS total and subscale scores and in HAMD and HAMA scores.
Functional Connectivity Within the SN
Patients with OCD exhibited significantly decreased iFC strength within the SN subregions compared to the HC group (Table 2 and Figures 1, 2). Compared to HCs, patients with OCD showed decreased iFC strength between the left thalamus and the left cerebellum, between the left insula and the right thalamus, and between the right cerebellum and both the bilateral insula and the right ACC.

[Figure 1: Brain regions demonstrating group differences in iFC between SN subregions and whole-brain voxels in patients with OCD. Threshold: voxel p-value < 0.001 and cluster p-value < 0.00263, two-tailed (Bonferroni corrected using the GRF method). L, left side; R, right side.]
Functional Connectivity Between the SN and Other Networks
Patients with OCD exhibited decreased iFC strength between the SN and the DMN compared to HCs, mainly between the SN subregion of the left insula and the MCC (Table 2 and Figures 1, 2). The iFC strength between the SN and the ECN was also significantly decreased in patients with OCD, specifically between the left insula and the ventrolateral prefrontal cortex (VLPFC) (Table 2 and Figures 1, 2).
To exclude an effect of head motion, we performed a correlation analysis between mean FD and the FC values of the regions showing significant group differences, and found no significant correlation [all p > 0.05/(1 × 7) ≈ 0.007, Bonferroni corrected]. Based on these results, we tentatively conclude that head motion is unlikely to have driven the FC differences in these regions.
Relation Between Altered iFC Strength and Clinical Symptoms
The altered iFC strength within the SN and between the SN and the other functional networks showed no correlation with clinical symptoms in patients with OCD (all p > 0.002).
DISCUSSION
The present study is the first to split the SN into 19 subregions using the publicly available atlas and to investigate resting-state functional connectivity differences for each of the 19 SN subregions in drug-naïve patients with OCD versus HCs. Consistent with our hypothesis, the results revealed significantly reduced iFC strength within the SN subregions in the OCD group compared with the HC group. In addition to abnormalities within the SN, the OCD group exhibited reduced iFC strength between components of the SN and brain regions within the DMN and the ECN. These results provide evidence of reduced connectivity within SN subregions, as well as between the SN and both the DMN and the ECN. Consequently, these findings point to a specific alteration in SN-associated iFC patterns in patients with OCD.
Consistent with the results of a meta-analysis (Gürsel et al., 2018), decreased iFC strength within the SN subregions was found in the OCD group in the present study, specifically in the bilateral insula, thalamus, and cerebellum. As a core component of the SN, the insula plays an important role in information integration: it receives and integrates internal and external stimuli to update expectations or to initiate actions (Menon and Uddin, 2010; Palaniyappan and Liddle, 2011). Decreased iFC strength between the bilateral insula and the thalamus, as well as the cerebellum, may indicate disrupted integration among these brain regions (Zhang et al., 2011). On the one hand, the thalamus and cerebellum may fail to carry out processing initiated by the insula; on the other, the insula may fail to receive and integrate information coming from the thalamus and cerebellum. Decreased intra-SN iFC may therefore underlie SN dysfunction in patients with OCD.
Another important finding of the present study is that patients with OCD exhibited significantly decreased iFC strength between the SN and the DMN (particularly between SN subregions and the MCC) as well as between the SN and the ECN (particularly between SN subregions and the VLPFC). The major functions of the DMN are self-referential processing and episodic memory (Andrews-Hanna et al., 2010), whereas the ECN is responsible for planning, decision-making, goal-directed behavior, and cognitive control (Littow et al., 2015). Previous studies in patients with OCD also reported decreased iFC between the SN and the DMN (Beucke et al., 2014; Posner et al., 2014; Gürsel et al., 2018) and between the SN and the ECN (Harrison et al., 2009; Gürsel et al., 2018). Decreased iFC between the SN and the DMN, as well as between the SN and the ECN, may reflect disordered coordination of internally and externally directed processing in patients with OCD, because the SN's basic function of switching between the DMN and the ECN may be degraded (Fan et al., 2017a). Consequently, abnormal inter-network iFC may be associated with patients' difficulty in disengaging from internally self-referential thoughts and in planning goal-directed behavior to adapt to a changing external environment, producing both cognitive and behavioral disturbances in OCD. In addition, reduced SN-DMN connectivity may contribute to decreased sustained attention (Posner et al., 2016) and may also be related to poor insight in patients with OCD (Fan et al., 2017b).
However, contrary to our results, some previous studies reported greater iFC within the SN (Fan et al., 2017a,b) and increased iFC between the SN and the DMN and between the SN and the ECN (Posner et al., 2016; Fan et al., 2017a; Wang et al., 2018). The limited reproducibility of neuroimaging findings may reflect the intrinsically low statistical power of relatively small samples (Button et al., 2013). Moreover, compared with previous studies, the patients with OCD in our study may represent different clinical subtypes (e.g., good versus poor insight), and different OCD subtypes are thought to have distinct pathophysiology (van den Heuvel et al., 2009). Most importantly, the majority of the results from previous studies did not survive a strict AlphaSim correction threshold of p < 0.001 (Fan et al., 2017a,b), and low statistical power can induce false-positive results in neuroimaging studies (Eklund et al., 2016). Relatively large, homogeneous samples of patients with OCD and strict statistical thresholds are therefore needed in future studies.
In this study, we used the subregions of the SN and found decreased intra- and inter-network iFC of the SN. Previous studies, however, used different seed definitions of the SN and revealed different iFC patterns within the SN and between the SN and other networks. For example, Fan et al. (2017b) used the bilateral anterior insula and dACC as the SN and found increased iFC within the SN in patients with OCD; Posner et al. (2016) defined the bilateral anterior insula as the SN and found no significant difference in iFC within the SN but increased iFC between the SN and the DMN; and Stern et al. (2012) used the bilateral dorsal anterior insula as the SN and found decreased iFC between the SN and the DMN. The inconsistency across previous studies may be attributable to their differing, assumption-driven seed definitions of the SN, which may limit the exploration of network-level iFC patterns (Stern et al., 2012; Posner et al., 2016).
Contrary to our hypothesis, we did not find any correlation between altered iFC strength and clinical symptoms in patients with OCD. We infer that the altered iFC strength within the SN subregions may be a trait marker of OCD that is independent of clinical variables (Guo et al., 2014); this should be investigated in future studies.
This study has several limitations. First, the relationship between the DMN and the ECN was not explored in patients with OCD. Second, different clinical OCD subtypes, such as good versus poor insight, may show different intra- and inter-network iFC of the SN. Third, cognitive and behavioral measures were not collected from patients with OCD in our study. Lastly, some patients with OCD had a history of psychotropic medication, which may already have caused changes in brain function and structure; the enrolled patients were therefore not all drug-naïve, and our results should be interpreted with caution. Future studies should take these limitations into consideration.
Taken together, the present study conducted a detailed investigation of the SN in patients with OCD by testing for abnormalities in all of its subregions. Our results demonstrated not only decreased connectivity within the SN but also reduced inter-network connectivity with the DMN and the ECN. These findings suggest that patients with OCD exhibit distinctive changes in SN iFC and provide new insight into the dysfunctional brain organization of the SN in OCD; the "triple-network" model may thus contribute to explaining the clinical phenotype of OCD. | 2018-11-28T22:46:55.055Z | 2018-11-28T00:00:00.000 | {
"year": 2018,
"sha1": "b87f0e7f5ab188818fe9bfd99e2bbba7c3164b13",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2018.00889/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b87f0e7f5ab188818fe9bfd99e2bbba7c3164b13",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235319574 | pes2o/s2orc | v3-fos-license | Growth and Decline of the Military-Industrial Complex: the Cases of Argentina and Brazil
This article examines the most significant causes of the development of a weapons industry in Argentina and Brazil. International market and political conditions, domestic economic and political determinants, and regional contextual factors explain the evolution and makeup of the military-industrial complex in these countries. The article examines all three sources and provides a summary profile of the arms sector in each country. Developments in the 1980s and early 1990s - domestic, regional, and international - resulted in the near-collapse of arms production in Argentina and Brazil. In the last section, the implications of this dramatic contraction are explored.
Introduction
How and why do states without a tangible external threat to their security develop a weapons industry? Beginning sixty years ago, Brazil and Argentina initiated policies that would lead, by the 1960s and 1970s, to substantial military industries. Why did these two key South American states feel compelled to inaugurate such costly endeavors? And how did the arms market contraction after the cold war affect the Argentine and Brazilian defense industries? The political economy of the rise and fall of state-supported arms production is, then, the focus of this essay, within the larger context of global, regional and domestic political and economic transformations.
International market and regimes
The initial impetus for the development of arms industries in Argentina and Brazil goes back to the 1930s and 1940s, when both countries' armed forces took incipient steps to create an indigenous capability in weapons production. However, it was not until the 1970s and early 1980s that Brazil's aircraft and arms industries were consolidated, and their export capacity - mostly Brazil's - became more significant. This capacity was strengthened by declining global arms markets in the early 1980s, despite the receptivity to arms purchases in areas of protracted military conflict, such as the Middle East.
Opportunities to strengthen political and economic relations with developing countries, fierce competition among producers, and national prestige considerations fueled an aggressive arms trade and lowered the resistance of traditional suppliers to diffusing technological and productive capabilities. In other words, changing patterns in the arms trade helped shape a new international division of labor, expanding access to arms and technology markets for a group of emerging Third World arms producers like Argentina and Brazil. These conditions increased the ability of recipients to maximize indigenization in weapons production through assertive bargaining. The effective growth of Brazil's arms exports began in 1976 and lasted until the early 1980s, while Argentina's negligible production, despite an initial effort in the 1950s, surged slightly in the late 1970s.
International financial conditions at the time were beneficial to the development of arms industries in these countries. On the one hand, the 1973 oil shock and its aftermath increased constraints on domestic financing in oil importers like Brazil, which now required efforts to expand exports. On the other, it provided Middle Eastern and other oil producers with windfall petrodollars capable of funding the modernization of their armed forces, expanding the pre-1970s Third World arms market. Euromarkets flooded with recycled petrodollars were now a source of loans and suppliers' credits to finance domestic arms production and technology purchases for countries like Argentina and Brazil. Finally, international regimes related to arms transfers were fairly inactive or nonexistent during that period, imposing fewer constraints on arms exporters than would be the case in the 1990s. Thus, international political, strategic, commercial, and financial conditions point to an overall permissive environment for the development of arms production among emerging suppliers such as Brazil and Argentina during the 1970s.
Domestic determinants: Import-substitution and national security ideology

Import-substitution was the name of the game, providing an overarching industrialization strategy in Argentina and Brazil in the post-World War Two era, and state entrepreneurship was at its heart. The state sector was preponderant as arbiter and agent of economic development, and the state bureaucracy had a significant degree of freedom to maneuver. The armed forces implicitly or explicitly controlled vast segments of the public sector and industrial infrastructure, protecting it with high tariff surcharges, quotas, and subsidies.
As early as 1930, a series of articles in Argentina's Revista Militar demanded the creation of state and mixed enterprises and the adoption of protective barriers to accelerate national industrialization. More extreme positions within the armed forces favored industrial 'autarky.' Peron's rule accelerated and gave substantive meaning to this orientation to industrialization and to the role of the military in it. The mostly-military administrations that succeeded him deepened this trend, despite episodic and failed efforts to reverse some Peronist economic strategies.
The interpenetration of military and economic power, according to Rouquie, was not peculiar to Argentina, but its extent was unique in Latin America. Brazil's armed forces have also had an active interest in industrialization since the 19th century, and an industrialist-technocratic orientation within the armed forces developed as early as the 1940s. The post-World War Two large-scale state enterprises in steel-making, oil, petrochemicals, mining, and public utilities were largely influenced by military or formerly military technicians and bureaucrats.
An ideology of 'National Security' prevailed in Argentina and Brazil by the 1960s (with deeper historical origins), when the military ruled more often than not and when the armed forces enjoyed privileged budgets and ancillary activities such as weapons production. The expression of import-substitution and an inward-looking orientation in the military sector was to seek as much military independence from suppliers as possible, a project embedded in Argentina in the Savio Law of 1941. In time, arms embargoes begot even stronger import-substitution efforts (in the case of Brazil, as a reaction against President Carter's human rights prerequisites for arms sales). Domestic arms production and exports were regarded as an important means to 'great power' roles, 'grandeza,' 'equidistance' from the superpowers, and non-alignment. In the words of a former Brazilian officer who was the director of the Brazilian War Material Enterprise (IMBEL): 'We will sell to the left, to the right, to the center, up above and down below.' The ideology of National Security permeated the foreign affairs bureaucracy as much as the military institution itself. Arms exports held the promise of increased international leverage vis-a-vis suppliers of raw materials, oil, and technology.
The regional setting: Rhetorical closeness, distant neighbors
Brazil and Argentina have not fought each other in the twentieth century. Relations between the two giants of the Southern Cone, however, have never been very close. Rather, they were characterized by historical competition over territories, resources, and influence over buffer states - competition that occasionally developed into more serious expressions of mutual distrust. Brazil's fears of Argentine aggression went back over a hundred years - the two fought their last war in 1825-1828 - and were exacerbated by Argentina's alignment with the Axis powers during World War Two, when Brazil joined the Allies' military efforts in Europe. Military institutions and their central political role helped entrench the cold relationship. Argentina and Brazil became familiar cases in studies of nuclear proliferation - given four decades of intensive efforts to develop nuclear capabilities outside the global nonproliferation regime - despite repeated assertions that their nuclear industries were solely directed at civilian activities. Notwithstanding the classical rhetoric of Pan-American solidarity - mostly directed against the US - a tacit historical competition largely defined the bilateral relations between these two key South American states.
Much of the twentieth century was thus best characterized by neither militarized conflict nor effective cooperation between Argentina and Brazil. No genuine cooperative economic schemes ever took hold until quite recently, as we shall see below. Although the relationship was lukewarm, it is important to remember that the so-called Argentine-Brazilian rivalry has been largely overplayed, and that it never amounted to more than measured competition. Argentine-Chilean military rivalry was perhaps more pronounced, leading both countries to the verge of war over the Beagle Channel in the late 1970s. In the grand scheme of factors affecting weapons production in the Southern Cone, however, regional considerations seem to have played a rather marginal role in the evolution of arms industries in both Argentina and Brazil.
Brazil: A profile
Under the international, regional, and domestic conditions just described, Brazil developed an arms and aircraft industry characterized by an effective partnership between the state and the private sector. The industry succeeded in achieving significant rates of indigenization, allowing Brazil to leap ahead and become one of the world's largest exporters of conventional arms by the mid-1980s.
Concrete strategies in Brazil's arms industry involved a policy of 'market reserve,' state financing, and technological support to private firms through the Aerospace Technical Center (CTA) and the army's Technical Center (CTEX, created in 1979 in emulation of the air force's CTA). The state reduced entrepreneurial ('strategic') and market uncertainty through its procurement of guaranteed shares, often at prices above market value. It financed private R&D, built part of firms' infrastructural requirements, trained engineers, mediated between foreign technology suppliers and national firms, and transferred new technologies to private firms.
The air force concentrated on missile development, airplanes, and guided systems; the army on armoured vehicles and artillery; and the navy on electronic systems, communications, and computers. In the late 1970s the Ministry of the Navy initiated activities in shipbuilding and nuclear-related technologies, including a nuclear submarine. IMBEL was established in 1975 under Army sponsorship as a state-owned holding of seven major producers (and 55 private companies), administering Army arsenals and factories. The anti-statist movement of the latter part of the 1970s, and an extant private infrastructure in automobile manufacturing, acted as an effective barrier against state expansion in this area. By the early 1980s, IMBEL had a semiprivatized cooperative structure and was granted tax exemption for most of its imports.
The three major enterprises accounting for most Brazilian arms exports at the time were Embraer, Engesa, and Avibras. The Air Force developed the state-owned firm EMBRAER in 1969 as the national champion of the aircraft industry, out of its Centro Técnico Aeroespacial (CTA), the locus of technological research in aircraft design and production. The Ministry of Aeronautics manipulated Brazil's domestic market for civilian and military aviation to EMBRAER's advantage through its procurement power, R&D support, and protective tariffs. The Ministry not only had effective control over EMBRAER itself, but throughout the 1970s increasingly concentrated the R&D, training, financial, fiscal, marketing, regulatory, and international bargaining (for technology) functions related to the sector. By the late 1970s, the Ministry managed to camouflage EMBRAER as a mixed enterprise, as a result of a tax incentive scheme - a deduction of 1 percent in corporate income tax to purchase EMBRAER's shares - that provided the firm with low-cost, long-term, intervention-free capital. EMBRAER was thus considered a mixed enterprise, state-controlled, 90 percent privately owned, with 246,937 shareholders.
EMBRAER began producing a variety of planes (airframes, parts, and navigation equipment) and licensed aircraft technology abroad (the Tucano to Egypt and the United Kingdom). By the early 1980s Embraer was the sixth largest aviation company in the world (outside the US), producing the Xavante jet trainer and ground-attack plane (Italian license); the AMX fighter-bomber (joint venture with Aermacchi and Aeritalia, 80% Italian, sold to Brazil's and Italy's air forces); the trainer 'Tucano' (sold to Libya, Egypt, and Iraq, among others, and produced in Egypt by the Arab Organization for Industrialization with Brazilian technology); the civilian (Pratt & Whitney engines) aircraft - the Bandeirante (over 500 sold to 34 countries) and the Brasilia (hundreds sold); three medium-sized general-purpose aircraft (Xingu, Tapajó, Araguaia); and the 'Ipanema' (for agriculture).
Some aircraft models were the product of skillfully negotiated industrial cooperation agreements with a foreign supplier, designed to achieve rapid market penetration without excessive technological dependence. Preferred modes of technology transfer included coproduction arrangements (with the Italian firm Aermacchi for the jet trainer Xavante, and with Aermacchi and Aeritalia for the AMX fighter) and licensing (from Piper for different light aircraft). The Tucano trainer and the Bandeirante were of national design, but over 50 percent of the value of a Bandeirante was imported from the US and Canada. Efforts at nationalization of inputs resulted in the diffusion of technological capabilities to dozens of suppliers.
The Ministry of Aeronautics, but more so the Army, nurtured the private firm AVIBRAS in missile technology, turning Brazil into a designer of ground missiles, including guidance systems. Founded in 1961, Avibras was a pioneering aerospace company which produced the first Brazilian composite propellants in the 1960s. It developed the Sonda I, II-B, and II-C rockets, worked on the second stage of the Sonda III and the first prototype of the Sonda IV, and converted the Sonda series of sounding rockets into artillery rockets for export. The Astros II rocket-launching system became its most successful product by 1983. Avibras' annual production grew from $6 million in 1978 to $391 million in 1987, and its work force from 250 to over 6,000.
ORBITA was created in 1987 as a joint venture among private firms (including Engesa) and Embraer, although it never advanced beyond the planning stage. It was originally designed to consolidate missile-development activities: to convert the Sonda IV space rocket into a missile with extensive technical assistance from West German and French firms, and to develop the Leo anti-tank missile and the Piranha air-to-air missile, which never entered production. By the latter half of the 1980s Brazil's efforts in this area included the development of ground-based SS-300 missiles (which never came into being) and Barracuda sea-launched missiles for tactical warheads. The Satellite Launch Vehicle (SLV), capable of placing a 440 lb payload into a 435-mile orbit, was scheduled to be ready by early 1996 but was eventually cancelled. In 1993, about 200 Brazilian companies (including Avibras and Embraer) joined in the Aerospace Industries Association of Brazil in order to promote exports.
Another private firm, ENGESA, became a major producer of armored vehicles, with over 90% of its production oriented towards exports. In the early 1970s Engesa was still a small firm with little in-house research activity; by the end of the decade it had become the world's largest producer of such vehicles - including the Urutu, Cascavel, and Jararaca - exporting to over 20 countries. Engesa relied on domestically developed technology in the automotive sector (about 17 percent of its sales were invested in R&D, $1 million in 1980) or on carefully selected and negotiated coproduction agreements with several suppliers. Most engines came from General Motors or Mercedes-Benz do Brasil. Engesa's armoured vehicles were sold to Libya, the PRC, Iraq, Iran, Nigeria, and Sudan, among others. The planned Osorio tank never went beyond a 1985 prototype (one of which went to Saudi Arabia).
In sum, the sector reflected a cooperative structure among the state (particularly the military), the private sector, and research institutions. It succeeded in achieving relatively high levels of national design and indigenization of components and in using add-up engineering, integrating imported components into new systems. Over 90 percent of its production, including armored vehicles, aircraft, sophisticated rocket systems, and missiles, was exported to over 50 countries. ACDA data suggest that exports peaked in 1982 at $749 million. By 1984 Brazilian sources estimated exports at about $1 billion or more, although experts concur on the general overestimation of the value of military exports for political purposes; SIPRI reports exports of major weapon systems in 1987 of only $491 million. It is important to highlight that estimates of the employment and export performance of the military industries are generally not very reliable, and most experts suggest caution in interpreting these figures, particularly those emanating from governmental sources at the time and from weapons producers. Brazil's relative success in arms production and exports, even if far less impressive than contemporary estimates implied, could be traced not only to an effective reading of market signals, but also to the suitability of its planes to Third World conditions (due to size, price, low operating costs on short commuting routes, and low maintenance requirements), the versatility of its armoured vehicles, simplicity in design (low maintenance requirements), adaptability to harsh climate and terrain, and reliability. Finally, as a Third World supplier during the Cold War era, Brazil's 'no-strings-attached' partnership was particularly appealing.

Argentina: A profile

The Argentine military de-emphasized indigenous arms production from the downfall of Peron until 1976, when investment in state arsenals surged. The best-known export product at the time was the TAM (Medium Argentine Tank), commissioned for design from the West German firm Thyssen-Henschel in 1973. Recipients of the TAM included Iran (about 100), Peru, Panama, Jordan, and allegedly Saudi Arabia. Efforts at reducing dependence on foreign technology and licensing in the army-run military complex were negligible. This is particularly striking if one compares the relative shares of R&D funds from the central budget allocated to the three forces with their technical achievements: the navy's share of total R&D investments in 1978 was 0.2 percent, the air force's 1.72 percent, and the army's 18 percent. There was no shortage of army R&D agencies, which included twelve institutes under the supervision of the Council for the Armed Forces for R&D. A group of researchers at the Army's R&D center was never able to influence the army and the DGFM in the direction of industrial promotion and technological investments. Although considered an indigenous design, the Pucara was inspired by a US model and was highly dependent on imported parts. Limited numbers of Pucaras were sold to Uruguay, Iraq, the Central African Republic, Venezuela, Morocco, and El Salvador.
The Armed Forces Technical Research Center (CITEFA) started working on missiles in the early 1970s. With mostly German technical assistance (MBB) and Egyptian and Iraqi funding, it was engaged after 1982 in the development of a medium-range (600 miles) surface-to-surface ballistic missile (the Condor II) with a payload of 1,000 pounds. Developed by the air force at Falda del Carmen, the Condor II project is estimated to have absorbed $300 million. Iraq and Egypt were each to acquire 200 Condor II missiles (labelled the Badr 2000 in Egypt and the Saad 16 in Iraq). The Argentine government admitted delivering eight Condor II prototypes to Egypt in 1991. Argentina also produced an unguided multiple-launch (200 km) rocket, the Alacran, capable of delivering a 100 kg payload, far below the MTCR threshold. The navy controlled the nuclear sector and the National Atomic Energy Commission's ambitious nuclear program. The navy's liberal orientation followed the British and American models and was evident in its emphasis on 'state subsidiarity,' to which the nuclear program gave effective meaning by developing private firms in heavy components and other inputs for nuclear plants and fuel-processing facilities.
Argentine arms exports are estimated to have amounted to $217 million between 1976 and 1982. By 1985, Argentina's revenues from arms exports were said to be as high as those from meat exports, although such estimates are - as in the case of Brazil - not entirely reliable. All in all, Argentina's arms industry was historically shackled by a statist orientation and, for the most part, was unable to translate copious investments into technologically and commercially significant capabilities.
During the growth phase of Argentina's and Brazil's arms industries, both developed extensive connections with Middle East clients. Brazil's military exports to Algeria, Libya, Egypt, Morocco, Qatar, Saudi Arabia, Tunisia, and the UAE turned this area into its major extra-regional market, followed by Africa (Gabon, Nigeria, Upper Volta, and Zimbabwe). Brazil's heavy oil dependency lubricated these connections, leading to barter and countertrade agreements exchanging oil for weapons. Engesa's international debut consisted of armoured vehicle shipments to Iraq in 1977. Between 1979 and 1982 Engesa delivered to Iraq close to 800 Cascavels, in addition to over 300 Jararacas and 300 Sucuris, turning Iraq into the recipient of a third of all Brazilian arms exports. A package of tanks, missiles, and aircraft equipment ($1 billion) with Saudi Arabia followed in the mid-1980s. Embraer licensed the Tucano for production in Egypt (110 units) in 1983, with resales to Iraq. Over 90 percent of Avibras' exports went to the Middle East, principally Iraq and Libya (also Saudi Arabia), including the Astros II rocket system (range 40-70 km). By 1989 Brazilians were assisting Iraq in rocket aerodynamics, flight testing, the control of rocket trajectories, on-board electronics, and rocket propellants. At the time, Iraq was Brazil's eighth-largest trading partner. The Brazilian government revealed in 1990 that, since 1980, it had provided Iraq with enriched uranium, with assistance in uranium enrichment and in prospecting for uranium ore, and with a facility for converting yellowcake into uranium oxide. In 1993, UNSCOM inspection teams in Iraq were studying samples of nuclear material believed to be of Brazilian origin. Brazil was also suspected of providing Iraq with designs for centrifuges and even with an actual centrifuge.
Argentina also maintained military exports in the region. Different Argentine provinces developed different proclivities towards sales in the Middle East, with Cordoba's independent foreign policy pushing for Pucara plane sales to Iraq and Entre Rios opposing the sale to protect its rice and tea exports to Iran (worth $500 million). Among other transactions in the 1980s, when nuclear exports were part of the nationalist diplomatic kit, Argentina supplied nuclear materials and services to Middle East countries. This included assistance in completing the two Iranian reactors at Bushehr and the export of large amounts of uranium dioxide to Algeria. By 1993 Argentina was still alleged to export low-enriched (20 percent) uranium fuel and nuclear-related services to Tehran. Argentina's best-known military cooperation project in the Middle East was the Condor II project with Egypt and Iraq. Condor II-related components were discovered by UNSCOM in 1993. Argentina allegedly helped Iraq with solid-fuel technology and guidance systems, increasing the range of Iraq's Scuds. Guidance and control systems, however, were Argentina's own bottleneck in the development of the Condor II. The program was deactivated under heavy US pressure, with its components shipped to Spain's National Airspace Technical Institute (INTA) in 1993.
In sum, a relatively dense network of military cooperation - conventional and nuclear - developed between Argentina and Brazil on the one hand, and Iraq, Iran, Egypt, Algeria, Libya, and Syria on the other. With the contraction of state agencies and military budgets, this network faced significant threats. However, private actors, including former military officers and entrepreneurs, continued to offer their services to Middle Eastern arms-producing programs. Former Brazilian CTA and Orbita personnel were purportedly involved in plans to build a nuclear version of the Piranha air-to-air missile for Iraq, although the Piranha itself had never entered production in Brazil. Argentine scientists were reported to assist Iraq's rocket program as well.
International constraints
The end of the Iran-Iraq war also ended a primary market for Brazil's arms industry. By the end of the 1980s the international arms market had become saturated, a situation made even worse, from the perspective of weapons producers, by the end of the Cold War and the ability of traditional suppliers to adjust to the requirements of Third World clients. In 1990, Saudi Arabia was ordering Abrams tanks, not Brazilian Osorios, despite Engesa's effort to secure a $2.2 billion deal by calling the tank 'Al Fahd.' International financing for arms industries had dried up. Iraq stopped paying Engesa's bills in the late 1980s, contributing to Engesa's financial collapse in 1990. Avibras' sales dropped from $350 million in 1987 to $10 million in 1989, leading to its bankruptcy in 1990. Even Embraer, which could still rely on civilian exports, became heavily indebted by the early 1990s, forcing dramatic cuts in its projects and labor force. Thus, the brief success of their arms exports ended with a double whammy: the sharp contraction of international demand on the one hand, and heightened levels of supply on the other. In addition, the emergence of international regimes aimed at controlling international arms transfers and sales - such as the MTCR in missile-related technology - placed further political and technological constraints on the relative freedom of operation that Argentina and Brazil had enjoyed in the preceding decades. For example, Argentina's Condor II and Brazil's VLS program came under heavy MTCR pressure. Both Brazil's and Argentina's nuclear exports came under stricter supervision, with the latter even joining the NPT and the Nuclear Suppliers' Club with its strict guidelines.
A new domestic political economy
Following democratization in the mid-1980s, the armed forces aimed - ultimately unsuccessfully - at exchanging the right to rule for the right to nurture military industries. The service heads of the army, navy, and air force in Brazil resisted the cancellation of their ministerial status and of the hitherto secure budgetary autonomy of their economic fiefdoms. Successive finance ministers in the 1980s were unable to stem fiscal expenditures favoring the military, subsidy-dependent private firms, and public employees. Under President Sarney, an explicit directive was issued to the presidential cabinet to give priority to defense appropriations, leading to an increase in the military share of central government expenditures relative to the preceding six years of military rule. Sections of the military continued to develop a 'parallel nuclear program' with apparent weapons applications, even after attempts - through the Constitution drafted in 1988 - to place all nuclear activities under democratic control.
In Argentina, President Alfonsin challenged military prerogatives with some success, contracting military budgets by about 37 percent between 1984 and 1989. However, Alfonsin retained the air force's Condor II program in 1985, maintained relatively high levels of military expenditures (over 3 percent of GDP), and sustained Argentina's opposition to the NPT, its right to peaceful nuclear explosions, as well as its refusal to ratify the regional Tlatelolco treaty.
By the late 1980s and early 1990s Brazil and Argentina were poised for what amounted to a genuine revolution in their political economy. The political coalitions backing Presidents Carlos S. Menem and Fernando Collor de Mello endorsed effective economic liberalization, privatization, military contraction, and structural adjustment with unprecedented vigor. Following decades of import-substitution industrialization, genuine liberalization began taking hold, most consistently in Argentina, where the neoliberal program brought about privatization, low inflation, balanced budgets, and an average growth rate of close to 8 percent annually in the early 1990s. Arms and ancillary industries were now prime targets for privatization and conversion into civilian-oriented production. Menem and his Finance Minister Domingo Cavallo presided over the sharpest contraction of military budgets and military personnel in decades and over the elimination of the military draft. Economic reform lagged in Brazil with the ascension of Itamar Franco, who wooed a statist-populist constituency and the military and attacked international institutions and their domestic allies. This phase was superseded by the election of Fernando H. Cardoso in 1994, whose coalition set out to embrace an economically liberalizing revolution at home, in the region, and towards the rest of the world. Both Collor and Cardoso decimated military budgets and worked to reduce the political influence of Brazil's armed forces.
The weapons-producing industries - recipients of state subsidies, fiscal incentives, and R&D support - were a main casualty of the contraction of state expenditures and entrepreneurial activities. Fewer resources narrowed the political space for military expenditures and forced a redefinition of priorities. Although Brazil had been spending less than 1 percent of its GNP on the armed forces, among the world's lowest shares (Argentina spent 2.4 percent and occasionally far more), there were hidden costs and opportunity costs in the expansion of the military-industrial complex. Among the most important political costs were the expansion of the armed forces' influence and their resistance to contracting the state. In Argentina, the DGFM accounted for up to five percent of the country's GNP, swallowed over seven percent of the national budget, and accumulated over $1.5 billion in foreign debt. Already by the mid-1980s, pressures to privatize the DGFM were mounting. The Condor II program is estimated to have absorbed between $3 and 5 billion, although Iraq allegedly provided most of the funding.
Economic liberalization had a beneficial effect on the military's disentanglement from political and economic sources of power. In both Brazil and Argentina, institutions like the air forces' CTA (Brazil) and IIAE (Instituto de Investigaciones Aeronáuticas y Espaciales, Argentina), the navies' IPqM (Brazil's Instituto de Pesquisa da Marinha), and the National Nuclear Energy Commissions had enjoyed enormous bureaucratic autonomy. The political insulation and budgetary rent-seeking of military-related enterprises were reduced significantly under President Fernando H. Cardoso, who had cut off funding for missile development (the VLS satellite launcher) even earlier, as finance minister. Engesa itself ceased to exist, and Brazil's nuclear submarine program was discontinued in 1996. The eventual cancellation of the Condor II project in Argentina symbolized the triumph of the new liberalizing agenda under President Menem over old power competitors in the Argentine political system, such as the Air Force. Both countries continued promoting space research - with very limited resources - having placed their respective Commissions for Space Matters directly under presidential supervision.
Finally, Menem scrapped the Condor II project, dealing a severe blow to the last military program with the potential to redress decades of Argentine failure in military production. The Menem administration occasionally pointed to foreign pressures and tradeoffs in dismantling the program. In reality, however, the external benefits of increased US support and international recognition complemented a domestic priority of killing the vestiges of a historically powerful statist rival: the military-industrial complex.
Regional breakthroughs
The leap in economic liberalization was matched by a leap in bilateral cooperation. Following decades of Argentine-Brazilian estrangement and failed attempts at genuine political and economic cooperation, the administrations of Carlos S. Menem in Argentina and Fernando Collor de Mello in Brazil laid out a blueprint for cooperation in the early 1990s involving every issue-area, most notably economic integration and regional denuclearization. This was an unprecedented definition of regional cooperation in the Southern Cone, with MERCOSUR as an essential component. In July 1990, Collor and Menem signed the Buenos Aires Act, accelerating the timetable for the establishment of an Argentine-Brazilian common market by December 1994 and instituting automatic tariff reductions across the board. Argentina, Brazil, Uruguay, and Paraguay signed the Treaty of Asunción in March 1991, creating MERCOSUR (Mercado Común del Sur, or MERCOSUL in Portuguese). The treaty stipulated the free circulation of goods and services within the region by 1995, an automatic schedule for tariff reductions, the institution of a common external tariff by 1995, the harmonization of laws and regulations concerning rules of origin and dispute settlement, and the coordination of macroeconomic policies. This time, integrative schemes were not mere rhetoric but effective policies. A genuine economic integration process was in place after many failed attempts during these countries' import-substituting and hybrid (including weakly liberalizing) phases. Trade within MERCOSUR quintupled between 1991 and 1995, while bilateral trade among Argentina, Brazil, and Chile tripled. Brazil's share of Argentine trade doubled between 1989 and 1993, from 10 percent to 20 percent of the total, and Argentina's share of Brazil's trade nearly trebled over the same period, from 3.7 percent to over 13 percent. In addition to the unprecedented cooperation between Argentina and Brazil in economic and infrastructural areas, a mutual commitment to renounce nuclear weapons and accession to the stipulations of the Treaty of Tlatelolco and to the NPT (in the case of Argentina) replaced three decades of nuclear ambiguity and competition. A highly cooperative regional context weakened even further the justification for extracting societal resources to maintain military-industrial complexes. Moreover, the commercial rationale for export-oriented complexes had withered away.
All in all, the contraction of arms production in Argentina and Brazil was overdetermined by international, regional, and domestic considerations. All three are linked by the process of economic liberalization, which led, as in many other cases in the industrializing world, to rationalization in budgetary allocations. While undermining military expenditures, neoliberal programs have often been oblivious to the development of social safety nets.
Conclusions
The external dimension of Brazil's and Argentina's political and economic transformation included not only an unprecedented embrace of liberal trade rules but also the abandonment of historically nationalist foreign policies across the board. By the early 1990s, Argentina had joined an array of international regimes (including the NPT and MTCR), severed its membership in the Nonaligned Movement, and sent a naval contingent to join the multilateral force in the Gulf War. The infamous Condor II project was put to rest in 1993, paving the way to increased Argentine access to investment, technology, and trade. Argentina's new credentials also became evident in its caution and deference to nuclear export guidelines and to the political sensitivities of the international community regarding what are often referred to as 'rogue' states. In 1992 President Menem barred the transfer of nuclear reactor components, including uranium conversion and purification equipment, that Argentina had agreed to supply to Iran in 1987. Argentina joined the Nuclear Suppliers' Group, restricting the supply of sensitive nuclear materials, in 1994. By 1995, Chancellor Guido Di Tella was ready to cancel the (internationally legal) sale of an experimental nuclear reactor to Syria, with an uncharacteristic flexibility that revealed the content and bureaucratic carriers of Argentina's new policy. Whereas the Atomic Energy Commission once had a virtual monopoly over Argentine nuclear policy (including exports), a refurbished Foreign Ministry had become pivotal to the implementation of the external aspects of Menem's liberalizing policies. The Brazilian government became similarly committed to passing a Congressional bill improving export control mechanisms for sensitive technologies. Brazil became a full MTCR member in October 1995 and has since received advanced missile technology from Russia.
Domestic political shifts away from the policies embraced by the Menem and Cardoso administrations are possible, but not likely in the near term. Some political challengers and sectors of the military-industrial complex in both countries have criticized the demise of the military sector. However, the likelihood of a revival of an arms industry is quite low, given the global, regional, and domestic logic that accelerated its downfall in the last decade. | 2021-06-04T00:16:36.205Z | 1998-01-01T00:00:00.000 | {
"year": 1998,
"sha1": "594eba67e25a58c0570b5a15461a2bbec35839b6",
"oa_license": "CCBY",
"oa_url": "https://escholarship.org/content/qt9w28h60b/qt9w28h60b.pdf?t=qomi43",
"oa_status": "GREEN",
"pdf_src": "ElsevierPush",
"pdf_hash": "594eba67e25a58c0570b5a15461a2bbec35839b6",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
17863929 | pes2o/s2orc | v3-fos-license | Presence of Avian Influenza Viruses in Waterfowl and Wetlands during Summer 2010 in California: Are Resident Birds a Potential Reservoir?
Although wild waterfowl are the main reservoir for low pathogenic avian influenza viruses (LPAIv), the environment plays a critical role in the circulation and persistence of AIv. LPAIv may persist for extended periods in cold environments, suggesting that waterfowl breeding areas in the northern hemisphere may be an important reservoir for AIv, in contrast to the warmer southern wintering areas. We evaluated whether southern wetlands, with relatively small populations (thousands) of resident waterfowl, maintain AIv in the summer, prior to the arrival of millions of migratory birds. We collected water and fecal samples at ten wetlands in two regions (Yolo Bypass and Sacramento Valley) of the California Central Valley during three bi-weekly intervals beginning in late July 2010. We detected AIv in 29/367 fecal samples (7.9%) and 12/597 water samples (2.0%) by matrix real-time reverse transcription polymerase chain reaction (rRT-PCR). We isolated two H3N8, two H2N3, and one H4N8 viruses among rRT-PCR-positive fecal samples, but no live virus from water samples. Detection of AIv RNA in fecal samples was higher at wetlands in the Sacramento Valley (11.9%) than in the Yolo Bypass (0.0%), but no difference was found for water samples (2.7 vs. 1.7%, respectively). Our study showed that low host densities and unfavorable environmental conditions did not prevent LPAIv circulation during summer in California wetlands. Our findings justify further investigations to understand AIv dynamics in resident waterfowl populations, compare AIv subtypes between migratory and resident waterfowl, and assess the importance of local AIv as a source of infection for migratory birds.
Introduction
Wild birds (orders Anseriformes and Charadriiformes) are capable of maintaining and spreading most subtypes of low pathogenic avian influenza viruses (LPAIv) [1]. LPAIv replicate primarily in the intestinal tract of infected birds, with large amounts of virus shed through feces into the environment [2]. Based on experimental studies, Hénaux and Samuel [3] estimated that virus excreted during the infectious period represented about 1,500 times the median bird infectious dose (BID50) for LPAIv. This level of contamination implies that the environment is critical to AIv transmission through the fecal/oral route [4]. Accordingly, recent modeling of LPAIv dynamics in wild waterfowl suggested that disease cannot be maintained in many populations without environmental transmission [5][6].
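To illustrate the kind of environmental-transmission model invoked in [5][6], the sketch below adds a water-borne virus compartment V to a basic SIR system, so that infection can spread even when direct bird-to-bird transmission is absent. All parameter values are hypothetical placeholders chosen only to make the example run, not estimates from those studies.

```python
def sir_env_step(S, I, R, V, dt, beta_d=0.0, beta_e=1e-8,
                 gamma=1/7, sigma=1500.0, delta=1/30):
    """One Euler step of an SIR model with an environmental reservoir.
    beta_d: direct (bird-to-bird) transmission rate
    beta_e: environmental (water-borne) transmission rate
    gamma:  recovery rate (1 / infectious period, per day)
    sigma:  infectious doses shed into water per infectious bird per day
    delta:  decay rate of infectious virus in water (faster when warm)
    All parameter values here are illustrative only."""
    new_infections = (beta_d * I + beta_e * V) * S
    return (S - dt * new_infections,
            I + dt * (new_infections - gamma * I),
            R + dt * gamma * I,
            V + dt * (sigma * I - delta * V))

# With beta_d = 0, an outbreak can still be driven by the water reservoir
S, I, R, V = 999.0, 1.0, 0.0, 0.0
for _ in range(int(365 / 0.05)):          # one year, dt = 0.05 day
    S, I, R, V = sir_env_step(S, I, R, V, dt=0.05)
print(f"final fraction ever infected: {R / 1000:.2f}")
```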
The role of the environment as a reservoir for AIv is also supported by the ability of LPAIv to persist in water for extended periods [7][8][9]. Experimental studies have demonstrated that temperature greatly influences viral persistence, with an exponential decay of viral infectivity as temperature increases [7]. In addition, AIv are most stable in freshwater (i.e., low salinity) with pH between 7.4 and 8.2 [8][10][11]. Prolonged infectivity in cold freshwater (≤4°C [2,7,9]) suggests that in the northern hemisphere (implied hereafter) AIv may persist longer in northern than in southern waterfowl habitats and infect migratory birds returning to breeding areas during spring [12][13]. In contrast, decreased survival in warmer water implies limited LPAIv persistence and transmission among non-migratory waterfowl during summer in southern wetland areas [7].
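The temperature dependence reported in [7] is often summarized as a log-linear loss of infectivity over time, with a decay rate that itself grows roughly exponentially with temperature. The minimal sketch below encodes that relationship; the constants k0 and a are hypothetical placeholders, not values fitted in [7].

```python
import math

def remaining_log10_titer(t_days, temp_c, log10_titer0=6.0, k0=0.01, a=0.12):
    """log10 infectious titer remaining after t_days in water at temp_c (°C),
    assuming first-order decay whose rate grows exponentially with
    temperature: k(T) = k0 * exp(a * T). k0 and a are illustrative only."""
    k = k0 * math.exp(a * temp_c)
    return log10_titer0 - k * t_days   # log10 titer declines linearly in time

# Days needed to lose one log10 of infectivity: cold water vs. the warm
# temperatures measured at our summer study sites
for temp in (4, 17, 31):
    days_per_log10 = 1.0 / (0.01 * math.exp(0.12 * temp))
    print(f"{temp}°C: ~{days_per_log10:.0f} days per log10 loss")
```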
Although the transmission of AIv was documented in resident waterfowl in southern areas during winter [14], the role of local populations in the maintenance of AIv during summer is still unknown. Identifying the sources of AIv affecting wintering waterfowl (i.e., AIv circulating in migratory populations vs. present locally in the environment) would improve our understanding of the role of southern wetlands as a reservoir for AIv and migratory birds as AIv carriers, and help determine the risks related to the spread of AIv. The objective of our research was to evaluate the role of summer wetlands and resident waterfowl in California as potential reservoirs for AIv. We hypothesized that AIv subtypes would be unlikely to persist in these wetlands during the summer because of unfavorable environmental conditions (especially high temperatures) and absence of a sufficient waterfowl population to serve as an effective AIv reservoir. We collected up to 20 fecal samples from resident waterfowl and 20 water samples at ten wetlands in two regions of the California Central Valley (Figure 1) at bi-weekly intervals from late July to late August 2010; three wetlands were in the Yolo Bypass east of Davis, CA, and the other seven were 80-100 km north in the Sacramento Valley.
Mallards Anas platyrhynchos were the most abundant waterfowl species at study wetlands followed by cinnamon teal A. cyanoptera, gadwall A. strepera, ruddy duck Oxyura jamaicensis, and wood duck Aix sponsa. In late August, northern pintail Anas acuta, and northern shoveler A. clypeata were also observed at YOL1.
Mean site-specific water temperatures ranged from 16.9 to 30.6°C, pH from 7.0 to 10.0, conductivity from 113.1 to 1246.8 µS/cm, dissolved oxygen (DO) from 28.7 to 296.1 mg/L, turbidity from 22.6 to 873.3 NTU (Nephelometric Turbidity Units), and coliform concentration from 2 to 1600 MPN (Most Probable Number) over the course of the study.
Discussion
Although waterfowl species are known to contribute to the dispersal of AIv from breeding to wintering areas [1], our study is the first to investigate the presence of virus in southern wetlands and their resident waterfowl populations during summer. We detected AIv RNA in 7.9% of fecal samples from resident waterfowl, with a higher detection probability at SACV than at YOLO wetlands. The probability of detection of AIv RNA was significantly lower in water (2.0%) than in feces. We isolated multiple influenza viruses (H3N8, H2N3, H4N8) from fecal samples at several SACV wetlands, indicating circulating LPAIv infections in resident duck species late into the summer.
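The feces-versus-water contrast can be illustrated directly from the counts reported above (29/367 fecal vs. 12/597 water rRT-PCR positives). The sketch below uses Fisher's exact test as one reasonable choice for such a 2 × 2 comparison; it is an illustration, not necessarily the test used in our analysis.

```python
from scipy.stats import fisher_exact

# Rows: sample type; columns: [rRT-PCR positive, negative]
feces = [29, 367 - 29]   # 7.9% positive
water = [12, 597 - 12]   # 2.0% positive
odds_ratio, p_value = fisher_exact([feces, water])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.1e}")  # p << 0.05
```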
We found a low detection of AIv RNA in water samples, although virus isolation from feces indicates that ducks were shedding live virus into wetlands. Virus dilution in wetlands is expected to reduce virus concentration and detection probability, as indicated by the higher rRT-PCR Ct values in water. In laboratory experiments, LPAIv persist under water conditions similar to those measured during our study for a few days to a few months [7][8][10], but there is limited information on the influence of natural wetland characteristics on virus persistence. Microorganisms and filter-feeding bivalves can reduce AIv survival and infectivity [15][16]. Although we conducted detailed statistical analyses, we were not able to show any significant influence of water characteristics, coliform bacteria concentrations, or bird abundance on AIv detection (all P > 0.05; results not shown). We suspect the low detection rate and the limited range of conditions in our study affected this analysis. Our findings indicate the need for improved detection of AIv in water samples, as well as investigation of the biotic and abiotic factors affecting virus survival in natural environments. We isolated several AIv subtypes from fecal samples, indicating current infections of resident waterfowl and environmental contamination in California wetlands during summer. We obtained virus from 17.2% of rRT-PCR-positive fecal samples, which corresponds with the range reported in other studies (3 to 45% [17][18][19][20]). However, we did not isolate AIv from positive water samples. Variations in isolation rates among studies likely result from differences in sampling methods (cloacal or oropharyngeal swabs only, field- or lab-combined swabs, environmental samples), host species, and environmental characteristics. Although we sampled fresh feces, AIv survival may be affected by temperature and humidity, with a loss of infectivity of HPAIv (H5N1) within 1 day at 25°C in dried feces [21]. Low viral titer (as observed at the end of the infectious period [3]), inactivated or non-infectious virus, and the presence of inhibitors [20] may also contribute to low isolation rates. In water, UV radiation might inactivate virus in the water column, and virus dilution may further reduce isolation rates.
The reasons for higher detection of AIv RNA in feces at SACV vs. YOLO wetlands are unclear and may include a lower resident duck density at YOLO [22] or different proportions of naïve juvenile birds between these two regions. In summer 2008, LPAIv infection prevalence (by rRT-PCR) in live ducks was 9.1% (4/44) at Mendota Wildlife Area in the southern Central Valley (i.e., San Joaquin Valley), but only 1.1% at Lower Klamath NWR, about 200 km north of the Central Valley [23]. Given that waterfowl are the primary source of environmental AIv, monitoring the distribution, species, and densities of resident waterfowl several weeks prior to sampling, in relation to wetland habitat (e.g., presence of cows at YOL1 in late July) and management (e.g., water level), may help explain spatial heterogeneities in AIv distribution.
Our findings indicate that resident waterfowl populations in southern wetlands may serve as a source of virus for migratory ducks during winter. The prevalence of AIv infection in waterfowl wintering in the SACV and YOLO regions may reach up to 5% in some species [24,25] and further research is needed to evaluate the extent to which AIv circulating during summer can cause infection during winter. Among the AIv found in our study, H3N8 is commonly found in the Pacific and Central flyways [12,25-31], and has been frequently detected in California. In contrast, H2N3 and H4N8 have been isolated from free-living aquatic birds in Alaska, Canada, and Texas [12,29-31], but have not been previously reported in California. Comparing the genetic sequences of the AIv from our study with reference sequences may provide insight into the origin of these viruses and clarify the importance of summer virus persistence in LPAIv dynamics.
We sampled semi-permanent/permanent wetlands in July-August to minimize the potential for virus from northern-breeding migrants. Adult male northern pintails are one of the first species to migrate into the Central Valley, arriving as early as the first week of August [32]. However, pintail abundance in our study area during early August was low (100s) and these early-arriving migrants concentrate on seasonal wetlands with high-carbohydrate foods (i.e., seeds) needed to replenish reserves depleted by migration. At the semi-permanent/permanent wetlands we sampled, only local breeding populations (i.e., mallard, gadwall, cinnamon teal Anas cyanoptera, wood duck, American coot Fulica americana, pied-billed grebe Podilymbus podiceps, ibis Plegadis chihi, egret Ardea alba) were observed in late July-early August. Although migrants had increased to the thousands by our third sampling period, we observed migrant species on only two of the wetlands sampled at YOLO (i.e., several pintails and northern shovelers on YOL1 and several hundred shorebirds on YOL2), but did not detect AIv at these sites. These observations and the fact that the AIv detection rate did not increase in August indicate that the limited number of early migrants likely did not contribute to the AIv pool.
Our findings suggest that cold environmental water temperatures and high bird numbers may not be required to maintain AIv circulation. Although the low densities of resident waterfowl populations and unfavorable environmental conditions may affect virus circulation and epizootic dynamics (i.e., reduce transmission, decrease virus diversity), our findings showed that California waterfowl and wetlands may serve as a reservoir for AIv. Our findings justify longer-term investigations into the dynamics of AIv infection in resident waterfowl populations to determine the importance of southern summer waterfowl areas as a potential source of infection for migratory wintering ducks, and to evaluate the potential to enhance virus exchange and favor virus reassortment through mixed infections [25]. Such information is fundamental to the understanding of AIv epidemiology and ecology.
Study areas and sample collection
About 10-15% of the wetlands in the Central Valley are semi-permanent or permanent and maintain summer water for the approximately 400,000 resident waterfowl (about 70% mallard Anas platyrhynchos) that breed there ([22], California Dept. of Fish and Game, unpublished data). Most seasonal and semi-permanent wetlands in the Central Valley are managed primarily to provide food and refuge for wintering waterfowl. Managers schedule flooding and periodic disking or burning to encourage growth of swamp timothy, watergrass (Echinochloa crus-galli), and smartweed (Polygonum), or a mix of these and other wetland (e.g., alkali bulrush, Juncus, Paspalum distichum) or moist-soil plants [33]. These and the fall-flooded seasonal wetlands support several million migratory waterfowl during winter [22,34].
To limit the uncertainty inherent to disease surveillance surveys and enhance detection probabilities [35], we conducted repeated sampling in time and space [36]. We monitored ten wetlands for AIv at five major waterfowl wintering areas in the California Central Valley. All wetlands studied were either on federal or state lands and permission for sampling was obtained from the manager of each area. This study did not involve endangered or protected species and no other specific permits were required. We sampled wetlands that have permanent water, are frequently used by resident waterfowl populations, and have historically hosted high densities of migratory waterfowl during winter. In the Sacramento Valley (SACV; 39.37°N, 121.97°W) we sampled two wetlands at the Sacramento National Wildlife Refuge (SAC1 and SAC2), one wetland at Delevan National Wildlife Refuge (DEL1), two wetlands at Little Dry Creek (LDC1 and LDC2), and two wetlands in Gray Lodge Wildlife Area (GRL1 and GRL2). SAC, DEL, GRL, and LDC wetlands were ≤30 km apart. In the Yolo Bypass (YOLO; 38.54°N, 121.61°W), we sampled three wetlands at Yolo Wildlife Area (YOL1, YOL2 and YOL3; Figure 1). Sampled wetlands were bordered by idle grasslands and located in a rice-dominant agricultural landscape [37]. Wetlands were sampled at bi-weekly intervals from late July to late August 2010 (3 sampling periods). However, three wetlands were unexpectedly drained before the end of our sampling: LDC1 after the late July sampling, and LDC2 and YOL3 after the mid-August sampling. In these cases, sampling during the remaining period(s) occurred in an adjacent wetland (500-2,300 m distant). The size of wetlands sampled averaged 18 ha (SD = 16 ha) and ranged from 6 to 58 ha. There was no detectable water flow in any of the wetlands during the sampling period.
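The value of repeated sampling for detection probability can be illustrated with a simple model: if each sample independently detects virus with probability p, then n samples detect it with probability 1 - (1 - p)^n. The independence assumption is a simplification (samples from one wetland are likely correlated), and the per-sample probabilities below are illustrative only.

```python
# Minimal sketch: probability of at least one positive sample when each
# of n samples independently detects virus with probability p.

def detection_prob(p_single: float, n_samples: int) -> float:
    return 1.0 - (1.0 - p_single) ** n_samples

for p in (0.02, 0.079):              # water-like and feces-like per-sample rates
    for n in (20, 60):               # one visit (20 samples) vs. three visits
        print(f"p={p:.3f}, n={n:>2}: P(detect) = {detection_prob(p, n):.2f}")
```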
During each wetland sampling period, we collected 20 samples of 45 ml of surface water at representative wetland vegetation sites distributed throughout the wetland, within areas accessible by foot (≤1.2 m depth). At ten sample locations (every other water sample) we also measured water characteristics (temperature, pH, turbidity, dissolved oxygen (DO), and salinity) using a YSI 6920 V2-1 sonde (YSI Inc., Yellow Springs, OH). At YOL2 in late August these measurements were carried out at only three sample locations because water at the other sample locations was too shallow (<15 cm depth). Approximate bird numbers, including primarily Anseriformes, and to a lesser extent Ciconiiformes, Charadriiformes, Gruiformes, Pelecaniformes, and Podicipediformes, were recorded as low (<50 birds), moderate (50-100 birds), and high (>100 birds) based on binocular observations of open water areas for each wetland sampling period. Bird abundance at sampling time was an indicator of potential viral shedding, allowing us to evaluate the probability of detecting AIv in wetlands with higher relative duck abundance. In large wetlands, we primarily sampled water areas used by ducks to increase AIv detection. During the first and third sampling periods, a composite sample of surface water consisting of four sub-samples from each wetland was collected and sent to a microbiology laboratory (Basic Lab., Chico, CA) to determine the concentration of coliform bacteria.
During each wetland sampling period we collected up to 20 fecal samples at one or more waterfowl roost sites (loafing or feeding locations) along the wetland edge or on islands. Collection of fresh feces offers the opportunity to obtain information on the presence of AIv in wild bird populations without capturing birds [38]. At each site, we collected one sterile Dacron swab sample per distinct fresh feces; although we did not collect samples from adjacent feces, we cannot exclude the possibility that some fecal samples collected at the same roost site were from the same individual. Because of the absence of fresh feces at GRL2 (mid-August) and SAC1 (late August) we collected fecal samples at an adjacent wetland (300-2,000 m distant). Fresh fecal swabs were immediately placed into a 1.5 ml vial of viral transport media [39]. Fecal and water samples were kept cool in the field and shipped with blue ice within 24 hours of collection to the U.S. Geological Survey National Wildlife Health Center, Madison, WI, for laboratory analyses.
Laboratory analyses
Molecular detection of the AIv matrix gene was performed on all individual samples. RNA was extracted from 50 µl of the water or fecal swab sample and the presence of AIv tested according to the AIv matrix gene rRT-PCR method as described by Spackman et al. [40]. Results from rRT-PCR are reported in threshold cycle (Ct) values, which correspond to the number of rRT-PCR cycles required to detect nucleic acid (on a log10 scale); lower Ct values indicate a greater concentration of virus RNA in the sample. There is no recommended Ct limit value for the use of the Spackman et al. [40] method on environmental samples in the literature. However, data from the 2009 surveillance across the United States showed a linear decrease in virus isolation rate with increasing rRT-PCR Ct values in swab samples, with 20% recovery at Ct = 40.0 (n = 1624 samples, R² = 0.99; Ip, unpublished data). Individual samples with Ct values ≤40.0 were considered positive and were further analyzed using the H5 and H7 rRT-PCR tests [40,41]. All matrix gene rRT-PCR test-positive specimens were then tested by virus isolation in embryonating eggs [42]. Note that, in our study, no virus was isolated from samples with rRT-PCR Ct values >40.0, indicating no false negative test results related to the Ct cut-off value. Allantoic fluid from each egg was tested for the presence of hemagglutinating viruses using chicken and turkey red blood cells. Hemagglutination-negative samples were passaged at least once more and retested before the original samples were considered negative. Hemagglutination-positive samples were retested by rRT-PCR to identify AIv isolates. Virus subtyping (for all HA and NA subtypes) was conducted on positive samples from virus isolation by RT-PCR and sequence analysis as described by Hoffman et al. [43].
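To illustrate how Ct values relate to RNA quantity and how the Ct ≤ 40.0 cutoff is applied, the sketch below assumes perfect doubling per PCR cycle (amplification efficiency of 2), so a sample's RNA amount relative to a reference Ct is 2^(Ct_ref - Ct). Real assay efficiencies are somewhat below 2, so the numbers are indicative only.

```python
# Minimal sketch: Ct-to-relative-quantity conversion and positivity call.
CT_CUTOFF = 40.0

def relative_quantity(ct: float, ct_ref: float = 40.0, efficiency: float = 2.0) -> float:
    """RNA amount relative to a sample with Ct = ct_ref."""
    return efficiency ** (ct_ref - ct)

for ct in (30.0, 35.0, 39.0, 41.5):
    call = "positive" if ct <= CT_CUTOFF else "negative"
    print(f"Ct={ct:>4}: {call}, ~{relative_quantity(ct):,.2f}x RNA vs. Ct 40")
```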
"year": 2012,
"sha1": "3d3d94d8696dd5afde9ab918795d38ea991ed0d2",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0031471&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3d3d94d8696dd5afde9ab918795d38ea991ed0d2",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Global Burden of Aortic Aneurysm and Attributable Risk Factors from 1990 to 2017
Background: To date, our understanding of the global aortic aneurysm (AA) burden distribution is very limited. Objective: To provide a full view of the global AA burden distribution and attributable risk factors from 1990 to 2017. Methods: We extracted data on AA deaths, disability-adjusted life years (DALYs), and their corresponding age-standardized rates (ASRs), overall and by age/sex, from the 2017 Global Burden of Disease (GBD) study. The AA burden distribution in 2017 and its changing trend from 1990 to 2017 were presented separately. Spatial divergence was examined at four levels: global, five social-demographic index regions, 21 GBD regions, and 195 countries and territories. We also estimated the AA-related deaths attributable to risk factors. Results: Globally, there were 167,249 AA deaths, with an age-standardized death rate (ASDR) of 2.19/100,000 persons in 2017, among which the elderly and males accounted for the majority. Although reductions in ASRs were observed in developed areas, AA remained an important health issue in relatively underdeveloped areas and may become much more important in the near future. AA may increasingly affect the elderly and the female population. Similar patterns were noted for the AA DALY burden during the study period. AA burden attributable to high blood pressure and smoking decreased globally, with many heterogeneities in their distribution. Discussion: AA remains a growing public health issue worldwide. The changing pattern of AA burden was heterogeneous across locations, ages, and sexes, and it is paramount to improve resource allocation for more effective and targeted prevention strategies. Also, prevention of tobacco consumption and blood pressure control should be emphasized.
Introduction
An aortic aneurysm (AA) is a focal dilatation of the aorta to greater than 1.5 times normal size. The major complications of AA are catastrophic dissections and ruptures, which are usually surgical emergencies and have an ensuing mortality of 90% [1] even with prompt treatment. AA-related mortalities are estimated at about 200,000 per year worldwide [2], representing a considerable public health concern.
Thus far, most of our knowledge of AA burden distribution is largely limited to epidemiological surveys conducted in individual developed countries, covering outdated periods, or restricted to a defined age group or sex. For example, it has been reported that abdominal AA caused 1.3% of all deaths among men aged 65-85 years in developed countries [3]. Therefore, we currently have only sporadic information on the overall blueprint of the global AA burden.
The Global Burden of Disease (GBD) study is an ongoing global collaboration that uses all available epidemiological data to provide a rigorous and comparable measurement of the burden of 328 diseases across 195 countries and territories [4-6]. The Guidelines for Accurate and Transparent Health Estimates Reporting promote best practices in reporting health estimates [7]. The GBD study provides an opportunity to comprehensively assess the distribution and development trends of AA burden and thereby enable policymakers to make informed decisions about how to allocate resources to best improve population health. In a previous study, Sampson et al. described the mortality trends of aortic dissection and aneurysms using data derived from the GBD Study 2010 [8]. To date, no more recent analysis has been reported. Moreover, it is unclear whether AA has undergone epidemiological transitions so far, given the remarkable aging of the population and the changing burden of related risk factors.
To bridge these knowledge gaps, we conducted this study to systematically reveal the levels and trends of global mortality and disability-adjusted life years (DALYs) of AA, and the major risk factors, by location, age, and sex, based on updated data from the GBD study from 1990 to 2017, which may potentially inform health policy decisions.
Study data
Information on annual AA deaths, DALYs, and the respective age-standardized rates (ASRs), by location, age, and sex, from 1990 to 2017, was retrieved from the Global Health Data Exchange (GHDx) query tool (http://ghdx.healthdata.org/gbd-results-tool) [9]. The general methods for the GBD study and for the estimation of AA burden have been detailed in previous studies [8,10].
To analyze the global AA burden distribution, we classified the location information into three levels. Firstly, we used the social-demographic index (SDI) to categorize the countries and territories into five SDI quintiles (high, high-middle, middle, low-middle, and low). Secondly, as shown in Tables 1 and 2, the world was geographically divided into 21 GBD regions. Lastly, we showed the AA burden in 195 countries and territories by drawing world maps.

Statistical analysis

ASR trends can serve as a good surrogate for shifting patterns of disease within a population, and the estimated annual percentage change (EAPC) is a widely used measure of the ASR trend over a specified interval. Consequently, a regression line was fitted to the natural logarithm of the rates: y = α + βx + ε, where y represents ln(ASR) and x refers to the calendar year. EAPC = 100 × (exp(β) − 1), and its 95% uncertainty interval (UI) can also be obtained from the regression model. Additionally, to explore the influential factors for EAPCs, we assessed the association between EAPCs and ASRs (1990)/HDI (2017) at the national level. All statistical analyses were performed using R (Version 3.6.3, R Core Team). A p value of less than 0.05 was considered statistically significant.

Current status of AA deaths

Globally, there were 167,249 AA deaths with an ASDR of 2.19/100,000 persons in 2017 (Figure 1A, B). The AA deaths varied considerably across ages and sexes, with the highest death counts observed in the 75- to 79-year-old group in males and the 80- to 84-year-old group in females. People over 50 accounted for more than 90% of the total deaths; people over 70 accounted for 74.26% of AA deaths in females but only 59.60% in males. In all age groups under 90, more than half or even two-thirds of the death cases were recorded in males, while females over 90 had higher occurrences (Table 1, e-Figure 1A).
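As a concrete illustration of the EAPC definition in the statistical analysis above, the sketch below fits the log-linear regression to a synthetic ASR series and back-transforms the slope. The series is fabricated purely for demonstration and is not GBD data.

```python
# Minimal sketch of the EAPC calculation: regress ln(ASR) on calendar
# year and transform the slope beta into a percentage change per year.
import numpy as np

years = np.arange(1990, 2018)
asr = 2.88 * np.exp(-0.0133 * (years - 1990))   # synthetic declining ASR series

beta, alpha = np.polyfit(years, np.log(asr), 1) # ln(ASR) = alpha + beta * year
eapc = 100 * (np.exp(beta) - 1)
print(f"EAPC = {eapc:.2f}% per year")           # ~ -1.32 for this synthetic series
```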
Trends of AA deaths
At the global level, annual deaths increased gradually. This was noted in both sexes, but females showed a higher increase (72.03%) than males (53.17%) (Table 1). In contrast to the 59.58% increase in overall deaths over the past 28 years, the global ASDR decreased from 2.88/100,000 persons in 1990 to 2.19/100,000 persons in 2017, with an overall EAPC of −1.32 (95% UI = −1.43 to −1.21). The ASDR in male subjects was markedly higher and decreased more obviously than that in females (Figure 2). The proportions of the three age groups (15-49 years, 50-69 years, and 70+ years) in AA deaths remained stable between 1990 and 2017. AA-related deaths in the 70-plus age group remained the highest among the three age groups during the study period (Figure 3).
Regarding the SDI-level analysis, the number of AA deaths increased in all five SDI quintiles between 1990 and 2017, most precipitously in the middle and low-middle SDI quintiles (1.35-fold and 1.21-fold, respectively) and less obviously in the high SDI quintile (0.22-fold). However, the ASDR in the high SDI quintile was on the decline, with an EAPC of −1.92 (95% UI = −2.08 to −1.76). The ASDRs in the other SDI quintiles were stable (Table 1, Figure 2). Regionally, the absolute numbers of AA deaths increased in almost all GBD regions between 1990 and 2017, except for Australasia. The most pronounced increase was observed in high-income Asia Pacific (Table 1). AA deaths at young ages (15- to 49-year-old group) remained stable when the data in 2017 were compared with those in 1990. Alarmingly, the number of AA deaths in the high-income Asia Pacific region and South Asia had grown rapidly, especially among older people (over 70 years old). In other regions with a heavy burden of AA deaths, Western Europe still had the highest number of deaths, and people over 70 years old accounted for a large proportion; Eastern Europe and East Asia showed an obvious increase, while high-income North America achieved a visible decline in deaths (e-Figure 2A, C). Only three GBD regions (Central Asia, high-income Asia Pacific, and Eastern Europe) reported increasing AA ASDRs, and six GBD regions (high-income North America, Southern Latin America, Central Latin America, Western Europe, Australasia, and the Caribbean) reported decreasing AA ASDRs; the ASDRs in the other GBD regions were stable during the study period (Table 1).
Among the 195 countries and territories, ASDRs showed an upward trend in 45, a stable trend in 20, and a downward trend in 130 (Figure 1C). The three countries and territories with the highest EAPC were Georgia, Uzbekistan, and Turkmenistan; the three with the lowest EAPC were Rwanda, Qatar, and Australia. The details are listed in e-Table 2.
Our findings demonstrated a significant negative relationship between EAPC and the ASDRs in 1990 (ρ = −0.5413, p < 0.01, Pearson correlation analysis), suggesting that those countries with a lower disease reservoir at baseline have experienced a more rapid increase in ASDRs (Figure 4A). Conversely, no significant linear correlation was found between the EAPCs of ASDR and the HDI in 2017, although positive associations at the lower end and negative associations for the rest were found when fitting with polynomials (Figure 4B).

Current status of AA DALYs

Globally, AA caused 3,039,858 DALYs in 2017 (Table 2). India, China, Japan, and the United States were the four countries with the highest reported AA DALYs in 2017. Armenia, Montenegro, and Fiji showed the highest age-standardized DALY rates, while Nicaragua, Kyrgyzstan, and Bahrain had the lowest (e-Table 3, e-Figure 3A, B). AA DALYs are similar to deaths in that they differ greatly across ages and sexes, although the peak of the DALY curve is roughly 10 years earlier than that of deaths (e-Figure 1B).
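Returning to the country-level association analyses (Figure 4), the sketch below illustrates the two computational steps: the Pearson correlation between baseline ASDR (1990) and EAPC, and a polynomial fit of EAPC against HDI. The 195-value arrays are simulated placeholders that only illustrate the computation, not the study's data or results.

```python
# Minimal sketch: correlation and polynomial-fit steps used at the
# national level. All inputs are simulated placeholders.
import numpy as np

rng = np.random.default_rng(42)
asdr_1990 = rng.lognormal(mean=0.8, sigma=0.5, size=195)   # baseline ASDRs
eapc = -0.9 * np.log(asdr_1990) + rng.normal(0, 0.5, 195)  # built-in negative link

rho = np.corrcoef(asdr_1990, eapc)[0, 1]
print(f"Pearson rho = {rho:.3f}")                          # negative, as reported

hdi_2017 = rng.uniform(0.35, 0.95, size=195)
coefs = np.polyfit(hdi_2017, eapc, deg=3)                  # flexible rise-then-fall fit
print("polynomial coefficients:", np.round(coefs, 3))
```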
Trends of AA DALYs
At the global level, a 45.49% increase was noted for DALYs over the past 28 years. In contrast, the global age-standardized DALY rate decreased from 51.09/100,000 persons (95% UI = 49.01-54.52) in 1990 to 38.18/100,000 persons (95% UI = 36.21-40.00) in 2017, with an overall EAPC of −1.40 (95% UI = −1.51 to −1.29). A higher age-standardized DALY rate, and a more pronounced decline in it, were noted in males (Table 2).
Regarding the SDI-level analysis, total DALYs increased in all SDI quintiles, with increases ranging from 1.07-fold in the low-middle SDI quintile to 0.03-fold in the high SDI quintile. Correspondingly, the age-standardized DALY rate decreased most markedly in the high SDI quintile, with an EAPC of −2.07 (95% UI = −2.22 to −1.91) (Table 2, Figure 2).
Among the 21 GBD regions, Central Asia showed the largest increase in AA DALYs between 1990 and 2017. In parallel, the AA DALYs of Australasia, high-income North America, and Western Europe decreased substantially. AA DALYs at young ages (15- to 49-year-old group) remained stable when the data in 2017 were compared with those in 1990. Alarmingly, the number of AA DALYs among older people had grown sharply in most Asian areas, including South Asia, East Asia, high-income Asia Pacific, and Southeast Asia. In other regions with a large number of AA DALYs, Western Europe still had the highest number of DALYs, and people over 70 years old accounted for the largest proportion; Eastern Europe showed an obvious increase, while high-income North America and tropical Latin America achieved a visible decline (e-Figure 2B, D). The age-standardized DALY rate increased in only two regions, decreased in seven regions, and remained stable in all the other regions (Table 2).
Among the 195 countries and territories, age-standardized DALY rates showed an upward trend in 42, a stable trend in 18, and a downward trend in 135 (e-Figure 3C). The three countries and territories with the highest EAPC of DALYs were Georgia, Uzbekistan, and Turkmenistan; the three with the lowest were Rwanda, Australia, and Burundi. The details are listed in e-Table 3. The relationship between the EAPC of DALYs and the age-standardized DALY rate/HDI mirrored the pattern seen for deaths (e-Figure 4C, D).
Changes in attributable risk factors
High systolic blood pressure (SBP)-related AA deaths decreased globally during the study period. This was mainly due to the reduction of AA deaths attributable to high SBP in the high SDI quintile, as this proportion displayed a gently increasing trend in all the other SDI quintiles (Figure 5A). Between 1990 and 2017, the proportion of AA deaths attributable to smoking also declined globally. Specifically, smoking-related AA deaths showed a downward trend in all SDI quintiles, most notably in the high SDI quintile (Figure 5B). Both smoking and high SBP made their largest contributions to deaths in the high-middle SDI quintile, where their proportions were much higher than in other regions for a long period of time.
With regard to risk factors, there were many heterogeneities across geographic locations, ages, and sexes, as well as variations between years. Across the 21 GBD regions, AA deaths attributable to high SBP were roughly equal between sexes, while the percentage of AA deaths attributable to smoking in males was much higher than that in females. Encouragingly, in all GBD regions, AA deaths attributable to smoking decreased in 2017 compared with 1990 in both sexes (Figure 6A). By age, among people under 50, the proportion of AA deaths attributable to high SBP in males was slightly higher than that in females, and for both sexes the proportion was higher in 2017 than in 1990. Among people over the age of 50, AA deaths attributable to high SBP among males and females were roughly equal, and declined slightly in 2017. At all ages, the percentage of AA deaths attributable to smoking in males was significantly higher than that in females, and a decreasing trend of AA deaths attributable to smoking over time was seen in both sexes (Figure 6B).
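For readers unfamiliar with how an "attributable" share of deaths is computed, the sketch below applies Levin's population attributable fraction (PAF) formula. The prevalence and relative-risk values are illustrative assumptions only; the GBD study uses more elaborate exposure distributions rather than this simple binary formula.

```python
# Minimal sketch of an attributable-deaths calculation using Levin's
# PAF formula: PAF = p(RR - 1) / (p(RR - 1) + 1). Inputs are assumed
# values for illustration, not GBD estimates.

def paf(prevalence: float, relative_risk: float) -> float:
    excess = prevalence * (relative_risk - 1)
    return excess / (excess + 1)

total_aa_deaths_2017 = 167_249  # global AA deaths reported above
for factor, p, rr in [("smoking", 0.20, 4.0), ("high SBP", 0.30, 2.0)]:
    frac = paf(p, rr)
    print(f"{factor}: PAF = {frac:.1%} -> ~{frac * total_aa_deaths_2017:,.0f} deaths")
```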
Discussion
This research thoroughly revealed the latest global burden of AA deaths and DALYs and the most relevant risk factors from 1990 to 2017. In general, the absolute numbers of AA deaths and DALYs increased, while the overall ASRs of both declined. The patterns and trends of AA burden varied considerably by location, age, and sex, and also differed across risk factors. We believe that our results are comprehensive and representative; they serve as an important extension of previous studies and can help in the design of targeted AA-prevention strategies tailored to different populations. We found a substantial AA burden of 167,249 deaths and 3,039,858 DALYs worldwide in 2017. These figures almost certainly underestimate the impact of AA, for unavoidable reasons [11]. Consideration of local conditions is essential, and targeted health policies will likely be the key to overall success. The area with the highest concentration of AA burden, requiring special attention, was the high SDI quintile, not only in terms of the absolute numbers of deaths and DALYs but also the ASRs of both. Among the 21 GBD regions, the heaviest burdens were observed in Western Europe, South Asia, East Asia, high-income Asia Pacific, and high-income North America. AA was even reported to be the fifth leading cause of cardiovascular disease DALYs in high-income Asia Pacific [10]. At the country and territory level, Japan, India, China, and the United States were the four countries with the highest reported AA deaths and DALYs; this further highlights the need for greater awareness of AA in those areas.
Understanding the temporal trends of AA burden also facilitates the initiation of more targeted public-health strategies. In contrast to the 59.58% increase in deaths and 45.49% increase in DALYs over the past 28 years, the ASDR and the age-standardized DALY rate decreased, indicating that population aging and growth mostly accounted for the absolute increase in global AA burden. Regarding the analysis of SDI levels, AA deaths and DALYs increased in all SDI quintiles, precipitously in the middle and low-middle SDI quintiles and less obviously in the high SDI quintile. Correspondingly, the ASDR and the age-standardized DALY rate in the high SDI quintile were on the decline. From a regional perspective, the most pronounced increase in AA burden was observed in high-income Asia Pacific, Central Asia, and Eastern Europe, while high-income North America, Western Europe, and Australasia had made substantial strides in reducing ASRs. These findings are consistent with previously published studies [12-16], and similar patterns of epidemiological change were observed in the ASRs of overall cardiovascular diseases [10]. Policymakers have to take this phenomenon into account to formulate relevant policies more rationally, either by increasing support for AA prevention and treatment or by maintaining the current favorable trend, respectively.
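The claim that absolute deaths can rise while the ASR falls follows directly from direct age standardization. The toy numbers below are invented for demonstration; the GBD uses its own world standard population with many more age groups.

```python
# Minimal sketch of direct age standardization with two age groups,
# showing crude deaths rising while the ASR falls as the population ages.
import numpy as np

std_weights = np.array([0.7, 0.3])        # standard population weights: young, old
rates_1990 = np.array([0.5, 20.0]) / 1e5  # age-specific death rates (per person)
rates_2017 = rates_1990 * 0.8             # rates fall 20% in every age group

pop_1990 = np.array([8e6, 2e6])           # younger population structure
pop_2017 = np.array([7e6, 5e6])           # older, larger population

for year, rates, pop in [(1990, rates_1990, pop_1990), (2017, rates_2017, pop_2017)]:
    deaths = (rates * pop).sum()
    asr = (rates * std_weights).sum() * 1e5
    print(f"{year}: deaths = {deaths:,.0f}, ASR = {asr:.2f}/100,000")
```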
We also found that the amplitude of ASR variation from 1990 to 2017, namely the EAPC, was significantly negatively correlated with the baseline ASR in 1990. For those countries and territories with a higher ASR in 1990, the AA burden was more likely to decrease. One possible explanation is that countries and territories with higher ASRs are also more likely to bear a heavier burden of cardiovascular disease of all kinds. Significant public health efforts, such as improved management of various cardiovascular disease risk factors, continuous disease monitoring, and prevention of complications, have been made to counter this problem. Those countries are also more likely to consider AA a high priority in disease-prevention programs. For example, the UK [17], Sweden [18], the USA [19], and Canada [20] adopted national screening policies, which undoubtedly save lives and alleviate the AA burden imposed on medical healthcare systems. Furthermore, we found positive associations between EAPC and HDI in 2017 at the lower end, while negative associations were observed when the HDI exceeded about 0.8 (the 'very high' level). The tide of AA burden appears to have been mitigated in countries with very high HDIs. The favorable pattern of an obvious downward trend in ASRs may reflect the benefits of robust health systems that are stemming AA burden via risk factor modulation. In addition, improvements in treatment [21] and supportive nursing measures [19] in recent years have further promoted this trend [22,23]. These findings provide clear confirmation that AA prevention can no longer be a priority of only well-developed areas. There are challenges ahead for those countries with high/medium HDIs where substantial growth in ASRs was noticed. The health systems in these regions are insufficient to cope with a foreseeable future increase in AA burden. This may indicate the need for more active, localized prevention policy interventions to tackle the diverse challenges faced by the health-care systems. Moreover, it is conceivable that the extent of the AA burden in low-HDI areas was underestimated. It is essential to improve disease monitoring in these areas and promote the implementation and evaluation of relevant health policies. More targeted strategies aimed at modifying multiple risk factors and improving the availability and affordability of medical care for AA are urgently needed.
The proportions of annual deaths and DALYs among the young remained stable. The major burden of AA falls on the elderly over 50 years old, who account for more than 90% of the total. The extremely high AA burden in the high SDI quintile is also mainly due to the high proportion of elderly cases. We also noticed that the increase in AA cases in high-income Asia Pacific, East Asia, South Asia, and Eastern Europe, and the decrease in AA cases in high-income North America, were dominated by changes among people over 50 (especially those over 70). The increase in life expectancy, and the consequent rapid and ongoing increase in the elderly population in those areas in recent years, is likely an important reason. Moreover, it has been reported that AAs are now presenting later in life [13,22]. Based on trial evidence of screening efficacy, Howard et al. suggested that older age groups should be considered in screening programs [24].
Both the absolute numbers and the ASRs of deaths/DALYs in males were considerably higher than those in females. The peak age of AA burden in males was earlier than that in females. These discrepancies partially reflect the different risk factor distributions between sexes; the protective role of estrogen, local differences in vascular hemodynamics, and many other pathophysiological factors may also contribute to the sex differences. In addition, it should not be ignored that these findings may be partly due to selection bias, in which case males were more likely to be screened for AA than their female counterparts. We also noted the global increase in AA-related deaths for both sexes. It is worth noting that females showed a higher increase in deaths than males; simultaneously, the ASDR of males decreased more obviously. The observations for DALYs were congruent with those for deaths. The sex ratios for AA burden will probably change in the future. This may be an early warning that AA may increasingly affect females, like other cardiovascular diseases [12,14,25].
Many health conditions and lifestyle habits put the aortic wall at risk of damage. Among them, high SBP [26] and smoking [27] have been extensively investigated. Many of the disparities in AA burden distribution can be explained by the heterogeneity of risk factor exposures. From 1990 to 2017, the proportions of AA deaths attributable to high SBP and smoking declined, more pronouncedly in the high SDI quintile. This is largely aligned with the temporal trends of the ASRs of death and DALY. Moreover, AA deaths attributable to high SBP were roughly equal between sexes, while the percentage of AA deaths attributable to smoking in males was much higher than that in females. This is also consistent with the male bias in AA death/DALY cases and ASRs. Therefore, more aggressive preventive interventions are needed, with emphasis on smoking prevention or cessation and blood pressure control, in order to maintain the downward trend of ASRs.
Owing to the widespread use of antihypertensive medications, global mean blood pressure has remained constant or has decreased slightly over the past four decades. In contrast, the global prevalence of hypertension has increased, while the proportions of hypertension awareness, treatment, and blood pressure control have remained low [28]. Countries and territories in the low, low-middle, and middle SDI quintiles lagged particularly far behind, and their contribution to AA deaths was increasing. It is urgent to strengthen blood pressure management to correct this trend.
It is promising that the proportion of AA deaths attributable to smoking has been decreasing over the last couple of decades. The decline might be primarily attributable to comprehensive anti-tobacco policies [22]. Large reductions in the estimated age-standardized prevalence of daily smoking were observed at the global level, especially pronounced in Australia, Brazil, China, Norway, Sweden, Switzerland, and the United States, suggesting sustained progress in tobacco control [29]. Notably, a possible dose-response relationship between smoking and AA deaths has been reported [26], which supports this interpretation. Despite the great success of years of anti-tobacco efforts, smoking remains the leading risk factor for AA burden, and the pace of progress in reducing smoking prevalence has been heterogeneous. As more countries begin to recognize the enormous preventable tobacco-induced AA burden of death and disability, and given the desire of most smokers to quit once they become aware of the risks of tobacco use, the goal of a tobacco-free world by 2040 will become more feasible [30].
Apart from high SBP and smoking, several other factors are also important in determining the risk of AA burden, including elevated cholesterol and triglycerides, a sedentary lifestyle, obesity, a history of arterial aneurysms in other blood vessels, a family history of aneurysms, bicuspid aortic valve, and a history of chronic inflammatory disease. Atherosclerosis shares many risk factors with abdominal AA and is strongly associated with its development, while degenerative changes most often cause thoracic AA. From Figure 5 we can see that smoking and high SBP have always been the most important contributors to AA deaths. Their total contribution has been declining year by year, from 85.23% to 68.99%, underscoring the need to pay more attention to the other AA risk factors that have not yet attracted sufficient attention. We hope that future, more wide-ranging epidemiological investigations will also address these risk factors, to facilitate a comprehensive analysis of the attribution of AA burden.
Health policymakers need timely and accurate information on AA burden to assess the effectiveness of current policies and allocate limited resources. Although the GBD study 2017 provided high-quality estimates of global AA burden, several limitations affected the present investigation. Differences in data collection and the quality of data sources were inevitable. In many countries, reporting bias due to the scarcity or inferior quality of existing data was likely significant. The true number of AA-related deaths might be underestimated, especially in low-SDI areas, where imaging examinations are less routine and autopsies are rarely performed. For these countries and territories, results mainly relied on covariates known to be associated with AA, trends in neighboring countries, or a combination of both. It is even possible that in some regions and countries the burden of AA appears to be increasing partly because of improved diagnosis and reporting. Another limitation is the lack of data concerning AA-related risk factors other than high SBP and smoking. These were not available in the GBD 2017 datasets and, as such, are not accounted for.
Conclusion
In conclusion, AA remained an important public health issue with a growing burden globally. It was noteworthy that the changing pattern of AA burden presented a mixed picture: on the one hand, although reductions in ASRs have been achieved in some developed areas that used to bear a heavy AA burden, deaths and DALYs caused by AA remained an important health issue in relatively underdeveloped areas and may become much more important in the near future; on the other hand, the trend that AA may increasingly affect the elderly and females cannot be ignored. Prevention of AA attributable to tobacco consumption through government policy intervention, together with blood pressure control, should be emphasized in several high-risk areas.
Data Accessibility Statement
All the original data used for analysis in the present study are publicly accessible on the website of the Institute for Health Metrics and Evaluation (IHME) and can be downloaded for free at http://ghdx.healthdata.org/gbd-results-tool.
"year": 2021,
"sha1": "44af85a91ae494b33d5fe1c89919765f09286329",
"oa_license": "CCBY",
"oa_url": "http://globalheartjournal.com/articles/10.5334/gh.920/galley/995/download/",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "37558c050ac51850ad58ad2cce7caf168963bc15",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Mini Review of the Zoonotic Threat Potential of Influenza Viruses, Coronaviruses, Adenoviruses, and Enteroviruses
During the last two decades, scientists have grown increasingly aware that viruses are emerging from the human–animal interface. In particular, respiratory infections are problematic; in early 2003, the World Health Organization issued a worldwide alert for a previously unrecognized illness that was subsequently found to be caused by a novel coronavirus [severe acute respiratory syndrome (SARS) virus]. In addition to SARS, other respiratory pathogens have also emerged recently, contributing to the high burden of respiratory tract infection-related morbidity and mortality. Among the recently emerged respiratory pathogens are influenza viruses, coronaviruses, enteroviruses, and adenoviruses. As the genesis of these emerging viruses is not well understood and their detection normally occurs after they have crossed over and adapted to man, strategies for novel virus detection should ideally include intensive surveillance at the human–animal interface. If one accepts the paradigm that many novel emerging zoonotic viruses first circulate in animal populations and occasionally infect man before they fully adapt to man, early detection at the human–animal interface will provide earlier warning. Here, we review recent emerging virus threats for these four groups of viruses.
Figure 1 | The geographical locations of first detections (with known reservoirs) for recently emerged adenoviruses (Ads), enteroviruses (EVs), coronaviruses, and influenza viruses. Zoonotic (coronaviruses and influenza viruses) and non-zoonotic viruses (Ads and EVs) are shown. For zoonotic viruses, the hosts range from cattle, bats, chickens, camels, wild birds, cats, ferrets, goats, and humans (from left to right). The different sizes of the circles represent the number of human cases during the first outbreaks of the emerging respiratory viruses. Human cases of adenoviral infections are shown in blue; human cases of enteroviral infections are shown in yellow; human cases of coronaviral infections are shown in green; and human cases of influenza viral infections are shown in red.

Zoonotic Influenza

Introduction and Epidemiology

Influenza viruses are RNA viruses that are members of the orthomyxovirus family and classified into four types: A, B, C, and D (2). As shown in Table 1, these four types of viruses are characterized by their immunologically distinct nucleoprotein and matrix protein antigens. Influenza A and B viruses carry hemagglutinin (HA), which binds a sialic acid receptor, allowing the virus to enter the host cell, and neuraminidase (NA), which cleaves the sialic acid to release the virus. Similarly, influenza C and D viruses contain HA-esterase fusion glycoproteins that also allow for the attachment of viral and cellular membranes. Antigenic shifts (influenza A only) in HA, NA, and the HA-esterase proteins contribute to the generation of novel viral strains. The host range of influenza viruses includes humans, birds, pigs, bats, and other livestock animals such as cattle and goats. The network of influenza viral transmission is complex, with both inter- and intraspecies transmission. As the viruses continue to change in their genetic sequences, ongoing research is imperative to investigate the ecology of these viruses at the human-animal interface, to control further spread of infections, and to prevent the risk of future pandemics.
Swine Influenza H3N2
Influenza A virus (IAV) H3N2 subtypes are frequently reported in swine, avian, and canine hosts; they are responsible for highly infectious respiratory disease in pigs and have been examined as a potential cause of influenza in humans. One study examining the role of IAV in pigs at US agricultural fairs reported an average influenza A prevalence of 77.5% among 161 swine across seven fairs (3). The genomic sequences of the viruses isolated from the swine were ≥99.89% similar to the H3N2 viruses isolated in humans. At these fairs, IAVs were detected at least 1 day before symptoms of the virus were observed in humans, indicating that H3N2 was transmitted from pigs to humans in this case.
H1N1
Since 2009, the H1N1 virus has posed a significant threat to livestock workers and the greater community and has now become a seasonal influenza virus that circulates in humans. To explore the role of swine production facilities in the development of new swine-like influenza viruses, the spatiotemporal association between weekly influenza-like illnesses (ILIs) in humans and the location of pig farms was investigated in North Carolina over four influenza seasons (4). Analyses showed that the H1N1 pandemic years, 2009-2010 and 2010-2011, were closely associated with earlier peaking of ILI cases. These findings suggest that increased exposure to pigs was associated with earlier observations of the greatest number of human H1N1 cases. In China, the transmission of influenza A between humans and pigs on six farms is being examined using a One Health approach, taking into consideration the interconnectedness of humans, animals, and the environment (5). Findings suggest that both A(H1N1)pdm09-like and swine-lineage H1N1 and swine-lineage H3N2 viruses are circulating in swine workers, and that these viruses likely reassort and cross species within the pig farms; as such, additional research is needed to understand the relationship between cross-species transmission of viruses in humans and pigs.
Avian Influenza
Avian influenza viruses are the largest group of influenza A viruses reservoired in aquatic birds or poultry. Although infrequently transmitted to humans, many cases have now been reported. Human infection with avian influenza can lead to serious health conditions, including death. The first outbreak of an IAV strain, H5N1, in humans occurred in Hong Kong SAR, China, in 1997, infecting 18 humans (6). The first identified cases of human infection with H7N2, another avian influenza, occurred in North America with two human cases reported in 2002 (7). Another variation of the virus, H7N7, was the first avian influenza strain reported in Europe; it infected 89 humans in 2003. In 2004, the first human cases of H10N7 infections were observed in Africa (8). It is important to note that H5N1 virus outbreak occurred again in 2004, 7 years after its first outbreak in humans, infecting more than 650 humans and causing more than 386 deaths worldwide (9). The avian influenza viruses have continuously evolved, causing serious infections among humans across the world.
H7N9
The H7N9 virus, a sporadic subtype of avian influenza A virus, was first reported in humans in China in 2013. Since the first outbreak, China has experienced annual epidemics, with a cumulative total of 1,562 reported cases, 40% of which have led to death, as of September 2017 (10). The incidence of H7N9 infections has been increasing in both humans and poultry, and in 2017 alone 764 infections were reported (11). Although H7N9 was first recognized as a low pathogenic avian influenza virus, two divergent lineages were detected in 2016, including a highly pathogenic avian influenza variant (12). According to the Centers for Disease Control and Prevention (CDC), H7N9 is now recognized as the virus with the greatest potential to cause a pandemic owing to its rapid genetic changes over the last 5 years. This further supports the need to improve disease control strategies and increase efforts to develop an effective vaccination strategy, as the spread of H7N9 infection poses a threat to the poultry industry.
Influenza D
Influenza D virus (IDV) is a novel influenza virus that is structurally different from the other influenza viruses. IDV was first isolated in 2011 from pigs in the USA and, since the first report, viral infection has been reported in various locations in the USA, Europe, and Asia. In a serological study, cattle workers and non-cattle-exposed adults in Florida were screened for IDV antibodies (13). An IDV seroprevalence of 97% was observed among cattle workers, compared with less than 20% among non-cattle-exposed adults, suggesting a greater risk of IDV infection for cattle workers. During a swine respiratory disease outbreak in Northern Italy in 2015, the IDV genome was detected and isolated in both pig and cattle herds (14). The viral genome isolated from the pigs was closely related to the viral genome isolated in the USA in 2011. Additionally, archived serum samples from 2009 had lower IDV antibody titers compared with the serum samples collected in 2015. These findings suggest that the incidence of IDV infections in pigs may have increased over time, and therefore IDV may pose a public health threat to the community.
SARS-CoV

Recently, it has also been suggested that bats may play a role in direct human transmission, as bat SARS-like coronaviruses have been identified in some species (19).
In the past decade, teams from the Sabin Vaccine Institute and Baylor College of Medicine have been working toward the development of a vaccine for SARS-CoV. Although initial reports indicated that a vaccine may be ready for human clinical trials in 2017, progress has been slow and few human SARS vaccine trials have been conducted to date.
MERS-CoV
Middle East respiratory syndrome was first recognized in Saudi Arabia in 2012. Many cases were linked to travel to, or residence in, countries in and near the Arabian Peninsula. Symptoms include severe acute respiratory illness with fever, cough, and shortness of breath. There is limited human-to-human transmission of MERS-CoV, but exposure to camels is a risk factor for infection, with seroprevalences 15-23 times higher in camel-exposed individuals (20). Despite this, major health care-associated transmission of MERS-CoV was reported in the Middle East and Korea, with outbreaks characterized by interhospital spread related to overcrowding and a lack of personal protective equipment (21). As of January 9, 2017, a total of 2,067 MERS-CoV cases worldwide had been reported to the WHO (22).
The cocirculation of CoVs in their animal reservoirs (camels and bats) raises important questions about the evolution of MERS-CoV. In a study conducted between 2014 and 2015 in Saudi Arabia, researchers found that dromedary camels share three CoV species with humans, including betacoronavirus 1, MERS-CoV, and a CoV 229E-related virus (23). With the aim of reducing MERS-CoV transmission to humans, Haagmans et al. developed a vaccine for camels using a poxvirus vector (24). This vaccine significantly reduced virus excretion among camels and conferred cross-immunity to camelpox infection (25).
Enteroviruses

Introduction and Epidemiology
Enteroviruses are small, positive-sense, single-stranded RNA viruses in the Picornaviridae family. There are 12 species of EVs found globally, including EV A-J (EV-A, B, C, D, E, F, G, H, and J) and rhinovirus A-C (RV-A, B, and C). With low replication fidelity and frequent recombination, EVs show substantial genetic diversity and a potential for cross-species infection. In February 2013, the International Committee on Taxonomy of Viruses (ICTV) approved changes to EV and rhinovirus species names after many of the human EV species were identified and isolated in non-human hosts. Based on an analysis of Picornaviridae hosts listed in the ICTV database and subsequent studies of EV infection in non-human primates, there is growing evidence of a potential for future zoonotic transmission between animals and humans (26,27). Among the most important emerging respiratory viruses are EV68, EV71, coxsackieviruses, echoviruses, rhinoviruses, and polioviruses.
Enterovirus transmission occurs year-round, with seasonal peaks in the summer and fall (June-October). Infants less than 1 year of age are most susceptible to infection, and males are at increased risk of infection until the age of 20 years (28). The predominant mode of transmission is a direct or indirect fecal-oral route; however, certain serotypes are transmitted via the respiratory route, in tears, and via fomites (29). Immunity to EVs is serotype-specific, and most serotypes cause only mild respiratory infections.
Rhinoviruses
Rhinoviruses are small, single-stranded RNA viruses in the picornavirus family that are responsible for more than half of all upper respiratory tract infections. In addition to exacerbating asthma and chronic obstructive pulmonary disease, rhinoviruses have also been associated with acute respiratory hospitalizations among children (30). In a large prospective study of US pneumonias, rhinoviruses were identified as the second most prevalent etiology of pneumonia in children after respiratory syncytial virus and the most common etiology among adults (31). There are more than 150 unique types of rhinoviruses. Among the three genotypes (A, B, and C), types A and C are most often associated with increased morbidity and secondary bacterial infection. In animals, rhinovirus type C has been associated with morbidity in chimpanzees (32). With such an array of unique serotypes, no vaccines or approved antiviral therapies have been commercially produced; however, experiments have suggested that vaccines and antiviral therapy may be possible (33,34).
EV-D68
Enterovirus D68 has caused sporadic respiratory disease outbreaks across Asia, Europe, and the USA since the 1960s; however, in 2014, a nationwide outbreak of D68 was associated with severe respiratory illness in the USA, resulting in 14 deaths out of 1,150 known cases (35). The CDC found that 36% of all EVs tested during this outbreak were D68 and that patients with a history of asthma were at a disproportionately increased risk of infection (36). One study of the 2014 outbreak found that 59% of patients seen with EV-D68 in hospitals across Missouri, Illinois, and Colorado were admitted to intensive care units and 28% received ventilator support (35). In a study evaluating EVs in non-human primates, EV-D68 was detected as a recombinant zoonotic strain (37).
Enterovirus 71 (EV71)
While there are several strains of coxsackievirus and EVs that can cause hand-foot-and-mouth disease (HFMD), EV71 is most commonly associated with severe disease outcomes. HFMD predominantly affects young children and is found worldwide, but especially in the Asia-Pacific region. Although EV71 is not typically detected in animals, recent research has indicated that it infects non-human primates (38). Various antiviral therapies are currently under study, including small molecules and monoclonal antibodies. Vaccine candidates based on recombinant proteins, attenuated strains, inactivated whole virus, virus-like particles, and DNA are also in development, with two vaccines currently available in China (39).
Human Adenoviruses (Ads)

Introduction and Epidemiology
First discovered in 1953 by Rowe et al., Ads are non-enveloped, double-stranded DNA viruses with 57 unique serotypes, some of which are specific for attacking the respiratory tract, conjunctiva, or gastrointestinal tract (40). Ad infections produce a wide variety of symptoms, including rhinorrhea, nasal congestion, cough, sneezing, pharyngitis, keratoconjunctivitis, pneumonia, meningitis, gastroenteritis, cystitis, and encephalitis. Illness may be asymptomatic, mild, or severe; however, immunocompromised patients and infants are at increased risk of severe morbidity and death.
Ad Outbreaks
Outbreaks of respiratory Ad infection are common in both military recruits and other large training groups, such as police trainees. Large, persistent epidemics of Ad type 4-associated respiratory disease have been documented in various military trainees (41-43). In response to the increased disease burden from Ad4 and Ad7 in military recruits, Teva has made a vaccine available to military recruits in the USA (42). After a 12-year hiatus, oral Ad4 and Ad7 vaccines were reintroduced in late 2011 as an infection control measure for military recruits (42). After reintroduction, military recruits experienced a 100-fold decline in Ad disease burden, which accounted for the prevention of approximately 1 death, 1,100-2,700 hospitalizations, and 13,000 febrile Ad cases per year among trainees (44).
Emerging Ads
Outbreaks of Ad in the general population have been characterized by infection with novel viruses such as Ad7h, Ad7d2, Ad14a, and Ad3 variants. These novel viruses are sometimes associated with high attack rates and a high prevalence of pneumonia. Mortality is also high among patients with chronic disease and in the elderly.
One of the most important novel serotypes, Ad14, previously rarely reported, is now considered an emerging Ad type causing severe and sometimes fatal respiratory illness in patients of all ages (45). Beginning in 2005, Ad14 cases were suddenly identified in four locations across the USA (46); the strain associated with this outbreak was different from the original Ad14 strain isolated in the 1950s. The novel strain, Ad14a, has now spread to numerous US states and is associated with a higher rate of severe illness when compared with other Ad strains.
Novel Ad species have also recently been detected in cross-species infections from non-human primates to man in the USA and between psittacine birds and man in China (47). These cross-species infections indicate that Ads should be monitored for their potential to cause cross-species outbreaks. In a recent review of the risks of potential outbreaks associated with zoonotic Ad (48), it was noted that intense human-animal interaction is likely to increase the probability of emergent cross-species Ad infection. Additionally, the recombination of Ads with latent "host-specific" Ads is the most likely scenario for adaptation to a new host, either human or animal.
Currently, there are no FDA-approved antivirals for Ad infection; however, the best antiviral success has been seen with ribavirin, cidofovir, and most recently brincidofovir, an analog of cidofovir (49).
CONCLUSION
As it is clear that many emerging respiratory viruses have zoonotic reservoirs, the design and implementation of effective control strategies are increasingly important. It has been suggested that avoiding direct contact with animals known to be zoonotic reservoirs for these viruses is one potential strategy (50); however, in populations where contact at the human-animal interface is common, this may not be an acceptable solution.
Complex disease problems cannot be solved by one institution or one discipline; this presents opportunities to adopt the One Health approach of working across disciplines to integrate human, animal, and environmental health in solving complex problems. Although some of the respiratory viruses described here are found almost exclusively in humans (Ad strains), many of the most important emerging respiratory viruses are found at the human-animal interface. This suggests that strategies for novel virus detection should incorporate global surveillance at the human-animal interface to detect potentially emerging zoonotic viruses. Such surveillance will require collaboration and cooperation among many stakeholders in order to address emerging and novel viral diseases.
AUTHOR CONTRIBUTIONS
EB, JF, and JC conducted the literature review and wrote the manuscript; GG conceived the idea of the review and helped revise the manuscript to add important scientific content and refine the interpretation of the results. All the authors reviewed the final version of the manuscript and agreed to its submission.
FUNDING
This work was supported in part by NIH/NIAID grant R01AI108993-01A1 (Gregory Gray PI). | 2018-04-09T13:06:25.212Z | 2018-04-09T00:00:00.000 | {
"year": 2018,
"sha1": "70068338c7b42c6afe98139eadbf9b150a0d1af1",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2018.00104/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "70068338c7b42c6afe98139eadbf9b150a0d1af1",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
32536875 | pes2o/s2orc | v3-fos-license | Immobilized pH in culture reveals an optimal condition for somatic cell reprogramming and differentiation of pluripotent stem cells
Abstract Aim One of the parameters that greatly affects homeostasis in the body is the pH. In reproductive biology, germ cells such as oocytes and sperm are exposed to severe changes in pH, resulting in dramatic changes in their characteristics. To date, the effect of the pH has not been investigated with regard to the reprogramming of somatic cells or the maintenance and differentiation of pluripotent stem cells. Methods To investigate the effects of the pH on cell culture, induced pluripotent stem cells (iPSCs) were produced and embryonic stem cells (ESCs) were differentiated into mesendoderm and neuroectoderm at medium pH values from 6.6 to 7.8. Using cells from mice carrying the Oct4-GFP (green fluorescent protein) reporter, the effects of pH changes were examined on the timing of reprogramming and colony formation, and on the morphology and direction of differentiation of the ESCs. Results The colony formation rate and the timing of the reprogramming of the somatic cells varied depending on the pH of the culture medium. In addition, mesendodermal differentiation of the mouse ESCs was enhanced at the high pH level of 7.8. Conclusion These results suggest that the pH of the culture medium is one of the key factors in the induction of the reprogramming of somatic cells and in the differentiation of pluripotent stem cells.
ORIGINAL ARTICLE
Immobilized pH in culture reveals an optimal condition for somatic cell reprogramming and differentiation of pluripotent stem cells Narae Kim | Naojiro Minami | Masayasu Yamada | Hiroshi Imai
| INTRODUCTION
The pH is one of the most important parameters in life, specifying the acidity or basicity of an aqueous solution. Variations in the pH influence every biological process at the cellular, tissue, and whole-body level. 1 In reproductive processes, the vaginal pH in women is normally maintained between 4.0 and 5.0. 2 Semen in men is normally maintained at a pH of >8.0. 3 Ejaculation raises the vaginal pH within a few seconds, 4 and the vaginal pH returns to a fairly acidic state during pregnancy. 5 At the cellular level, the acrosomal reaction results in an increase of the internal pH of sperm at fertilization. 6 The intracellular pH during oogenesis and embryogenesis varies at each developmental stage. 7,8 Thus, germ cells and embryos are exposed to pH fluctuations, with dramatic changes in their traits during the development of individuals. 9 Regarding the processes of cell differentiation and cellular reprogramming, there is little information concerning the influence of the pH on these phenomena.
The pH affects many molecular mechanisms inside and outside of cells in order to maintain homeostasis. The proton gradient in cells is maintained by pumps and channels, such as Na+/H+ exchangers, HCO3−/Cl− exchangers, V-type H+ pumps, and voltage-gated H+ channels on the plasma membrane. 1,10 Active or passive pH changes affect cell traits such as motility, enzymatic activity, cell-cycle progression, and apoptosis. 1,11 Cell motility also depends on structural changes of the cytoskeleton, which are affected by the environmental pH. 12 Recent studies also indicate that actin proteins are essential in transcriptional activation during the differentiation and reprogramming of cells. 13,14 However, the effects on somatic cell reprogramming of phenomena caused by the pH in culture have not been investigated. Regarding the effects of the pH on cell differentiation, mesenchymal stem cells are affected by the pH during differentiation into osteogenic and chondrogenic cell lineages. 15 Also, a high pH in murine embryonic stem cell (ESC) culture is known to enhance cardiac cell differentiation. 16 However, the effect of the pH on the differentiation of ESCs has not been well investigated across a wide range of pH values.
Distinctive changes in the molecular activity of cells can be observed in the processes of cell differentiation and the reprogramming of somatic cells. 17 During the reprogramming of fibroblasts, the appearance of cells changes from mesenchymal to epithelial, 18 and the reverse phenomenon, the differentiation of stem cells with major changes in the epigenetic state, also occurs. 19 In this article, the effects of the pH were examined during cellular reprogramming and differentiation in vitro by using mouse embryonic fibroblasts (MEFs) and ESCs from transgenic mice that carried the Oct4-green fluorescent protein (GFP) reporter: these mice are well suited to estimating pluripotency. 20,21 Cells cultured in media at various pH levels were then examined for colony formation, for the timing of the reprogramming of the MEFs, and for the differentiation of the ESCs to the mesendoderm (ME) and neuroectoderm (NE).
| Chemicals and animals
Unless otherwise noted, all of the chemicals were purchased from Sigma-Aldrich (St. Louis, MO, USA). The MEFs and ESCs were derived from transgenic mice that carried the Oct4-GFP reporter (RIKEN BioResource Center, Ibaraki, Japan) 22 and were used in experiments for the reprogramming of somatic cells and differentiation. The MEFs and ESCs that originated from the transgenic mice were obtained and treated as described previously. 23 For the preparation of the MEFs, embryos were collected at embryonic days 13.5-15.5, as described previously. 23 The isolated embryonic cells were maintained in Dulbecco's modified Eagle medium (DMEM; GIBCO Life Technologies, Grand Island, NY, USA) that contained 10% (v/v) fetal bovine serum (FBS; SAFC Biosciences, Lenexa, KS, USA), 73 IU/mL penicillin (Sigma-Aldrich), and 50 μg/mL streptomycin (Sigma-Aldrich). Fibroblasts at passage four or lower were used for the reprogramming of the somatic cells.
For the maintenance of the ESCs, they were cultured in a medium that was mixed equally with Neurobasal medium (GIBCO Life
| Retroviral transfection
In order to perform the retroviral transfection, pMXs-based ret-
| Adjusting and monitoring the pH in the culture media
Each medium pH condition was controlled by changing the concentration of NaHCO3 (Wako, Osaka, Japan) under 5% (v/v) CO2 according to the Henderson-Hasselbalch equation method applied to tissue culture vessels. 27 The fresh ESC medium was pre-incubated for 24 hours in 5% (v/v) CO2 in air, and the pH of the medium was confirmed to be within 0.1 of the predicted pH. The pH of each medium before and after the medium change was checked with a pH meter (B-212; HORIBA, Kyoto, Japan). The stale medium was replaced immediately with fresh medium after the pH was checked every day (Figs 1A and S1).
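As a rough illustration of this adjustment, the NaHCO3 concentration needed for a target pH under a given CO2 fraction can be estimated from the Henderson-Hasselbalch relation, pH = pKa + log10([HCO3−]/(s · pCO2)). The sketch below is only a back-of-the-envelope aid, assuming textbook values for bicarbonate-buffered media at 37°C (pKa of 6.1, CO2 solubility of 0.03 mmol/L per mmHg, atmospheric pressure of 760 mmHg); these constants are not taken from this article.

```python
# Textbook constants for bicarbonate-buffered culture media at 37 deg C
# (assumed values; not parameters reported in this article).
PKA = 6.1          # apparent pKa of the CO2/HCO3- buffer system
S_CO2 = 0.03       # CO2 solubility, mmol/L per mmHg
P_ATM = 760.0      # atmospheric pressure, mmHg

def nahco3_for_target_ph(target_ph: float, co2_fraction: float = 0.05) -> float:
    """Estimate the NaHCO3 concentration (mM) that yields `target_ph`
    at equilibrium with the given CO2 gas fraction (default 5%)."""
    p_co2 = co2_fraction * P_ATM                   # partial pressure of CO2
    dissolved_co2 = S_CO2 * p_co2                  # mM of dissolved CO2
    return dissolved_co2 * 10 ** (target_ph - PKA)  # Henderson-Hasselbalch

for ph in (6.6, 6.8, 7.0, 7.2, 7.4, 7.6, 7.8):
    print(f"pH {ph:.1f} -> ~{nahco3_for_target_ph(ph):.1f} mM NaHCO3")
```

For example, a target pH of 7.4 under 5% CO2 corresponds to roughly 23 mM NaHCO3 in this approximation, close to the bicarbonate content of standard DMEM formulations.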
| Induction and estimation of the reprogramming of the mouse embryonic fibroblasts
were observed by using an inverted microscope (DIAPHOT 300; Nikon, Tokyo, Japan) and the photographs were acquired by using a COOLPIX P6000 camera (Nikon).
| Estimation of the embryonic stem cell proliferation
The estimation of the cell proliferation of the ESCs in various pH conditions was performed in the ESM that contained 15% KSR with LIF.
The ESCs were passaged on STO feeders or gelatin-coated dishes and were cultured for 3 days and then the cell number in each medium pH was counted.
| Alkaline phosphatase activity in the pluripotent cell culture and immunofluorescence staining
The cells were fixed with 3.7% paraformaldehyde (Wako) in PBS
| Reverse transcription-polymerase chain reaction
The total RNA was prepared by using the TRIzol reagent (Ambion Life
| Induction of the differentiation of the embryonic stem cells in vitro
In order to induce the differentiation of the ESCs, the cells were passaged onto gelatin-coated 35 mm dishes at a density of 6 × 10 4 cells per dish in ESM that contained 10% FBS without LIF, PD0325901, and CHIR99021. Two days after the start of culture, the following chemicals were added to each differentiation medium: 28 (Fig. 1b). At the induction of each germ lineage, the pH of the medium was adjusted to 6.8, 7.4, and 7.8.
| Statistical analysis
The statistical significance of differences between sample means was determined by using Student's t-test.
| Effects of the external pH on colony formation during the reprogramming of the mouse somatic cells
In order to investigate the effects of the pH during the reprogramming of the mouse somatic cells, the Oct4-GFP-positive colonies were counted at each medium pH. The pH was checked every day and was confirmed to be within 0.1 of the predicted value, indicating little change in the pH throughout the culture (Fig. S1).
The GFP-positive colonies appeared from day 5 to day 10 of culture and were counted at day 17 for each medium pH (Figs 2A, 3A, and S2C). The highest number of GFP-positive colonies was obtained at a pH of 7.4 (Fig. 2A). No colony, however, was observed at a pH of 6.6. The colony number at a pH of 6.8 was up to 15-fold lower than that at a pH of 7.4. The average colony number at each medium pH was divided by the colony number at a pH of 7.4, and each ratio was found to be significantly different from that at a pH of 7.4. Interestingly, a pH difference of only 0.2 caused a significant decrease in the number of colonies formed (Fig. 2A). In order to examine the effects on established pluripotent stem cells, murine ESCs were cultured at each medium pH from 6.8 to 7.8. After 3 days of culture at each pH, the highest number of cells was obtained at a pH of 7.4 (Fig. 2B).
Table 2. Effect of the pH on the timing of the somatic cell reprogramming.
In order to elucidate the effects of the pH on somatic cell reprogramming, the ESCs were replated as single cells at a low cell density (Fig. S3). A similar number of colonies was then obtained across the various pH ranges.
| Effects of the pH on the timing of reprogramming
During the reprogramming of the somatic cells at each medium pH, the number of GFP-positive colonies was counted every day. The fastest appearance of the GFP-positive colonies was observed at 5 days of culture at a pH of 7.8, but the latest appearance of the colonies was at 11 days in the culture at a pH of 6.8 (Table 2).
| Effects of the pH on the pluripotency and differentiation of the embryonic stem cells
The ESCs were cultured in the differentiation-inducing medium leading to ME at a medium pH of 6.8, 7.4, and 7.8. After 3 days of culture in the presence of CHIR and activin A, the mesendodermal markers Mixl1 and T were expressed in the cells at a pH of 7.4 and 7.8, but not at a pH of 6.8 (Fig. S4A). Nestin, a neural marker, was slightly expressed at a pH of 6.8 but was not observed at a pH of 7.4 or 7.8. In order to confirm the further effects of the pH on ESC differentiation, the cells were cultured in neural differentiation medium in the presence of bFGF and retinoic acid. After 3 days of culture, Nestin was expressed in the cells at a pH of 6.8 and 7.4, but not at a pH of 7.8 (Fig. S4A). Under the mesendodermal differentiation medium, the qPCR analysis also showed higher expression of the mesendodermal markers at the higher pH (Fig. 4B). At a pH of 6.8, the cells were scattered and the Oct4-GFP expression in the cells was weak for 3 days after the treatment with the chemicals for mesendodermal differentiation (Fig. 4C). In contrast, the Oct4-GFP expression still remained at a pH of 7.8 (Fig. 4C). Also, the T protein, which is the early mesendodermal marker, was highly expressed at a pH of 7.8 (Fig. 4D).
| DISCUSSION
The pH is well known to fluctuate significantly in cells and tissues and to affect many biological phenomena. In this study, it was found that variation of the pH in the culture medium affects the processes that occur during somatic cell reprogramming and the differentiation of ESCs.
A NaHCO3-containing medium stored at 4°C in air (0.03% CO2) usually shows a higher pH than the pH predicted under 5% CO2 in the incubator at chemical equilibrium. A culture of pluripotent stem cells with high metabolic activity that is dependent on glycolysis shows an immediate decrease in the pH due to the release of lactic acid into the culture. Therefore, the fresh medium was pre-incubated, and the culture medium was changed every day in the process of somatic cell reprogramming to maintain a stable pH in the culture.
An acidic pH is known to suppress the cell cycle. 29 The proliferation of ESCs at a lower pH (6.8-7.2) was decreased compared with that at a pH of 7.4, indicating that a lower pH inhibits the proliferation of ESCs (Fig. 2B). During somatic cell reprogramming, another 4-8 days were needed at a pH of 6.8 and 7.0 for the appearance of GFP-positive cells (Table 2). A high proliferation rate is necessary for the induction of cell reprogramming and the maintenance of pluripotent stem cells, 30 and this delay was considered to be caused by the acidic pH of the medium. These results indicate that the delay in the appearance of colonies and the lower number of colonies at an acidic pH could be caused by inhibition of the cell cycle, resulting in slow or inefficient induction of cell reprogramming.
Morphological differences in the colonies at different pH values were observed during the induction of cell reprogramming and the maintenance of the ESCs: the cells were dispersed at a high pH (7.6-7.8) and compacted at a low pH (6.6-7.2) (Figs 3, S2C and S3A). In oligodendrocytic precursor cells, an acidic pH affects cell migration. 31 Additionally, in cancer cells, not only migration but also vesicle trafficking, contraction, invasion, and metastasis are affected by the pH. 12 The compacted morphology of the colonies at an acidic pH might be caused by the inhibition of cell migration.
The variation of the pH in the medium affected early cell differentiation into ME, which is the precursor of the mesoderm and endoderm, and into NE. Mixl1 and Brachyury (T), which allow cells to differentiate into the mesoderm and endoderm, are observed in the primitive streak of embryos at the gastrula stage. [32][33][34][35] The Oct4 and Sox2 genes are known to be inducers of mesendodermal and neuroectodermal cell differentiation, respectively, in early ESC differentiation. 28 In this experiment, Oct4-GFP expression at a pH of 7.8 also indicates the direction of cell differentiation towards the mesendodermal cells (Fig. 4C). Previously, it was shown that a high pH in culture (7.1 and 7.4, rather than 6.8) enhances cardiac differentiation. 16 In this study, the cells showed a much higher expression of Mixl1 at a pH of 7.8. In addition, under the mesendodermal differentiation condition, Sox2 and Nestin expression were relatively higher at a lower pH. However, under the neuroectodermal differentiation condition, the expression of Nestin was not inhibited in any pH range, in contrast to mesendodermal differentiation (Fig. 4A).
These results indicate that the pH of the culture affects cell differentiation across a broad range of pH levels, and that a low pH specifically inhibits ESC differentiation into the mesendoderm. The differentiation direction of cells, that is, whether they differentiate into the progenitors of ME or NE, is determined by their environment. 28 These results suggest that pluripotent stem cells define the direction of cell differentiation in culture and that the environmental pH is one of the cues that determines this directional property.
The present study indicates that the extracellular pH affects cell reprogramming and cell differentiation. There are many pathways through which the pH could affect these processes. For example, a low pH downregulates cell proliferation by inducing p53 activation and p53-dependent cell cycle inhibition. 29,36 Furthermore, the inhibition of p53 supports the establishment of iPSCs. 37 Previous articles 29,36,37 thus have indicated that there is a close relationship between the effects of the pH on cell reprogramming and the cell cycle. In addition, it should also be considered that the effects of the intracellular pH on cell physiology might act through organelles, such as the nucleus, mitochondria, and endoplasmic reticulum, as well as through internal epigenetic regulation. 10 Although this study examined the effects of the pH in vitro, fluctuations in pH are also considered to affect cells in vivo.
ACKNOWLEDGEMENTS
The materials that were used for the experiments were kindly pro- | 2018-04-03T05:43:16.788Z | 2016-12-22T00:00:00.000 | {
"year": 2016,
"sha1": "2a61efc65350cde3b385b87b5196131012a66e72",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/rmb2.12011",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2a61efc65350cde3b385b87b5196131012a66e72",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
3841371 | pes2o/s2orc | v3-fos-license | Methods for estimating the burden of antimicrobial resistance: a systematic literature review protocol
Background Estimates of the burden of antimicrobial resistance (AMR) are needed to ascertain AMR impact, to evaluate interventions, and to allocate resources efficiently. Recent studies have estimated health, cost, and economic burden relating to AMR, with outcomes of interest ranging from drug-bug resistance impact on mortality in a hospital setting to total economic impact of AMR on the global economy. However, recent collation of this information has been largely informal, with no formal quality assessment of the current evidence base (e.g. with predefined checklists). This review therefore aims to establish what perspectives and resulting methodologies have been used in establishing the burden of AMR, whilst also ascertaining the quality of these studies. Methods The literature review will identify relevant literature using a systematic review methodology. MEDLINE, EMBASE, Scopus and EconLit will be searched utilising a predefined search string. Grey literature will be identified by searching within a predefined list of organisational websites. Independent screening of retrievals will be performed in a two-stage process (abstracts and full texts), utilising a pre-defined inclusion and exclusion criteria. Data will be extracted into a data extraction table and descriptive examination will be performed. Study quality will be assessed using the Newcastle-Ottawa scales and the Philips checklists where appropriate. A narrative synthesis of the results will be presented. Discussion This review will provide an overview of previous health, cost and economic definitions of burden and the resultant impact of these different definitions on the burden of AMR estimated. The review will also explore the methods that have been used to calculate this burden and discuss resulting study quality. This review can therefore act as a guide to methods for future research in this area. Systematic review registration PROSPERO CRD42016037510 Electronic supplementary material The online version of this article (doi:10.1186/s13643-016-0364-8) contains supplementary material, which is available to authorized users.
Background
Antimicrobial resistance (AMR) can be defined as the phenomenon in which microorganisms persist in the presence of antimicrobials, which are commonly used to prevent and/or treat infectious disease. AMR is a cause for concern within the UK and globally, due to its current negative impact on population health and its great potential for further harm [1,2]. AMR-associated burden can be defined as AMR impact on health (mortality or morbidity), impact on healthcare and patient costs, or impact on the economy (labour force impact, productivity impact or opportunity cost), depending on study perspective. The AMR review, chaired by Jim O'Neill, has recently published estimates of potential future AMR burden, for example stating that global gross domestic product (GDP) loss over the next 40 years could be as great as $3 trillion [1]. These estimates have since been cited by policy makers and the media [3,4], showing the demand for estimates quantifying the current and potential future problem that AMR poses.
Accurate estimates of disease-related burden are needed for policy makers to establish disease-related resource needs and advocate for appropriate levels of funding, and they are critical inputs for any health economic evaluation of AMR interventions.
Recent descriptive review articles have discussed methods of burden estimation in the context of AMR, citing a few articles as examples of different methodologies [5][6][7]. However, since the 2012 rapid review update of a previous systematic review [8,9], there has been no systematic review of the estimation of burden associated with AMR. None of the aforementioned reviews formally assesses the quality of study methodology, which is needed to highlight methodological issues in establishing the burden of AMR. The 2012 rapid review by Smith and Coast [8] concluded that the evidence base suggests the burden of AMR is relatively modest due to the narrow perspective taken by most studies, and that a wider societal perspective was needed to capture the true impact. However, with more recent work taking a wider perspective on AMR burden [1] and many more research articles being published in this area in recent years, a new assessment of the current estimates of both health and economic AMR burden is required.
The aims of this systematic review include the following: (i) to establish what perspectives and resulting methodologies have been used in establishing the burden of AMR, (ii) to see how this impacts on the burden estimates given and (iii) to assess the quality of these studies.
Research question
What perspectives and resulting methodologies have been used in establishing the burden of AMR? Figure 1 depicts an overview of the study procedure.
Study eligibility
Any studies that aim to quantify the burden of AMR within humans will be considered in this review, and this includes studies across any microbes, infections and country settings.
The modified PICO [10] inclusion and exclusion criteria to be applied at the review stages can be found in Table 1.
Search strategy
The methods used in this systematic review are in line with the PRISMA guidelines [11]. In line with previously published protocols [12], a completed copy of the PRISMA-P checklist has been provided (see Additional file 1).
The search period will be restricted from 2013 onwards; this date was chosen to avoid retrieval duplication with Smith and Coast [8]. Ovid "Medline and EMBASE", Scopus and EconLit will be searched, along with grey literature from predetermined agency websites. The following agency websites were defined after consulting a group of AMR researchers; their content will be searched for relevant grey literature. It has been previously stated that many papers do not mention AMR generally, but rather specific microbes [8]. In an attempt to tackle this, an additional 13 clinically relevant bacteria will be highlighted in the search. These can be identified in the search string stated below (note that this is in the format for Scopus; the same terms are to be reformatted for the OVID and EconLit searches): ((TITLE ((excess OR associated OR attributable) W/2 (burden OR morbidity OR mortality OR cost*)) OR ABS ((excess OR associated OR attributable) W/2 (burden OR morbidity OR mortality OR cost*))) OR (TITLE ((economic OR clinical OR global) W/2 (impact OR outcome* OR burden OR cost*)) OR ABS ((economic OR clinical OR global) W/2 (impact OR outcome* OR burden OR cost*)))) AND ((ALL (("antibiotic" OR "antimicrobial" OR "multidrug" OR "microbial-drug") PRE/1 resistan*)) OR ((TITLE (enterococc* OR escherichia OR streptococc* OR staphylococc* OR klebsiealla OR pseudomonas OR neisseria OR chlamydia OR clostridi* OR mycobacteri* OR "gram-positive" OR "gram-negative") OR ABS (enterococc* OR escherichia OR streptococc* OR staphylococc* OR klebsiealla OR pseudomonas OR neisseria OR chlamydia OR clostridia* OR mycobacteri* OR "gram-positive" OR "gram-negative")) AND ((TITLE (susceptib* OR nonsusceptib* OR resistan*) OR ABS (susceptib* OR nonsusceptib* OR resistan*)) OR (ALL (("antibiotic" OR "antimicrobial" OR "multidrug" OR "microbial-drug") PRE/1 resistan*))))) The lead reviewer (NN) will review all abstracts and full texts. Independent reviewers will perform a parallel review of the abstracts and full texts, with each of these reviewers being assigned a percentage of the total retrieval items. Any discrepancies will be discussed and re-examined until agreement is reached.
Quality assessment
Risk of bias in individual studies will be assessed using the Newcastle-Ottawa scales for cohort and case-control studies [13], whilst the Philips checklist will be used for economic models [14]. These tools were chosen as the focus of this review is on study methodology rather than reporting standards.
Risk of bias across studies will be assessed in two groups: studies looking at health burden and studies looking at all other burden. Each will be assessed simply on the sign and significance of the outcome, due to the expected heterogeneity in studies (outcome, infection, resistance).
Data collection and analysis
Data will be collected by the lead reviewer (NN). Data will be inputted into a standardised data extraction table (Excel) and independently checked to ensure quality.
The following information will be extracted: study identifiers, study characteristics (perspective, country setting), population characteristics, data setting (hospital or community), study methodology, outcome of interest (mortality, length of stay, cost), results (e.g. resistance has a significant impact on the outcome of interest), stated limitations and information used for risk of bias assessment (informed by the cited checklists).
A descriptive synthesis of the study information and risk of bias, structured around the perspectives (health, health system and economic burden) and related methods used, will be provided. This will include a results table containing individual-level study data and summary graphical representations of study characteristics, such as scatter plots of estimates of excess mortality and monetary cost. We anticipate limited scope for a meta-analysis given the assumed heterogeneous nature of identified outcomes; included studies may differ across perspective, infection site, infection type/causative organism, bug-drug combinations and sub-populations. However, if there are suitable data for one drug-bug combination in similar populations, then forest plots will be constructed utilising the hazard ratio as the comparative outcome [15].
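For illustration, a forest plot of the kind described can be sketched in a few lines of matplotlib; the study names, hazard ratios and confidence intervals below are invented placeholders rather than results of this review.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical hazard ratios (HR) with 95% CIs for one drug-bug combination;
# these numbers are placeholders, not findings of the review.
studies = ["Study A", "Study B", "Study C", "Study D"]
hr = np.array([1.8, 1.3, 2.4, 1.1])
ci_low = np.array([1.2, 0.9, 1.5, 0.7])
ci_high = np.array([2.7, 1.9, 3.8, 1.7])

y = np.arange(len(studies))[::-1]  # top-to-bottom ordering on the y axis
fig, ax = plt.subplots(figsize=(6, 3))
# Horizontal error bars for the CIs, square markers for the point estimates.
ax.errorbar(hr, y, xerr=[hr - ci_low, ci_high - hr],
            fmt="s", color="black", capsize=3)
ax.axvline(1.0, linestyle="--", color="grey")  # HR = 1: no effect of resistance
ax.set_xscale("log")                           # ratios belong on a log scale
ax.set_yticks(y)
ax.set_yticklabels(studies)
ax.set_xlabel("Hazard ratio (log scale)")
plt.tight_layout()
plt.show()
```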
The format of this write-up will be a manuscript which will be submitted for publication in a peer-reviewed journal, it will also contribute to the lead reviewer's (NN) PhD project as part of the National Institute for Health Research Health Protection Research Unit (NIHR HPRU) in Healthcare Associated Infections and Antimicrobial Resistance.
Discussion
Recent estimates suggest that AMR imposes a significant economic burden on the global economy [1], whilst previous reviews have suggested that study outcome and methodology may influence whether AMR is found to impose a significant burden [8]. Yet there has been no literature review which formally assesses the quality of such studies.
Originally, the lead author ran a similar search strategy independently; however, after discussion with co-authors, it was realised that, given the nature of previous reviews and the lack of quality assessment of previous literature, the original study design did not adequately answer the research question or fill the current research gap. Therefore, the original study was halted (results not published in a peer-reviewed outlet) and the study protocol was revised into the protocol presented here. This review will provide an overview of previous health, cost and economic definitions of burden in the context of AMR. The review will also explore the methods that have been used to calculate this burden and discuss the resulting study quality. This review can therefore act as a guide to methods for future research.
"year": 2016,
"sha1": "f7fd195bf0582ce850936f579ab05526fba6f8f8",
"oa_license": "CCBY",
"oa_url": "https://systematicreviewsjournal.biomedcentral.com/track/pdf/10.1186/s13643-016-0364-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f7fd195bf0582ce850936f579ab05526fba6f8f8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254220830 | pes2o/s2orc | v3-fos-license | Zero-Shot Rumor Detection with Propagation Structure via Prompt Learning
The spread of rumors alongside breaking events seriously hinders the truth in the era of social media. Previous studies reveal that, due to the lack of annotated resources, rumors presented in minority languages are hard to detect. Furthermore, unforeseen breaking events not involved in yesterday's news exacerbate the scarcity of data resources. In this work, we propose a novel zero-shot framework based on prompt learning to detect rumors falling in different domains or presented in different languages. More specifically, we first represent a rumor circulating on social media as diverse propagation threads, then design a hierarchical prompt encoding mechanism to learn language-agnostic contextual representations for both prompts and rumor data. To further enhance domain adaptation, we model the domain-invariant structural features from the propagation threads to incorporate structural position representations of influential community response. In addition, a new virtual response augmentation method is used to improve model training. Extensive experiments conducted on three real-world datasets demonstrate that our proposed model achieves much better performance than state-of-the-art methods and exhibits a superior capacity for detecting rumors at early stages.
Introduction
The spread of rumors emerging alongside breaking news is a global phenomenon, which can cause critical consequences for social network users in different lingual contexts. For example, during the unprecedented COVID-19 pandemic, a false rumor claiming that "the vaccine has a chip in it which will control your mind" 1 released by a Muslim cleric went viral on Facebook and Twitter in different languages. Such misleading claims about vaccines are being shared widely in many countries, which confuses the public and undermines their enthusiasm for vaccination. Due to the barriers of domain and language, even human fact-checkers are poor judges of such rumors. Therefore, it is imperative to develop automatic approaches for detecting rumors spread in different languages amid unforeseen breaking events.
Social psychology literature defines a rumor as a story or a statement whose truth value is unverified or deliberately false (Allport and Postman 1947). In this study, we focus on detecting rumors on social media, instead of "fake news" strictly defined as a news article published by a news outlet that is verifiably false. State-of-the-art techniques using deep neural networks (DNNs) (Bian et al. 2020; Lin et al. 2021a; Rao et al. 2021) have promoted the development of rumor detection, but they are all data-driven models that require extensive annotated data for model training. Most corpora are open-domain and presented in English, which makes these models not scalable to emerging events in new languages where only few or no labeled data are available. The zero-shot rumor detection (ZRD) task aims to adapt knowledge learned from the source rumor data to target data without labeled training samples in the target language and domain, as shown in Figure 1. Previous related studies (Du et al. 2021; Tian, Zhang, and Lau 2021) directly utilize pre-trained language models (PLMs) (Devlin et al. 2019) fine-tuned on the ZRD task. However, they formulate zero-shot rumor detection as a cross-lingual text classification problem and detect the single claim post with a heavy task-specific fine-tuning stage, which deviates from the masked language modeling pre-training objective and ignores the domain-invariant interaction of user opinions during the diffusion of rumors. More recently, Lin et al. (2022) propose a contrastive learning framework to detect rumors from different languages and domains, where a small number of target annotations is required. However, it is prone to perform poorly on emerging events propagated in minority languages without any expert annotation, especially in some underdeveloped countries and regions. For breaking events with scarce annotated data in different languages, study of the zero-shot regime is more urgent and practical for rumor detection on social media.
Figure 1: Illustration of the task-specific fine-tuning and the prompt learning paradigms for solving the ZRD task.
In this paper, we focus on exploring efficient prompting with language and domain transfer for zero-shot rumor detection. We assume there are no accessible annotations in the target language and domain, so prompt learning mechanisms (Zhao and Schütze 2021) based on existing multilingual PLMs can be utilized. However, the standard prompt learning paradigm adopts discrete or soft prompts, where the discrete prompt requires native-speaker experts to design rumor-related templates/rules for different languages, and the soft prompt uses optimized token representations trained on a large dataset. Unlike the standard prompt-tuning paradigm, we propose to decouple shared semantic information from the syntactic bias in specific languages based on multilingual PLMs, which could enhance the semantic interaction between the prompt and rumor data. Besides, as the diffusion of rumors generally follows spatial and temporal relations that provide valuable clues on how a claim is transmitted irrespective of specific domains (Zubiaga et al. 2018), we aim to develop a novel prompt learning mechanism that takes such social context into consideration.
To this end, we propose a zero-shot Response-aware Prompt Learning (RPL) framework to detect cross-lingual and cross-domain rumors on social media. More specifically, we first rank responsive posts toward the claim to represent diverse propagation threads. Then a hierarchical prompt encoding mechanism is proposed based on multilingual PLMs, which alleviates the effort of prompt design for different languages. On the other hand, as the propagation structure contains domain-invariant features on how a claim is responded to by users over time, we model the absolute and relative propagation positions to capture the latent structure of the propagation thread for better domain adaptation. To further improve zero-shot model training, we incorporate a new virtual response augmentation mechanism into the prompt learning framework. As there is no public benchmark available for detecting rumors in low-resource languages with propagation threads in tweets, we collected a new rumor dataset corresponding to COVID-19 from Twitter in the Cantonese and Arabic languages. Extensive experiments conducted on three real-world rumor datasets corresponding to COVID-19 confirm that (1) our model yields outstanding performance for detecting zero-shot rumors over the state-of-the-art baselines by a large margin; and (2) our method performs particularly well on early rumor detection, which is crucial for timely intervention and debunking, especially for breaking events.
Related Work
Pioneering studies on automatic rumor detection focused on learning a supervised classifier utilizing features crafted from post contents, user profiles, and propagation patterns (Castillo, Mendoza, and Poblete 2011; Yang et al. 2012; Liu et al. 2015). Subsequent studies proposed new features such as those representing rumor diffusion and cascades (Kwon et al. 2013; Friggeri et al. 2014; Hannak et al. 2014). Zhao, Resnick, and Mei (2015) alleviated the engineering effort by using a set of regular expressions to find questioning and denying tweets. Deep neural networks such as recurrent neural networks (Ma et al. 2016), convolutional neural networks (Yu et al. 2017), and attention mechanisms (Guo et al. 2018) were then employed to learn features from the stream of social media posts. To extract useful clues jointly from content semantics and propagation structures, kernel-learning models (Wu, Yang, and Zhu 2015; Ma, Gao, and Wong 2017), tree-structured recursive neural networks (RvNN) (Ma, Gao, and Wong 2018), self-attention models (PLAN (Khoo et al. 2020), STANKER (Rao et al. 2021)), and graph neural networks (BiGCN) (Bian et al. 2020) have been exploited to encode conversation threads into higher-level representations.
Recently, zero-shot transfer learning techniques have been applied to PLMs to detect fake news (Du et al. 2021; Schwarz, Theóphilo, and Rocha 2020; De et al. 2021) via downstream task-specific fine-tuning. Tian, Zhang, and Lau (2021) utilized PLMs and a self-training loop to adapt the model from the source language to the target language in a multi-step iteration. However, these approaches only consider cross-lingual text classification and face problems such as the task mismatch between pre-training and fine-tuning, and they ignore the domain-invariant propagation patterns in community response. Considering that rumors can be domain-specific and/or presented in different languages, Lin et al. (2022) first introduced supervised contrastive learning for few-shot rumor detection based on propagation structure. However, their few-shot paradigm still relies on a small amount of target data for training, and thus cannot perform well on minority-language rumor data without any expert annotation in the case of emerging topics.
Prompt learning converts downstream tasks into language modeling tasks via textual prompts, which has been found more effective for using PLMs than typical fine-tuning on specific tasks (Brown et al. 2020; Liu et al. 2021a). In recent years, prompt learning has achieved great success in a variety of NLP tasks, such as text classification (Min et al. 2022), semantic parsing (Schucher, Reddy, and de Vries 2021), text generation (Li and Liang 2021), sentiment classification (Seoh et al. 2021), and dialog state tracking (Lee, Cheng, and Ostendorf 2021). Despite the flourishing research on prompting methods, only limited attention has been paid to the low-resource rumor detection task. Different from a few previous multilingual works (Zhao and Schütze 2021; Winata et al. 2021; Lin et al. 2021b) on either discrete or soft (Liu et al. 2021b; Lester, Al-Rfou, and Constant 2021) prompts, in this paper we tune the model's parameters at different granularity levels for language-agnostic rumor prompts, further attending to user interactions in community response for the zero-shot rumor detection task.
Problem Statement and Background
In this work, we define the zero-shot rumor detection task as follows: given a source dataset, classify each event in the target dataset as a rumor or not, where the source and target data are from different languages and domains. Specifically, we define a source dataset for training as a set of events D^s = {C^s_1, C^s_2, ⋯, C^s_M}, where M is the number of source events. Each event C^s = (y, c, T(c)) is a triplet representing a given claim c, which is associated with a veracity label y ∈ {rumor, non-rumor}, and ideally all its relevant responsive microblog posts in chronological order, i.e., T(c) = [x_1, x_2, ⋯, x_m], where m is the number of responsive posts in the conversation thread. We consider a target dataset D^t with a different language and domain from the source dataset for testing, where each event C^t = (y, c, T(c)) shares a similar structure with that of the source.
This task can be formulated as a supervised classification problem that trains a language/domain-agnostic classifier f(⋅) transferring the features learned from the source dataset to the target events, that is, f(C^t | D^s) → y. In this work, we convert rumor detection into a cloze-style masked language modeling problem. For example, given a cloze-style template p (e.g., "For this [MASK] story.") as the prompt, spliced with the claim c into ĉ, standard prompt learning leverages PLMs to obtain the hidden state of the [MASK] token in order to infer the rumor-indicative words that fill in [MASK]. The probability of label y is P(y | ĉ) = g(P([MASK] = v | ĉ) | v ∈ V_y), where V is a set of rumor-related label words, V_y is the subset of V corresponding to y, and g(⋅) is a manual verbalizer that transforms the probability of label words into that of the label. In this way, we can map the words predicted for [MASK] into the veracity label to make a decision on the claim.
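As a minimal sketch of this cloze-style formulation, the snippet below scores the [MASK] position with a masked language model and maps label-word probabilities to class scores through a verbalizer g(⋅). The model name, template, and label words are illustrative assumptions, not the configuration used in this paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative choices (assumptions): a multilingual MLM and English label words.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

verbalizer = {"rumor": ["false", "fake"], "non-rumor": ["true", "real"]}

def classify(claim: str) -> dict:
    # Splice the template with the claim to form the prompted input c-hat.
    prompted = f"For this {tokenizer.mask_token} story. {claim}"
    inputs = tokenizer(prompted, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits           # (1, seq_len, vocab_size)
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    probs = logits[0, mask_pos].softmax(-1)
    # g(.): aggregate the probabilities of each label's words.
    scores = {}
    for label, words in verbalizer.items():
        # Use the first subword id of each label word (sentencepiece vocab).
        ids = [tokenizer(" " + w, add_special_tokens=False)["input_ids"][0]
               for w in words]
        scores[label] = float(probs[ids].sum())
    return scores

print(classify("The vaccine has a chip in it which will control your mind."))
```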
Our Approach
In this section, we introduce our Response-aware Prompt Learning framework for zero-shot rumor detection in cross-lingual and cross-domain settings. Because rumor-related prompt design for different languages can be biased and labor-intensive, we propose to learn language-independent prompts, including the template and verbalizer. On the other hand, as the responsive posts provide a domain-invariant propagation structure for representation learning, we explore how to fuse such community response into the prompt learning framework. Figure 2 illustrates an overview of our proposed model, which includes: 1) Response Ranking, which represents each event as diverse propagation threads following temporal or spatial relations; 2) Hierarchical Prompt Encoding, the backbone that learns language-independent interactions between the prompt and the event using the prior knowledge of multilingual PLMs; 3) Propagation Position Modeling, which equips our proposed prompt-based framework with the latent structure of the propagation thread; and 4) Response Augmentation, which adds noise to responsive posts to improve model training for better robustness.
Response Ranking
To highlight the social context and enhance the contextual representation learning for the event, we propose to attend over evidential responses. The core idea is to rank all the responses based on diverse propagation threads. First, we hypothesize that the attitudes of responsive posts towards the claim become more decided as time goes by; thus the responsive posts can be sorted in chronological or inverted order on the time sequence. Specifically, for the chronological order, responsive posts with earlier time stamps are prioritized, i.e., T(c) = [x_1, x_2, ⋯, x_m], and vice versa for the inverted order on the time sequence, i.e., T(c) = [x_m, ⋯, x_2, x_1]. Besides the perspective of time sequence, inspired by (Ma, Gao, and Wong 2018; Bian et al. 2020), we further represent the propagation thread as a tree structure ⟨G, E⟩, where G refers to a set of nodes each representing a responsive post of c, and E is a set of directed edges conforming to the responsive relations among the nodes in G. We scrutinize search algorithms on the tree structure to select more evidential posts in depth-first and breadth-first order. Specifically, the depth-first search studies the propagation patterns as information flows from ancestor to children nodes, while the breadth-first search gives priority to the interaction of user opinions among sibling nodes. Taking the propagation tree in Figure 2 as an example, the depth-first order of the response ranking would be [x_1, x_2, x_5, x_3, x_4, x_6]; for the breadth-first order, it would be [x_1, x_3, x_2, x_4, x_5, x_6].
In this way, concerning perspectives of time sequence or propagation tree, we could investigate the importance of different responses T (c) on the verdict of a claim.
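A minimal sketch of the four ranking strategies is given below, assuming the propagation tree is supplied as a list of parent indices with posts already in chronological order; this data layout is an illustrative assumption rather than the authors' implementation.

```python
from collections import defaultdict, deque

def rank_responses(posts, parents, order="breadth"):
    """Rank responsive posts x_1..x_m of a claim (index 0).
    posts:   list of post texts; index 0 is the claim c.
    parents: parents[i] is the index of the post that posts[i] replies to.
    order:   'chrono' | 'inverted' | 'depth' | 'breadth'.
    """
    ids = list(range(1, len(posts)))            # responses in chronological order
    if order == "chrono":
        ranked = ids
    elif order == "inverted":
        ranked = ids[::-1]
    else:
        children = defaultdict(list)
        for i in ids:
            children[parents[i]].append(i)      # earlier replies listed first
        if order == "depth":                    # depth-first traversal
            ranked, stack = [], list(reversed(children[0]))
            while stack:
                node = stack.pop()
                ranked.append(node)
                stack.extend(reversed(children[node]))
        else:                                   # breadth-first traversal
            ranked, queue = [], deque(children[0])
            while queue:
                node = queue.popleft()
                ranked.append(node)
                queue.extend(children[node])
    return [posts[i] for i in ranked]

# Toy thread matching Figure 2: x2, x5 descend from x1; x4, x6 descend from x3.
posts = ["c", "x1", "x2", "x3", "x4", "x5", "x6"]
parents = [None, 0, 1, 0, 3, 2, 4]
print(rank_responses(posts, parents, "depth"))    # x1 x2 x5 x3 x4 x6
print(rank_responses(posts, parents, "breadth"))  # x1 x3 x2 x4 x5 x6
```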
Hierarchical Prompt Encoding
Directly utilizing existing tokens from the vocabulary, such as expert terms or language-specific slang, as the template generally leads to a bias towards the syntax of specific languages. To bridge the gap between languages in this task, the template should not depend on any specific language. Although the soft prompt is a potential way to solve this problem, its trainable tokens require enough target rumor data for training, which is unavailable in zero-shot regimes. To this end, we aim to implicitly disentangle the semantic information shared across languages from language-specific syntactic knowledge by leveraging the priors of multilingual PLMs. Previous literature (Jawahar, Sagot, and Seddah 2019; Rao et al. 2021; Huang et al. 2022) has shown that the lower layers of PLMs capture syntactic-level features while the upper layers model semantic-level features. Therefore, we present a Hierarchical Prompt Encoding (HPE) mechanism for language-independent representation learning of the template and the event at the syntactic and semantic levels. In our approach, we hypothesize that semantic information can be shared across different languages even though syntax is language-dependent.
SynEncoder Layer. At the syntactic level, to obtain intermediate syntax-independent embeddings, we copy and freeze the parameters of the lower k layers of the multilingual PLM encoder to encode the template and the event, respectively. Specifically, the original template p is syntactically mapped into a shared vector space: X_p = SynEncoder(p), where X_p ∈ R^{|p|×d} is the template embedding matrix and d is the dimension of the output state of the SynEncoder.
For an event C, as all the responsive posts are presented in the same language and domain as the claim at both the training and testing stages, we concatenate them and feed them into the same frozen SynEncoder to obtain the embeddings of the event: X_cr = SynEncoder([c, T(c)]), where [⋅, ⋅] denotes the splicing operation, X_cr ∈ R^{o×d} is the embedding matrix of the event (i.e., the claim with its community response), and o is the maximum sequence length of the PLM. Based on the obtained response ranking, the contextually coherent posts can be retained, from the perspectives of temporal and spatial relations respectively, under the input length restriction of PLMs (Devlin et al. 2019).
SemEncoder Layer. At the semantic level, we initialize a trainable semantic encoder with the (k + 1)-th to the top layer of the PLM. We then concatenate and refine the output states of the template and the event, on top of the frozen SynEncoder, to further model the semantic interaction between the template and the event: H = SemEncoder([X_p, X_cr]). (4) In summary, we map the simple English discrete prompt into a shared embedding space at the syntactic level using the prior knowledge of the SynEncoder, and then feed it into the SemEncoder for semantic interaction with the event. On top of the SemEncoder, we present a prototypical verbalizer that maps the output state H^m of the [MASK] token into the label y without manual rumor-related label words for specific languages, as described in the Model Training section.
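A minimal sketch of the SynEncoder/SemEncoder split is shown below, assuming a 12-layer multilingual PLM from the Hugging Face Transformers library with the lower k = 6 layers frozen; the model choice is an assumption for illustration.

```python
from transformers import AutoModel

K = 6  # number of frozen lower (SynEncoder) layers

plm = AutoModel.from_pretrained("xlm-roberta-base")  # illustrative choice

# Freeze the embedding table and the lower K layers: they act as the
# syntactic-level SynEncoder shared across languages.
for param in plm.embeddings.parameters():
    param.requires_grad = False
for layer in plm.encoder.layer[:K]:
    for param in layer.parameters():
        param.requires_grad = False

# The upper layers remain trainable: they form the semantic-level SemEncoder
# that models the interaction between the template and the event.
trainable = sum(p.numel() for p in plm.parameters() if p.requires_grad)
total = sum(p.numel() for p in plm.parameters())
print(f"trainable: {trainable}/{total} parameters")
```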
Propagation Position Modeling
To bridge prompt learning and propagation structures for zero-shot rumor detection on social media, we further propose Absolute and Relative Propagation Position Modeling to inject the propagation information into the tunable SemEncoder for domain-invariant structural feature extraction at the semantic level.
For the Absolute Propagation Position, we exploit the propagation path of a responsive post in the propagation tree, which is complementary to its sequential counterpart (Devlin et al. 2019). Specifically, given a token q from a post x_i, we treat the claim c of the event as the root and use the length of the responsive path from the current post to the root as the absolute propagation position: pos_abs(q) = Dist(x_i, c), i.e., the number of responsive hops from x_i to c. Note that in this work, tokens in the same post share the propagation position of that post in the propagation tree. We thus update the input representation of the token q for the tunable SemEncoder by summing the corresponding token embedding in X_cr and its absolute position embedding, where the absolute position embeddings are trained with learnable parameters (Gehring et al. 2017).
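The sketch below illustrates how depth-based absolute position ids could be computed and added to token embeddings, with tokens of the same post sharing the post's depth; the tensor shapes and embedding size are illustrative assumptions.

```python
import torch
import torch.nn as nn

def tree_depths(parents):
    """Depth of each post in the propagation tree; parents[0] is None (claim)."""
    depths = [0] * len(parents)
    for i, p in enumerate(parents):
        if p is not None:
            depths[i] = depths[p] + 1
    return depths

d, max_depth = 768, 32                      # assumed embedding size and depth cap
abs_pos_emb = nn.Embedding(max_depth, d)    # learnable position table

parents = [None, 0, 1, 0, 3, 2, 4]          # toy tree from the ranking example
depths = tree_depths(parents)               # -> [0, 1, 2, 1, 2, 3, 3]

# Every token inherits the propagation position of the post it belongs to.
post_of_token = torch.tensor([0, 0, 1, 1, 1, 3, 3])   # post index per token
token_emb = torch.randn(post_of_token.size(0), d)     # stand-in for rows of X_cr
pos_ids = torch.tensor([depths[p] for p in post_of_token.tolist()])
inputs = token_emb + abs_pos_emb(pos_ids)             # summed input to SemEncoder
print(inputs.shape)                                    # torch.Size([7, 768])
```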
For the Relative Propagation Position, we focus on the local context of a responsive post in the propagation tree as its relative propagation position. As each post in the propagation tree may trigger a set of responsive posts, we aim to capture the relative user opinions among responsive posts in such a subtree structure. Specifically, for a post x_i, we consider the relative posts with five relationships in the subtree as the relative propagation position: 1) Parent(+); where +/− denotes that the relative post comes earlier/later than the current post in the subtree. We then extend the self-attention computation to consider the pairwise relationships among posts in the same subtree and project the relative propagation position into the SemEncoder, following the practice of Shaw, Uszkoreit, and Vaswani (2018). In this way, the relative propagation patterns in a local subtree can be captured explicitly as users share opinions towards the same subtree root, allowing inaccurate information to be cross-checked.
Response Augmentation
Since the model could suffer from noisy responses, we propose to enhance prompt learning by creating additional adversarial examples. We present a new virtual response augmentation algorithm, ViRA, a variant of the virtual adversarial algorithm (Miyato et al. 2018). To create an adversarial example, we apply the Fast Gradient Value (Rozsa, Rudd, and Boult 2016) to approximate a worst-case perturbation, where the gradient is normalized to represent the direction that significantly decreases the model's performance, and a norm constraint ensures the approximation is reasonable. However, the value ranges (norms) of the embedding vectors vary among different data and models. The variance gets larger for bigger models with billions of parameters, leading to some instability in adversarial training. To this end, we first apply layer normalization (Ba, Kiros, and Hinton 2016) on top of the frozen SynEncoder to normalize the embeddings, then perform a mask operation to exclude the template and claim embeddings, and lastly add the perturbation to the normalized embedding vectors of the responsive posts. The adversarial noise enables the model to handle extensive noisy responsive posts and can be regarded as a response augmentation mechanism.
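A minimal sketch of this perturbation step is given below, assuming embedding tensors of shape (batch, sequence, dimension) and a small perturbation size epsilon; it illustrates the Fast Gradient Value idea with layer normalization and response masking, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def virtual_response_augment(embeds, response_mask, loss, epsilon=1e-3):
    """Return response embeddings perturbed in the worst-case direction.
    embeds:        (batch, seq_len, d) output of the frozen SynEncoder,
                   with requires_grad=True so gradients can be taken w.r.t. it.
    response_mask: (batch, seq_len) 1 for responsive-post tokens,
                   0 for template and claim tokens (left unperturbed).
    loss:          scalar model loss computed from `embeds`.
    """
    # Gradient of the loss w.r.t. the embeddings (Fast Gradient Value).
    grad = torch.autograd.grad(loss, embeds, retain_graph=True)[0]
    # Normalize per token so the perturbation norm stays controlled.
    direction = grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    # Layer-normalize the embeddings for stability, then add masked noise.
    normed = F.layer_norm(embeds, embeds.shape[-1:])
    perturbed = normed + epsilon * direction * response_mask.unsqueeze(-1)
    return perturbed.detach()

# Toy usage with stand-in tensors (illustrative only):
embeds = torch.randn(2, 10, 768, requires_grad=True)
mask = torch.zeros(2, 10)
mask[:, 4:] = 1                          # pretend the last tokens are responses
loss = embeds.pow(2).mean()              # stand-in for the model's loss
aug = virtual_response_augment(embeds, mask, loss)
print(aug.shape)                         # torch.Size([2, 10, 768])
```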
Model Training
On top of the SemEncoder, where the template and an event sample (i.e., a claim and its responsive posts) are transformed into a shared semantic latent space, inspired by Prototypical Networks (Snell, Swersky, and Zemel 2017; Lin, Yan, and Chen 2021), we further introduce a prototypical verbalizer paradigm to prevent the rumor-related label words from heavily relying on language-specific expert words. The core idea is to utilize the representative features of instances from the same class to encapsulate event-level semantic features, instead of using language-dependent label words. Given the [MASK] token representation H^m_i of a training example C_i, we minimize a prototypical loss: L_proto = −log( exp(S(H^m_i, l_y)) / Σ_{y′} exp(S(H^m_i, l_{y′})) ), where y is the ground truth of H^m_i, S denotes the normalized cosine similarity score, and l_y denotes the learnable prototype vector of class y, which is the cluster representative of the embedded support points belonging to that class. By optimizing the objective L_proto, rumor features are drawn close to the corresponding rumor prototype in semantic space and pushed away from the non-rumor prototype.
In addition, we adopt a contrastive loss to reduce the intra-class variance and enlarge the inter-class variance of instances in a batch: L_con = −Σ_j (1_[i≠j] 1_[y_i=y_j] / (B_{y_i} − 1)) log( exp(S(H^m_i, H^m_j)) / Σ_k 1_[i≠k] exp(S(H^m_i, H^m_k)) ), where B_{y_i} is the number of source examples with the same label y_i as the event C_i in a batch, and 1 is the indicator function.
We jointly train the model with the prototypical and contrastive objectives: L = αL_proto + (1 − α)L_con, where α is a trade-off parameter set to 0.5 in our experiments. We then generate a pseudo augmented example for C_i based on response augmentation, which is again fed into the tunable SemEncoder to compute a new loss L̃. Finally, we use the average loss L_avg = (L + L̃)/2 for back-propagation (Collobert et al. 2011) with the AdamW optimizer (Loshchilov and Hutter 2018). We set the layer number k of the SynEncoder to 6. The learning rate is initialized as 1e-5. Early stopping (Yao, Rosasco, and Caponnetto 2007) is applied to avoid overfitting.
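A compact sketch of the two objectives and their combination is shown below, using the cosine-similarity form of S described above; the batch layout and tensor sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def prototypical_loss(h_mask, labels, prototypes):
    """h_mask: (B, d) [MASK] states; prototypes: (C, d) learnable vectors."""
    sim = F.cosine_similarity(h_mask.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)
    return F.cross_entropy(sim, labels)   # -log softmax over class prototypes

def contrastive_loss(h_mask, labels):
    """Supervised contrastive loss over a batch of (B, d) [MASK] states."""
    h = F.normalize(h_mask, dim=-1)
    sim = h @ h.t()                                   # pairwise cosine similarity
    B = h.size(0)
    eye = torch.eye(B, dtype=torch.bool)
    logits = sim.masked_fill(eye, -1e9)               # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    same = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye).float()
    pos_counts = same.sum(1).clamp(min=1)             # B_{y_i} - 1 per anchor
    return -(log_prob * same).sum(1).div(pos_counts).mean()

# Toy usage (illustrative only):
h = torch.randn(8, 768)                               # stand-in [MASK] states
y = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])
protos = torch.randn(2, 768, requires_grad=True)      # rumor / non-rumor
alpha = 0.5
loss = alpha * prototypical_loss(h, y, protos) + (1 - alpha) * contrastive_loss(h, y)
print(loss.item())
```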
Datasets
We utilize four public datasets, TWITTER, WEIBO (Ma et al. 2016), Twitter-COVID19 and Weibo-COVID19, for experiments. TWITTER and Twitter-COVID19 are English rumor datasets with conversation threads in tweets, while WEIBO and Weibo-COVID19 are Chinese rumor datasets with a similar composition structure. Furthermore, as there are no public benchmarks available for detecting rumors in low-resource languages with propagation structure in tweets, we constructed a new low-resource rumor dataset, CatAr-COVID19. Specifically, we resort to two COVID-19 rumor datasets (Alam et al. 2021; Ke et al. 2020), which only contain multilingual textual claims in Cantonese and Arabic without propagation threads. We extend each claim by collecting its propagation thread via the Twitter academic API in Python. Finally, we annotated the claim tweets by referring to the labels of the events from the original datasets 2. Statistics of the five datasets are shown in Table 1.
Experimental Setup
We compare our model with several state-of-the-art zero-shot rumor detection systems: 1) Vanilla-Finetune: fine-tune the model for classification by adding a task-specific linear layer on the [CLS] token on top of PLMs (Devlin et al. 2019); 2) Translate-Finetune: utilize rumor data in the source language for training and translate the claim into the target language for testing (Du et al. 2021); 3) Contrast-Finetune: we employ and extend an existing few-shot learning technique, supervised contrastive learning, for fine-tuning on the source data in the zero-shot scenario; 4) Adapter: fix the parameters of PLMs and add only a few trainable parameters per task within a residual adapter (Houlsby et al. 2019); 5) Parallel-Adapter: an adapter-based variant (He et al. 2021) that transfers the parallel insertion of prefix tuning into adapters; 6) Source-Prompt: a prompt-based tuning method (Lin et al. 2021b) that both trains and tests the model with prompts in the source language; 7) Translate-Prompt: train on prompts in the source language and test on target-lingual prompts after translation (Zhao and Schütze 2021); 8) Soft-Prompt: tunable tokens (Lester, Al-Rfou, and Constant 2021) are utilized as the prompt instead of discrete tokens; 9) RPL-*: our proposed response-aware prompt learning framework with the diverse propagation threads, i.e., chronological (Cho) and inverted (Inv) order in time sequence, and depth-first (Dep) and breadth-first (Bre) order in tree structure.
In this work, we consider the most challenging case, i.e., detecting events (i.e., the target) from a new domain and language. Specifically, we use the well-resourced TWITTER (Ma, Gao, and Wong 2017) and WEIBO (Ma et al. 2016) datasets as the source data, and the Weibo-COVID19, Twitter-COVID19 and CatAr-COVID19 datasets as the target. We use accuracy and macro-averaged F1, as well as class-specific F1 scores, as the evaluation metrics.

Rumor Detection Performance

Table 2 shows the performance of our proposed method versus all the compared methods on the Weibo-COVID19, Twitter-COVID19 and CatAr-COVID19 datasets with predetermined training datasets. From Table 2, it is observed that the performance of the baselines in the first group is obviously poor due to heavy reliance on downstream classification objectives with a task-related linear layer added on top of PLMs, which is randomly initialized and too easily overfits the source data to generalize to the target.
The prompt-based baselines in the third group are relatively better than the adapter-based baselines in the second group, though Soft-Prompt is somewhat related to the adapter style in its form of parameter tuning (He et al. 2021). However, their performance is still limited, for the following reasons: 1) Source-Prompt lacks cross-lingual transferability. Generally, multilingual PLMs cannot deal well with the cross-lingual combination of a template in the source language and a claim post in the target language, since such a data format is rarely seen in the pre-training stage. 2) Translate-Prompt easily suffers from error propagation due to machine translation quality, and language-agnostic knowledge is not decoupled and transferred from the source template to the target. 3) Soft-Prompt requires abundant target rumor data for sufficient optimization, which cannot be satisfied in the zero-shot setting.
In contrast, our proposed RPL-based approaches achieve superior performance among all the baselines, which suggests their strong generalization for zero-shot transfer between different languages and different domains. It is observed that the performance of RPL-Inv is relatively better than that of RPL-Cho. We speculate that the reason is that questioning posts at later stages of propagation could be stronger indicators of whether the claim is rumorous. Although it achieves promising performance, RPL-Dep does not achieve the expected best performance because, as the claim propagates, more semantic and structural information becomes available but noise increases simultaneously, especially in relatively deep conversations or arguments. Overall, RPL-Bre obtains stable and excellent performance among the four RPL-based variants by making full use of the subtree-structure property via breadth-first ranking and propagation position modeling for response fusion, which verifies that inaccurate information on social media can be "self-checked" by comparison with responsive posts on the same topic.
Ablation Study
We perform ablation studies by discarding some important components of our best-performing approach RPL-Bre on CatAr-COVID19: 1) w/o RR: We simply encode the claim without the Response Ranking (RR) strategies that consider the social contexts in community response. 2) w/o APP: We discard the Absolute Propagation Position (APP). 3) w/o RPP: We discard the Relative Propagation Position (RPP). 4) w/o ViRA: We neglect the Virtual Response Augmentation (ViRA) mechanism. 5) w/o HPE: Instead of our proposed Hierarchical Prompt Encoding (HPE) mechanism, we devise our backbone as two tiers of transformers: one for encoding all the responsive posts independently, and another for processing the sequence of posts using representations from the first transformer (i.e., PLMs), where the second-tier transformer has a similar architecture to PLMs but has only 2 layers and its parameters are initialized randomly. 6) w/o PV: We design a manual verbalizer for label mapping, to replace the Prototypical Verbalizer (PV) for model training.

Table 3: Ablation studies on our proposed model.
As demonstrated in Table 3, the ablative models suffer different degrees of performance degradation, indicating the effectiveness of our proposed components for adapting features learned from source rumor data to the target. Specifically, RPL-Bre's performance significantly decreases without response ranking due to the loss of collective wisdom on social media. Both w/o APP and w/o RPP also perform worse than RPL-Bre, suggesting that both perspectives of propagation position modeling are comparably helpful for extracting domain-invariant propagation patterns in zero-shot regimes. RPL-Bre improves over w/o ViRA, which implies that ViRA keeps our approach from being compromised when the input length is limited and responses may be noisy. Moreover, w/o HPE leads to considerable performance degradation, which implies that the prompt encoding framework ingeniously preserves prior syntactic and semantic knowledge from the PLMs and contributes to more accurate zero-shot rumor predictions with language disentanglement. Compared with RPL-Bre, the performance of w/o PV also significantly decreases, highlighting the importance and complementarity of the prototypical paradigm in our framework for language and domain adaptation.
Early Detection
Early alerts of rumors can prevent the wide spread of rumorous content. By setting detection checkpoints of "delays," defined either as the count of reply posts or as the time elapsed since the first posting, only content posted no later than each checkpoint is available for model evaluation. The performance is evaluated by Macro F1 obtained at each checkpoint. To satisfy each checkpoint, we incrementally scan test data in order of time until the target time delay or post volume is reached. Figure 3 shows the early detection performance of our approach versus Soft-Prompt, PLAN, STANKER, BiGCN and RvNN at various deadlines. To make fair comparisons, the inputs of all baselines are encoded with the same multilingual PLM. We observe that our proposed RPL-based approach outperforms the other baselines throughout the whole lifecycle and reaches a relatively high Macro F1 score at a very early period after the initial broadcast. One interesting phenomenon is that our method needs only about 20 posts on CatAr-COVID19 and 4 hours on Twitter-COVID19 to reach saturated performance, indicating the advanced response fusion strategy and remarkably superior early detection performance of our method.

Figure 4 shows the effect of the layer number of the SynEncoder on zero-shot rumor detection performance, with CatAr-COVID19 as the target and TWITTER and WEIBO as the source data, respectively. We can observe that when the SynEncoder is initialized with the lower 4 layers of PLMs, it is still biased toward specific languages because mainly surface features are learned. Since PLMs can unearth rich linguistic features in the lower 6 layers, the best performance is obtained when k is set to 6 (i.e., the setting in our model), which is in line with the finding of Jawahar, Sagot, and Seddah (2019). After that, as k continues to increase, although the capacity to decouple shared semanteme from specific linguistic features is enhanced, the number of SemEncoder layers with prior semantic knowledge activated for the interaction of prompts and events decreases; thus the generalization ability of the model to rumor data in different domains is limited, resulting in a fluctuating decline in performance.
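As a concrete illustration of the checkpoint truncation described at the start of this subsection, the evaluation can be sketched as follows; the data layout and all names here are assumptions, not the authors' code:

    from datetime import timedelta

    def truncate_event(claim, replies, max_posts=None, max_hours=None):
        """Keep only content posted no later than the checkpoint."""
        kept = [claim]
        for post in sorted(replies, key=lambda p: p["time"]):
            if max_posts is not None and len(kept) >= max_posts:
                break
            if max_hours is not None and \
                    post["time"] - claim["time"] > timedelta(hours=max_hours):
                break
            kept.append(post)
        return kept

Macro F1 is then computed from predictions made on each truncated view of the test set, one score per checkpoint.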
Conclusion and Future Work
In this paper, we propose a zero-shot Response-aware Prompt Learning framework to bridge language and domain gaps in rumor detection. We present a prompt-based approach that avoids reliance on language-specific rumor prompt engineering, with effective response fusion strategies to incorporate influential and structural propagation threads for domain adaptation. Results on three real-world benchmarks confirm the advantages of our zero-shot detection model. For future work, we plan to study specialized PLMs for rumor detection to better utilize the wisdom of crowds and circumvent the sequence length limit, and then to collect data for and apply our model to more languages and domains.

We set the batch size to 16. The max sequence length of the syntactic encoder is set to 512. We use accuracy and macro-averaged F1 score, as well as class-specific F1 scores, as the evaluation metrics. For each experiment reported in this work, we run the model with 10 different random seeds and report the average results. We hold out 10% of the target test datasets for tuning the hyper-parameters. The number of trainable parameters is 278,197,248 and the number of total parameters is 513,123,200 for our model. We run all of our experiments on a single NVIDIA Tesla V100 GPU. We implement our model with PyTorch and HuggingFace Transformers (Wolf et al. 2020), and the template prompt is in English, generated by the code released by Gao, Fisch, and Chen (2021).
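For reference, the hyper-parameters reported above can be collected into a single configuration; the dictionary below is purely illustrative and its key names are not taken from the authors' code:

    config = {
        "syn_encoder_layers": 6,      # k, lower layers of the PLM
        "learning_rate": 1e-5,
        "batch_size": 16,
        "max_seq_length": 512,
        "alpha": 0.5,                 # prototypical/contrastive trade-off
        "optimizer": "AdamW",
        "early_stopping": True,
        "num_random_seeds": 10,
    }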
RPL Algorithm
Algorithm 1 presents the training and testing procedure of our approach.
Supplemental Experiments
Zero-shot Rumor Detection on CatAr-COVID19

We pick the Arabic and Cantonese claims, respectively, with their propagation threads from the CatAr-COVID19 dataset to conduct a supplementary experiment for our proposed model, using the WEIBO dataset as the source data, as shown in Table 5. Unlike Cantonese and Traditional Chinese, which share many cognates with Simplified Chinese, Arabic is generally characterized by complicated grammar and long, obscure sentences, as shown in Figure 5. We find that our models perform well on the low-resource Arabic data related to COVID-19 when leveraging source data in Simplified Chinese. Due to the limited resources for the target language, the volume of Arabic data is relatively small; we therefore plan to collect more related data for low-resource languages to provide comprehensive guidance for future rumor detection about breaking events on social media. Furthermore, we observe that our models trained on Simplified Chinese data obtain comparable performance on Cantonese, which is mainly written in Traditional Chinese and is popular not only in Guangdong Province, Hong Kong, Macau and even Taiwan, but also widely used by overseas Chinese communities. Though Cantonese is relatively close to Chinese, it incorporates borrowings from British English combined with the local culture, as the examples in Table 6 show. How to use the propagation structure to further improve performance on breaking events in this language needs more systematic and targeted research.
Zero-shot Cross-lingual Rumor Detection
In this section, we evaluate our proposed framework with different source datasets to discuss the zero-shot settings in our experiments. Since the main experiments consider the cross-domain and cross-lingual settings concurrently, we also conduct an experiment in only cross-lingual settings. Specifically, with TWITTER as the target data, we utilize the richly annotated WEIBO dataset as the source data; with WEIBO as the target data, we use the TWITTER dataset as the source data. Table 7 depicts the results in zero-shot cross-lingual settings. It can be seen from the results that RPL-Bre generally performs better in cross-lingual settings than the other variants of our model, which reaffirms that breadth-first ranking is the stable choice in cross-lingual scenarios across different datasets.

Figure 6: Effect of target training data size on Weibo-COVID19.
Zero-shot Cross-domain Rumor Detection
In this section, we also conduct an experiment in only cross-domain settings to evaluate our proposed framework. Specifically, with Weibo-COVID19 as the target data, we utilize the richly annotated WEIBO dataset as the source data; with Twitter-COVID19 as the target data, we use the TWITTER dataset as the source data. With WEIBO as the source data, our model achieves rumor detection performance on the target Weibo-COVID19 data ranging from 86.9% accuracy and 85.7% Macro F1 to 90.4% accuracy and 89.6% Macro F1, which indicates our superior capacity for zero-shot cross-domain rumor detection in Chinese. However, the overall performance on Twitter-COVID19 is relatively worse with TWITTER as the source dataset, on which our model achieves about 72.3% accuracy and 71.1% Macro F1 among the response-ranking variants; we speculate the reason is that the number of events in the TWITTER dataset is smaller than that in WEIBO. This further demonstrates that the key insight for bridging the low-resource gap is to relieve the limitation imposed by dependency on specific language resources, in addition to the specific domain. Our proposed prompt learning framework can alleviate the low-resource issue of rumor detection and reduce the heavy reliance on datasets annotated with specific domain and language knowledge, enabling it to leverage knowledge from WEIBO instead of just TWITTER to detect rumors in Twitter-COVID19 for better performance.
Effect of Target Training Data Size for Few-shot Rumor Detection
To make a fair comparison with the few-shot rumor detection model ACLR proposed by Lin et al. (2022), we also evaluate our model versus ACLR in few-shot settings to investigate performance as the target training data size increases. Figure 6 and Figure 7 show the effect of target training data size on Weibo-COVID19 and Twitter-COVID19. We randomly choose training data of a certain proportion from the target data and use the rest for evaluation. We use the cross-domain and cross-lingual settings concurrently for model training, as in the main experiments. Results show that performance gradually increases with training data size. It can be observed that even when only 20 target examples are used for training, our model still achieves approximately 81% and 78% Macro F1 on the two target datasets Weibo-COVID19 and Twitter-COVID19, respectively, compared with 59% and 65% for ACLR, which further proves that RPL has stronger applicability for improving rumor detection on social media under low-resource regimes.
Limitations
For more targeted future work, we analyze the limitations of our work based on error cases where our model cannot predict the correct label of the claim: • Due to the limitation of the input sequence length in PLMs, models based on PLMs can only process about 30 responsive posts for each claim according to our statistics. Though we tried the two-tier Transformer architecture to alleviate the issue, it leads to feature loss during the transformation from token-level features at the first tier to post-level features at the second tier. Therefore, it does not obtain more satisfactory performance in the zero-shot scenario than our proposed model RPL, which employs response ranking to highlight the potentially evidential and contextual posts. It is necessary to study specialized PLMs for the rumor detection task to better utilize the wisdom of crowds and circumvent the sequence length limit. • Currently, the cross-lingual and cross-domain benchmarks for zero-shot rumor detection on social media still lack normative completeness, since research on zero-shot rumor detection with propagation structure has just begun. Data in low-resource languages such as Arabic, and even dialects, is relatively hard to organize with propagation structures on social media. Although this work collects a small annotated Arabic dataset related to COVID-19 with propagation threads for model evaluation, we plan to evaluate our model on datasets about more breaking events in low-resource domains and/or languages (e.g., Hindi) by leveraging existing datasets with rich annotation. Although there is a long way to go, where there is a will, there is a way. | 2022-12-05T06:42:41.949Z | 2022-12-02T00:00:00.000 | {
"year": 2022,
"sha1": "ca7363451d032c0ffc229b4e5efc390d52ddeebb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ca7363451d032c0ffc229b4e5efc390d52ddeebb",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
259182595 | pes2o/s2orc | v3-fos-license | Electric field stimulation unmasks a subtle role for T-type calcium channels in regulating lymphatic contraction
We previously identified two isoforms of T-type, voltage-gated calcium (Cav3) channels (Cav3.1, Cav3.2) that are functionally expressed in murine lymphatic muscle cells; however, contractile tests of lymphatic vessels from single and double Cav3 knock-out (DKO) mice, exhibited nearly identical parameters of spontaneous twitch contractions as wild-type (WT) vessels, suggesting that Cav3 channels play no significant role. Here, we considered the possibility that the contribution of Cav3 channels might be too subtle to detect in standard contraction analyses. We compared the sensitivity of lymphatic vessels from WT and Cav3 DKO mice to the L-type calcium channel (Cav1.2) inhibitor nifedipine and found that the latter vessels were significantly more sensitive to inhibition, suggesting that the contribution of Cav3 channels might normally be masked by Cav1.2 channel activity. We hypothesized that shifting the resting membrane potential (Vm) of lymphatic muscle to a more negative voltage might enhance the contribution of Cav3 channels. Because even slight hyperpolarization is known to completely silence spontaneous contractions, we devised a method to evoke nerve-independent, twitch contractions from mouse lymphatic vessels using single, short pulses of electric field stimulation (EFS). TTX was present throughout to block the potential contributions of voltage-gated Na+ channels in perivascular nerves and lymphatic muscle. In WT vessels, EFS evoked single contractions that were comparable in amplitude and degree of entrainment to those occurring spontaneously. When Cav1.2 channels were blocked or deleted, only small residual EFS-evoked contractions (~ 5% of normal amplitude) were present. These residual, EFS-evoked contractions were enhanced (to 10–15%) by the KATP channel activator pinacidil (PIN) but were absent in Cav3 DKO vessels. Our results point to a subtle contribution of Cav3 channels to lymphatic contractions that can be unmasked in the absence of Cav1.2 channel activity and when the resting Vm is more hyperpolarized than normal.
Introduction
Collecting lymphatic vessels generate spontaneous, twitch-like contractions that propel lymph centrally, accounting for a substantial fraction of peripheral lymph flow during quiet standing 1,2. These contractions are triggered by action potentials (APs) in lymphatic muscle cells (LMCs), whereby a single AP evokes a transient contraction that is entrained for the length of one or more lymphangions 3,4. Although the ionic conductances underlying the AP in LMCs have not been completely resolved, inward current during the AP spike is carried by voltage-gated calcium channels (VGCCs), with a contribution of voltage-gated sodium channels (VGSCs) in some species [5][6][7][8][9][10][11][12]. T-type VGCCs are also expressed in mesenteric lymphatic vessels from rat 13 and sheep 7 and have been proposed to regulate the frequency of the ionic pacemaker driving spontaneous contractions 10,13.
In a recent study of peripheral collecting lymphatics, we confirmed that L-type VGCCs (Cav1.2, encoded by Cacna1c and hereafter referred to as Cav1.2) and T-type VGCCs (Cav3.1 and Cav3.2, encoded by Cacna1g and Cacna1h, respectively, and hereafter referred to as Cav3.1 and Cav3.2, respectively, or collectively as Cav3) are expressed in LMCs of both rats and mice. We demonstrated through patch clamp protocols that products of Cav1.2 and Cav3 transcription form functional calcium channels gated by depolarization in rat and mouse LMCs 14. However, contractile tests of lymphatic vessels from Cav3.1−/− mice, Cav3.2−/− mice, and even Cav3.1−/−;Cav3.2−/− double knock-out (DKO) mice exhibited nearly identical parameters of spontaneous twitch contractions as wild-type (WT) control vessels (i.e., frequency, amplitude, ejection fraction and fractional pump flow) over a wide pressure range. Thus, although functional Cav3 channels are expressed in murine lymphatic smooth muscle, we concluded that they do not play a detectable role in determining these parameters of spontaneous lymphatic contractions in mice. In contrast, smooth-muscle specific deletion of Cav1.2 abolished all spontaneous contractions, confirming that Cav1.2 channels are critical for their initiation and generation. While these findings may apply only to the mouse and not to other species, commonly used concentrations of the T-channel inhibitors Ni2+ and mibefradil produced consistent inhibition of spontaneous contractions in lymphatic vessels from Cav3.1−/−;Cav3.2−/− (Cav3 DKO) mice, suggesting that these compounds inhibit lymphatic smooth muscle primarily through their actions on Cav1.2 channels; they therefore have limited use in detecting the specific contributions of Cav3 channels. Until truly Cav3-selective inhibitors are developed, genetic deletion strategies are needed to provide definitive answers about the contributions of Cav3 channels to lymphatic contractile function.
In the present study, we developed additional tests to determine whether Cav3 channels play a more subtle role in lymphatic function than revealed by standard analyses of spontaneous lymphatic contractions. First, we hypothesized that Cav3 channels, while insufficient to initiate lymphatic action potentials in the absence of Cav1.2, may participate in the calcium entry required for full-amplitude contractions and normal pacemaking, and that progressive and selective inhibition of Cav1.2 channels by a dihydropyridine antagonist might uncover a role for Ca2+ entry through Cav3 channels. That idea was tested by comparing the contractile responses of WT vessels to increasing concentrations of nifedipine (NIF), in which voltage-dependent Ca2+ entry into LMCs could be mediated by both Cav1.2 and Cav3 channels, with the responses of Cav3 DKO vessels, in which voltage-gated Ca2+ entry into LMCs could only be mediated by Cav1.2 channels. Second, we hypothesized that shifting the LMC resting membrane potential (Vm) to a more negative voltage might unmask a contribution of Cav3 channels to contractions.
Because even slight hyperpolarization can eliminate spontaneous contractions 15, we optimized methods to evoke nerve-independent, twitch contractions from mouse lymphatic vessels using an external field of depolarizing electrical current. In WT vessels, single, short pulses of electric field stimulation (EFS) evoked single, large-amplitude contractions that were entrained along the length of the cannulated lymphatic vessel. This activity could be blocked by NIF in WT vessels and was absent in Cav1.2 KO vessels.
However, in the presence of TTX to block VGSCs and NIF to block Cav1.2 channels, small residual EFS-evoked contractions were present in WT vessels; these were enhanced by the KATP channel activator pinacidil (PIN) but absent in Cav3 DKO vessels, and thus presumably mediated by Cav3 channels. The responses of WT lymphatic vessels were then compared to vessels from Cav3 DKO mice and smooth-muscle specific Cav1.2 KO mice in the presence of TTX, PIN and/or NIF. Collectively, the results of these protocols point to a subtle contribution of Cav3 channels to lymphatic contraction amplitude and pacemaking frequency that can be unmasked when resting Vm is more hyperpolarized than normal.
Results
We hypothesized that, if Ca2+ influx through both Cav1.2 and Cav3 channels is required for full-amplitude contractions and/or to maintain a normal pacemaking frequency, progressive inhibition of Cav1.2 might uncover a role for Cav3. Thus, we predicted that lymphatic vessels from Cav3 DKO mice would be more sensitive than WT vessels to inhibition by NIF. To test this idea, popliteal lymphatics were isolated, cannulated and allowed to establish a regular spontaneous contraction pattern at a fixed intraluminal pressure. NIF was added to the bath in cumulative concentrations from 1 nM to 10 µM, while spontaneous contractions were recorded for 2 min at each concentration. After the experiment, AMP and FREQ were determined from the diameter recording and FPF was calculated as described in Methods. The data were then fit to the Hill equation (when possible) to determine the IC50 values from the concentration-response relationships.
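A minimal sketch of the Hill-equation fit used to obtain IC50 follows (SciPy); the concentration and amplitude arrays are illustrative placeholders, not data from the study:

    import numpy as np
    from scipy.optimize import curve_fit

    def hill(conc, ic50, n):
        # fraction of control response remaining at inhibitor concentration `conc`
        return 1.0 / (1.0 + (conc / ic50) ** n)

    conc = np.array([1e-9, 1e-8, 3e-8, 1e-7, 3e-7, 1e-6])   # [NIF], molar
    amp  = np.array([1.00, 0.95, 0.70, 0.30, 0.05, 0.00])   # normalized AMP (illustrative)
    (ic50, n), _ = curve_fit(hill, conc, amp, p0=(1e-7, 1.0))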
The results of the NIF protocol are illustrated in Fig. 1. Panels A and B show representative recordings of spontaneous contractions in popliteal lymphatic vessels from WT and Cav3 DKO mice in response to progressively higher concentrations of NIF. NIF began to inhibit contraction amplitude (AMP) at about 10 nM in both vessels but completely inhibited AMP and frequency (FREQ) of the Cav3 DKO vessel at 100 nM, whereas the WT vessel required a concentration of 300 nM for complete inhibition. This same pattern is evident in the summary responses shown in Fig. 1C, where there is a slight left-shift in the AMP-[NIF] curve for the Cav3 DKO vessels, with statistically significant differences between the normalized AMP of WT vs. Cav3 DKO vessels at 3×10⁻⁸ and 1×10⁻⁷ M NIF. Likewise, the FREQ-[NIF] curve for Cav3 DKO vessels was left-shifted from the WT curve (Fig. 1E) by ~1/2 log order, as was the FPF-[NIF] curve (Fig. 1D), suggesting enhanced sensitivity of Cav3 DKO vessels to NIF. We also computed normalized FREQ (normalized to the initial average frequency of each vessel before NIF application) because, for concentration-response curves, this parameter is often a more sensitive indicator of a drug effect due to vessel-to-vessel variations in basal FREQ. When FREQ was expressed as the change from control, there was a ~1-1.5 log left-shift in the normalized FREQ-[NIF] curve for Cav3 DKO vessels compared to the curve for WT vessels (Fig. 1F). We repeated this protocol using popliteal lymphatics from Cav3.1−/− and Cav3.2−/− (single KO) mice, as shown in Suppl. Figs. 1-2. The differences between WT and Cav3.1−/− vessels were more subtle than between WT and Cav3 DKO vessels but showed a similar trend. The FREQ-[NIF] curve for Cav3.1−/− vessels was left-shifted by ~1/2 log order (Suppl. Fig. 1E) compared to WT vessels, and the normalized FREQ-[NIF] curve was left-shifted by ~1 log order (Suppl. Fig. 1F). Cav3.2−/− vessels also showed the same trend but with smaller left-shifts in AMP, FREQ, and normalized FREQ (Suppl. Fig. 2E-F). The IC50 values for all protocols are listed in Table 1. These results are consistent with the hypothesis that the loss of Cav3 isoforms contributed to the increased sensitivity of Cav3 DKO vessels to NIF, with possibly a greater contribution from Cav3.1 than Cav3.2 channels.
The results shown in Fig. 1, coupled with our previous findings 14, raise the possibility that the contributions of Cav3 channels to lymphatic contractions are too subtle to detect in standard tests of spontaneous contractions of mouse vessels. We hypothesized that a more definitive role for Cav3 channels might be uncovered if the resting Vm prior to AP initiation was more hyperpolarized than normal, bringing Cav3 channels more into their optimal voltage activation window. Testing this, however, would require not only hyperpolarizing the membrane, but also 1) inhibiting Cav1.2 channels (which would otherwise predominate), 2) blocking any possible contribution from VGSC channels [in which TTX-sensitive Nav1 isoforms are predominant 6], which might also be enhanced at hyperpolarized potentials, and 3) evoking contractions independently of the intrinsic pacemaker, which would likely be inhibited at hyperpolarized resting potentials. We devised the protocol depicted in Fig. 2 to test our hypothesis. The theoretical window currents for Nav1, Cav3 and Cav1.2 channels, relative to the resting Vm of mouse lymphatic smooth muscle, are illustrated in Fig. 2A. At the LMC resting Vm, Cav1.2 channels are predicted to be well within the range of their optimal window current, but Cav3 channels are predicted to be barely within their range. Fig. 2B depicts how this situation would change after inhibition of Nav1 channels 6 by TTX and inhibition of Cav1.2 channels by NIF, as the latter is known to depolarize LMCs by ~10 mV 12.
The subsequent addition of PIN to activate KATP channels would shift Vm to a hyperpolarized value, as we have shown recently 15, where a greater fraction of Cav3 current potentially would be available for activation. Although the membrane would likely be too hyperpolarized to allow spontaneous activation of an AP by the intrinsic pacemaker potential, contractions could potentially be evoked by an external stimulus. An example recording of Vm in mouse LMCs at rest while spontaneous APs are firing, after the application of 1 µM NIF, and after subsequent addition of PIN, is shown in Fig. 2C. Resting Vm was -40 mV, depolarized to -33 mV after NIF, hyperpolarized to -40 mV after addition of 300 nM PIN, and then to ~-50 mV after 1 µM and 3 µM PIN. The results of several such experiments are summarized in Fig. 2D and are consistent with the NIF- and PIN-induced shifts in Vm predicted in Fig. 2B. We were unable to directly determine the amount of LMC depolarization produced by EFS because the high voltage could have damaged the head-stage circuitry of the amplifier during Vm measurement. The amount of PIN-induced hyperpolarization was quite variable between vessels and could be transient [see Fig. 2C and 15].
In addition, the PIN effect on a particular vessel might be sufficient to hyperpolarize Vm out of the range for EFS-mediated depolarization. For these reasons, we tested a 10-fold range of PIN (0.3 to 3 µM) on each vessel, expecting that at least one of the concentrations would produce a degree of hyperpolarization that was sufficient to recruit Cav3 channels and yet could still be overcome by a subsequent EFS pulse.
We then implemented the protocol illustrated in Fig. 3A. Single, short pulses of EFS (0.1-0.2 ms, 90 V) were used to elicit single contractions from WT popliteal lymphatics. The duration of the EFS pulse was set at <0.3 ms because twitch contractions were often slow to recover when stimulus durations exceeded 1 ms and sometimes exhibited prolonged diastolic relaxation times and increased tone for seconds or minutes (note the contractions evoked by 1 and 5 ms pulses in Suppl. Fig. 3). Depending on the baseline contraction FREQ, pressure was lowered to 1 or 2 cmH2O to reduce the rate of spontaneous contractions, allowing a sufficiently long diastolic pause in the contraction cycle during which we could evoke an extra contraction. Thus, EFS pulses were typically delivered within a few seconds after completion of a spontaneous contraction, and we designated EFS-induced contractions as those occurring within 50 ms following an EFS pulse. The amplitudes and durations of the evoked contractions were nearly identical to those of spontaneous contractions (Fig. 3B, with the timing of the EFS pulses shown), and the entrainment of each EFS-evoked contraction wave was similar to that of a spontaneous contraction, as measured from off-line analysis of the spatio-temporal (ST) maps (Suppl. Fig. 4). The ability of EFS to evoke entrained twitch contractions is in general agreement with the findings of McHale et al. 16 in bovine mesenteric lymphatics, except for the specific values of the stimulus parameters, which are expected to vary depending on a number of factors, including the species, vessel size, chamber design, and electrode diameter and placement. Contractions evoked by single EFS pulses (0.1-0.3 ms, 90 V) were not inhibited in the presence of TTX (Fig. 3B). Curiously, the application of TTX (1 µM) in itself had no effect on spontaneous contraction AMP or FREQ in ~50% of vessels (Fig. 3B) but caused a transient cessation of spontaneous contractions in the other ~50% of vessels; however, those vessels recovered after 1-4 min and resumed a normal FREQ and AMP in the continued presence of TTX.
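The 50 ms criterion for designating a contraction as EFS-evoked can be written as a short classification routine; the array layout and names are assumptions, not the authors' analysis code:

    import numpy as np

    def classify_contractions(onset_times, pulse_times, window=0.050):
        """Mark a contraction as EFS-evoked if its onset falls within
        `window` seconds *after* a stimulus pulse (times in seconds)."""
        pulse_times = np.asarray(pulse_times)
        evoked = []
        for t in onset_times:
            d = t - pulse_times
            evoked.append(bool(np.any((d >= 0) & (d <= window))))
        return np.array(evoked)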
Representative recordings are shown for each combination of vessel genotype and inhibitor in Fig. 4. All traces were recorded in the presence of TTX (1 µM). The set of traces at the top (Fig. 4A-C), from a WT vessel, shows that spontaneous contractions were blocked by NIF (1 µM) and that EFS pulses initiated only small contractions (<5 µm in AMP; Fig. 4B). Although these contractions were much weaker than those elicited in the absence of NIF, they conducted over most of the vessel (Suppl. Fig. 4B). The record in Fig. 4B also shows one spontaneous contraction that occurred in the presence of NIF; events of this kind exceeding 3 µm in AMP were extremely rare, were associated with much lower conduction speeds (Suppl. Fig. 4B), and did not consistently conduct over long distances. The addition of PIN (3 µM) in the continued presence of NIF resulted in larger contraction amplitudes in response to identical EFS pulses (Fig. 4C). Spontaneous contractions over 3 µm in AMP were not observed in vessels from Cav1.2 smKO mice (Fig. 4D). In these vessels, EFS pulses initiated only very small or negligible contractions that nevertheless were enhanced in AMP by PIN (Fig. 4E). In contrast, vessels from Cav3 DKO mice showed spontaneous contractions with normal AMP, and EFS pulses evoked additional contractions of equivalent AMP (Fig. 4F). However, in the presence of NIF (1 µM), EFS pulses failed to evoke any residual contractions in these vessels (Fig. 4G), even after the addition of PIN (Fig. 4H).
The data for the various genotypes and pharmacological treatments are summarized in Fig. 5. TTX was present in all protocols. The amplitude of spontaneous contractions was averaged over a 2-min period. The amplitude of EFS-evoked contractions was averaged over the contractions evoked by three EFS pulses, excluding any cases in which the pulse was delivered too close (<50 ms) to a spontaneous contraction to be certain which was the initiating event. There were no significant differences between the average AMP of spontaneous contractions and the average AMP of EFS-evoked contractions (Fig. 5A). However, both were significantly different from the average AMP of EFS-evoked contractions in the presence of NIF alone or NIF plus PIN (either 300 nM, 1 µM or 3 µM, or the largest AMP of any of the three PIN concentrations for each vessel). A Wilcoxon matched pairs signed rank test was used to compare the AMP of EFS-evoked contractions in NIF alone versus NIF plus the PIN concentration yielding the largest AMP. This difference was highly significant, indicating that PIN significantly increased the AMP of EFS-evoked contractions when Cav1.2 channels were blocked. The same analysis is presented in Fig. 5B for vessels from Cav1.2 smKO mice. In contrast to WT vessels, Cav1.2-deficient vessels had extremely small spontaneous contraction amplitudes (≤3 µm in all but one case) and EFS pulses likewise evoked contractions with amplitudes <1 µm. The amplitudes of EFS-evoked contractions were enhanced by all concentrations of PIN, with the difference between EFS alone and EFS plus the most effective PIN concentration (in this case always 3 µM PIN) being highly significant. Thus, the results shown in Fig. 5A and 5B agree in showing that the amplitudes of the residual contractions evoked by EFS when Cav1.2 channels are deleted or blocked are enhanced when the membrane is hyperpolarized by PIN.
Finally, the same analysis is shown in Fig. 5C for vessels from Cav3 DKO mice. As in vessels from WT mice, there was no significant difference between the amplitudes of spontaneous and EFS-evoked contractions. However, only very small contractions (<3 µm on average) could be evoked by EFS in the presence of NIF, and these were not enhanced by any concentration of PIN. Nearly identical results for each of the three genotypes were produced when normalized AMP, rather than raw AMP, was used for the analysis (Suppl. Fig. 5). Collectively, these results suggest that Cav3 channels mediate the PIN-induced enhancement of the residual EFS-evoked contractions when Cav1.2 channels are deleted or blocked.
Discussion
In this study we asked: if functional Cav3 channels are expressed in lymphatic muscle, why do they not contribute a detectable component to the AP in lymphatic muscle 14 or make a significant contribution to the frequency or strength of spontaneous lymphatic contractions? We first examined this issue by comparing concentration-response curves for WT and Cav3 DKO vessels to the Cav1.2 dihydropyridine antagonist NIF, reasoning that lymphatic vessels from Cav3 DKO mice would be more sensitive than WT vessels to inhibition by NIF because WT vessels have both Cav1.2 and Cav3 channels as Ca2+ influx sources whereas Cav3 DKO vessels have only Cav1.2 channels. Vessels from Cav3-deficient mice were indeed more sensitive to NIF (Fig. 1, Suppl. Figs. 1-2), and the effect was more substantial for FREQ (~10-fold more sensitive than WT) than for AMP (~3-fold). Even though the NIF concentrations (30-100 nM) associated with leftward shifts in the AMP and FREQ of Cav3 DKO vessels (Table 1) were well below those causing substantial off-target effects on Cav3 channels (≥3 µM 14), we could not completely rule out the possibility of off-target effects. Nevertheless, the results of that protocol were consistent with Cav3 channels contributing subtly to both the AMP and FREQ of spontaneous lymphatic contractions. We then devised a second set of experiments to test for a subtle role of Cav3 channels, reasoning that they might be mostly inactivated under the standard conditions used previously 14 to assess spontaneous contractions. EFS was used to initiate contractions after Cav1.2 channels had been inactivated, either by nifedipine application or by genetic deletion of Cav1.2 from lymphatic smooth muscle. All vessels were treated with TTX to eliminate any possible contribution of Nav channels, whose activity could drive calcium influx through the sodium-calcium exchanger (NCX) in reverse mode. Under both conditions EFS produced small residual contractions, 2-4 µm in AMP, compared to a normal contraction AMP of ~40 µm. These contractions were enhanced (to 5-10 µm, equivalent to 10-15% of the AMP of a typical spontaneous twitch contraction) after hyperpolarizing the membrane with the KATP channel activator pinacidil prior to the EFS pulse. Importantly, this enhancement was absent in vessels from Cav3 DKO mice (Fig. 5), confirming that the residual EFS-evoked contractions were mediated by Cav3 channels. We conclude that Cav3 channels make a <5% contribution to the spontaneous contraction AMP and/or FREQ of mouse lymphatic vessels under normal conditions, but that this may be enhanced to 10-15% when the resting membrane potential is slightly hyperpolarized.

Methodological limitations. Separating the contributions of Cav1.2 and Cav3 channels to Ca2+ influx has proven difficult in many different cell types, including LMCs. Studies of rat lymphatic vessels suggested a selective role for Cav3 channels in controlling LMC pacemaking 13, based on the effects of inhibition with mibefradil and Ni2+. We previously showed that two Cav3 isoforms are expressed in mouse and rat LMCs and used patch clamp protocols to confirm the presence of functional channels. However, standard contraction tests revealed no significant differences between WT and Cav3 DKO vessels in either the FREQ vs. pressure or AMP vs.
pressure relationships. The typical activation threshold for Cav3 channels is -20 to -30 mV more negative than that for Cav1.2 channels, and window currents for Cav3 channels are similarly left-shifted [17][18][19][20]. These values are estimates from arterial SM because no comparable measurements have been made in lymphatic SM. At the resting Vm that we measure in mouse LMCs (~-35 mV), it is likely that Cav3 channels are almost completely inactivated, unless a more left-shifted splice variant of Cav3 is expressed, as demonstrated for Cav1.2 21. However, the resting Vm is slightly more negative in rat and human mesenteric LMCs [-40 and -45 mV, respectively 15,22], potentially enabling more basal activity of Cav3 channels in those species.
EFS was used in these experiments to override the normal LMC pacemaking mechanism so that contractions could be induced without the involvement of Cav1.2 channels. Although our results suggest that the residual contractions evoked by EFS are mediated by Cav3 channels, EFS could also have increased Ca2+ influx through other smooth muscle cation channels, e.g., TRPC6, TRPM4, and/or Cav2 or Cav1.3 channels, all of which are resistant to NIF. Although Cav1.3 channels in smooth muscle 23,24 are voltage-gated (but less sensitive to dihydropyridine antagonists than Cav1.2 25,26), TRPC6 and TRPM4 channels are relatively insensitive to membrane potential 27, and currents through those TRP channels would not be predicted to be significantly enhanced by PIN-induced hyperpolarization. Both TRPC6 and TRPM4 are expressed in mouse LMCs (our unpublished observations), but there is no evidence for the expression of Cav1.3 in lymphatic muscle, nor have we detected message for Cav1.3 channels in RT-PCR assays of purified mouse LMCs or in scRNA-seq assays (our unpublished observations). Importantly, the possible contributions of TRPC6, TRPM4, and other channels to the residual EFS-evoked contractions should have been the same in Cav3 DKO and WT vessels and are therefore not consistent with the absence of those contractions in Cav3 DKO vessels (Fig. 5B). Another possible explanation for the residual EFS-evoked contractions is that 1 µM NIF may not have completely inhibited Cav1.2 channels, such that hyperpolarization prior to the EFS pulse then recruited Cav1.2 current rather than Cav3 current. Higher concentrations of NIF could possibly have blocked Cav3 channels 28,29, and it was for this reason that we also tested vessels from Cav1.2 smKO mice. Our finding that EFS-evoked contractions in vessels deficient in Cav1.2 (Fig. 5) were of nearly identical AMP as those in WT vessels + NIF argues against this possibility. Additionally, PIN treatment of Cav3 DKO vessels would also have recruited whatever fraction of Cav1.2 channels was not inhibited by NIF (presumably to the same degree as in WT vessels), and yet PIN did not potentiate evoked contractions under the same conditions in Cav3 DKO vessels.

Physiological Relevance. Our results suggest that slight hyperpolarization of mouse LMCs can recruit additional Ca2+ influx through Cav3 channels. One implication is that rat and human LMCs, for which resting Vm levels are slightly more hyperpolarized than mouse LMCs, may normally have a larger (but probably still <15%) contribution of Cav3 channels to the AMP and/or FREQ of spontaneous contractions. This conclusion is consistent with the observations of Lee et al. 13, despite the uncertainties in that study regarding off-target effects of Ni2+ and mibefradil on Cav1.2 channels. Although mouse Cav3 channels normally contribute <5% to the contraction amplitude of mouse LMCs, if mouse LMCs were chronically hyperpolarized, e.g., by an endogenous or exogenous vasoactive agent, rapid depolarization to threshold would be predicted to recruit Cav3 channels to participate in a subsequent AP, potentially enhancing contraction AMP and/or FREQ. This hypothesis remains to be tested.
An incidental finding from our study is that Cav1.2 appears not only to mediate the upstroke of the AP in mouse LMCs (with likely contributions from Nav in rat and human LMCs), but also to modulate the pacemaker. The data in Fig. 1E-F and Suppl. Figs. 1E-F and 2E-F show a ~50% rise in FREQ in response to partial inhibition of Cav1.2 by low concentrations of NIF, suggesting that Ca2+ entry through Cav1.2 channels normally retards the pacemaker. As multiple ion channels with interrelated activities comprise the currents that initiate and contribute to the LMC action potential, there are several potential mechanisms by which sub-maximal NIF concentrations could drive increased frequency. Of note, 1 µM NIF results in a significant depolarization, and presumably sub-maximal concentrations might also depolarize the cell toward the threshold potential. Additionally, activation of Cav1.2 channels with the agonist BayK8644 dramatically lengthens the duration of the AP plateau phase 30, whereas inhibition of Cav1.2 and reduced calcium influx during the AP would be expected to accomplish the opposite, as there would be reduced activation of Ano1 and potentially of NCX. A reduction in the plateau period would shorten the overall electrical cycle, and thus a higher FREQ could be achieved. Another possibility is that, while cytosolic calcium is typically considered to drive depolarization 31, differential spatial coupling of calcium store release channels to Ano1 and Cav1.2 channels versus hyperpolarizing channels such as BK 32,33 could provide a condition in which Ca2+ entry through Cav1.2 channels normally retards the pacemaker.
Clinical Relevance. The relevance of Cav3 channels to lymphatic function in human medicine relates to their possible therapeutic targeting to reverse lymphatic collector dysfunction in chronic lymphedema. Olszewski's observations of patients with impaired lymphatic smooth muscle contraction strength and lower contraction frequency, or even complete loss of spontaneous contractions in various stages of secondary lymphedema [34][35][36], point to a problem involving disruption of the pacemaking mechanism that potentially could be corrected pharmacologically. However, eventual therapeutic targeting of ionic dysfunction in human lymphatic muscle will require additional insights into the specific types of ion channels involved in pacemaking, the specific isoforms of those channels expressed in humans (which may differ from those in rodents), and the development of selective inhibitors to block those channels. Whether Cav3 channels are expressed in human lymphatic muscle and are critical to some aspect of lymphatic function remains unknown at the present time.
Methods
Animal procedures.All procedures were approved by the animal care committee at the University of Missouri and complied with the standards stated in the "Guide for the Care and Use of Laboratory Animals" (National Institutes of Health, revised 2011).The study is reported in accordance with ARRIVE guidelines.
Animals. C57BL/6J wild-type (WT) mice were purchased from Jackson Laboratory (JAX, Bar Harbor, ME, USA). Cav3.1−/− (Cacna1g null) mice on the C57BL/6J background, originally generated by Hee-Sup Shin (Korea Institute of Science and Technology) 37, were a gift from Jeffrey Molkentin (University of Cincinnati) and rederived at the MMRRC, Columbia, MO, in the C57BL/6 background. Cav3.2−/− mice, originally generated by Chen et al. 38, were obtained from JAX (B6;129-Cacna1h tm1Kcam/J; #013770) and bred into the C57BL/6 background for at least 8 generations. Cav3.2−/− and Cav3.1−/− mice were bred to generate Cav3.1−/−;Cav3.2−/− double KO mice on the C57BL/6 background. Myh11-CreERT2 mice (B6.FVB-Tg(Myh11-cre/ERT2)1Soff/J), obtained from Dr. Stefan Offermanns, were bred with Cav1.2 fl/fl mice (Cacna1c tm3Hfm/J; #024714), which were purchased from JAX, to generate Myh11-CreERT2;Cav1.2 fl/fl mice (hereafter referred to as Cav1.2 smKO mice). All genotypes were verified by PCR. Mice from the latter strain were injected with tamoxifen (10 mg/ml, 100 µl i.p.) for 5 days and allowed to recover for 2 weeks before being used for experiments. Mice were provided ad libitum access to food and water and housed under normal light and dark cycles in cages of up to five mice. Mice of either sex (except for Cav1.2 smKO mice) were studied at 5-10 weeks of age (18-25 g).

Lymphatic vessel isolation. Mice were anesthetized with pentobarbital sodium (60 mg kg⁻¹, i.p.). An incision was made on the dorsal-medial side of either leg from the ankle to the groin to access the popliteal lymphatics. An excised lymphatic vessel was pinned on a Sylgard platform (Sylgard® 184, Dow Corning, Midland, MI, USA) in Krebs buffer supplemented with 0.5% albumin and isolated by dissection from the surrounding connective tissue and fat. After surgery, the animal was euthanized.
Pressure myography. An excised lymphatic vessel containing at least one valve was transferred to a 3 mL chamber where it was cannulated onto two micropipettes and pressurized. The bath was exchanged at a rate of 0.5 ml/min with Krebs buffer and equilibrated for 30-60 minutes at 37 °C with pressure set to 3 cmH2O, as previously described 14. The pipettes contained 0.5% albumin-supplemented Krebs buffer.
Vessels used for further experimentation (except those from Cav1.2 smKO mice) developed robust spontaneous contractions that were entrained over the entire vessel length, with amplitudes exceeding 30% at a pressure of 3 cmH2O. Inner diameter at a representative region was measured continuously from video images using digital edge-detection 39. Pressures and diameter were digitized using a National Instruments A-D system (Austin, TX) under the control of a LabVIEW program, as described previously 40.
Sharp electrode recordings of Vm. In separate experiments, Vm was recorded in the smooth muscle cell layer of pressurized WT mouse lymphatic vessels to verify the extent of PIN-induced hyperpolarization after L-type VGCC inhibition. To permit stable recordings of Vm in contracting vessels, wortmannin (1-3 µM, 20-30 min) was used to inhibit myosin light chain kinase and blunt vessel movement; the concentration and exposure time were adjusted to preserve minimal contractions (<5 µm) that confirmed preservation of viability. The smooth muscle layer was impaled with an intracellular microelectrode (300-350 MΩ) filled with 1 M KCl, and Vm was recorded using an NPI SEC-05x amplifier (ALA Instruments, Farmingdale, NY) as previously described 31. The amplifier output was digitized and sampled at 1 kHz using a D-A interface (National Instruments). After a successful impalement, Vm was allowed to stabilize for 15-30 seconds. The most negative value during the AP was approximately -35 mV. After recording multiple contraction cycles, 1 µM NIF was added to the bath solution to inhibit L-type Ca2+ channels. In some cases the impalement was lost due to the mixing procedure and, when that happened, attempts were made to impale the same cell or an adjacent cell and continue the protocol. Subsequently, PIN was added in cumulative concentrations (0.3, 1, 3 µM) while recording Vm. Once the recording was completed, the electrode was retracted from the cell and the recorded values were corrected for the offset potential.

Electric field stimulation. EFS was achieved using two 0.5 mm platinum wires (Warner Instruments, #64-1942), separated by 2.5 mm within the 3 mL bath chamber. The wires were positioned 2 mm above the bottom of the observation chamber and insulated except for the terminal 4 mm. The cannulated vessel was positioned 1 mm from the chamber bottom, equidistant between the two wires. A Grass S48 stimulator provided the depolarizing current. Initial tests showed that single twitch contractions, of amplitude comparable to those of spontaneous contractions, could be elicited with short-duration (<1 ms) single pulses of 80-90 V; 90 V pulses were routinely used to ensure consistent responses. The synch output of the stimulator was amplified and digitized using an A-D interface (National Instruments, Austin, TX) to document pulse delivery in register with the diameter recording. For EFS protocols, pressure was usually set to either 1 or 2 cmH2O, depending on the spontaneous contraction rate, to provide a contraction pattern with a sufficiently long diastolic period to allow single EFS pulses to be delivered during lymphatic diastole.
Contraction wave analysis. To quantify the degree of entrainment of EFS-evoked contraction waves, brightfield videos of spontaneous contractions were acquired at video rates ranging from 30 to 50 fps. Recorded videos were then stored for offline processing, analysis, and quantification of the conduction speed. Videos of contractions were processed frame by frame to generate two-dimensional spatiotemporal maps (STMs) representing the measurement of the outside diameter (encoded in 8-bit grayscale) over time (horizontal axis) at every position along the vessel (vertical axis), as described previously 3. All video processing and analyses were performed using a set of custom-written Python programs. Conduction speed was determined for each wave from the slope of the corresponding band on the ST map (by linear fit of the points defining the leading edge), and the speeds were averaged for all the contractions in a given video.
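A minimal sketch of the leading-edge slope fit follows; the authors' programs are not public in this excerpt, so the function below is illustrative only (positions in mm, times in s):

    import numpy as np

    def conduction_speed(edge_times, edge_positions):
        """Slope of position vs. time along a wave's leading edge = speed (mm/s)."""
        slope, intercept = np.polyfit(np.asarray(edge_times),
                                      np.asarray(edge_positions), 1)
        return slope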
Experimental Protocols.After a vessel established a consistent pattern of spontaneous contractions, one of two protocols was conducted.
The first protocol assessed the concentration-dependent inhibition of spontaneous contractions by NIF.
After equilibration and establishment of a consistent pattern of spontaneous contractions at constant pressure, bath perfusion was stopped and NIF was added to the bath in cumulative concentrations (1 nM to 10 µM). Pressure was set at either 1 or 2 cmH2O, depending on the spontaneous contraction rate of a given vessel. Contraction responses were recorded for 2-3 min before the next concentration was applied, and the protocol was completed within 20 min, a time period found previously not to produce significant effects on contraction FREQ or AMP due to bath evaporation.
For the second protocol, single voltage pulses (typically 0.1-0.3 ms, 90 V) were applied during the diastolic phase of the contraction cycle, with the pulses delivered 30-60 s apart and timed to produce minimal disruption to the spontaneous contraction pattern; this was repeated 3 times. With pressure maintained at 3 cmH2O, the bath perfusion was stopped and TTX (1 µM) applied. After assessing the effect of TTX on the contraction pattern for 3-4 min, three identical stimulus pulses were again delivered (30-60 s apart). For WT vessels, NIF (1 µM) was subsequently added to the bath and after 4 min the stimulus pulses were repeated. In a similar set of tests, vessels from Cav1.2 smKO mice were used in lieu of NIF treatment. In both cases, the KATP channel activator pinacidil (PIN) was then added to the bath in increasing concentrations (0.3, 1, 3 µM) to hyperpolarize LMCs, allowing 2-3 min equilibration at each concentration before delivering stimulus pulses. Each time a drug was added to the bath, the light path was temporarily blocked to create a vertical blanking artifact on the diameter trace. Tests using the same protocol were conducted on vessels from Cav3.1−/−;Cav3.2−/− mice. In each case the total protocol was completed in less than 20 min.
Fractional Pump Flow (FPF) = EF • FREQ (2)

Figure 1: Cav3.1−/−;Cav3.2−/− popliteal lymphatics are more sensitive to inhibition by NIF than WT lymphatics. A) Response of a WT popliteal lymphatic vessel to increasing concentrations of NIF (applied cumulatively). Each contraction is a downward deflection (individual contractions cannot be resolved at this compressed time scale). Vertical lines are intentional artifacts created by blanking the light path to mark when a new concentration was added, followed by ~10 s of mixing. Pressure was held constant at 2 cmH2O. The cumulative DMSO concentration was <0.4% and without effect alone. B) Response of a Cav3.1−/−;Cav3.2−/− popliteal lymphatic to the same NIF protocol. Contractions in the Cav3.1−/−;Cav3.2−/− vessel are completely inhibited at 100 nM NIF whereas the WT vessel requires at least 300 nM NIF to block contractions. C) Summary data for normalized AMP (normalized to the average AMP during the control period) as a function of NIF concentration. The curve for the Cav3.1−/−;Cav3.2−/− vessels is shifted to the left by ~1/2 log order, with two concentrations being significantly different. Summary data for FPF (D) and FREQ (E) as a function of NIF concentration. One concentration was significantly different for each parameter. F) Summary data for normalized FREQ as a function of NIF concentration (normalized to the average FREQ during the control period). Two concentrations were significantly different and the curve for the Cav3.1−/−;Cav3.2−/− vessels was shifted to the left by ~1 log order. Statistical tests were two-way repeated measures ANOVAs with Tukey's multiple comparison post-hoc tests (*, p<0.05). WT: N=5; n=9. Cav3 DKO: N=8; n=15.
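Given the relation FPF = EF • FREQ above, and assuming the conventional diameter-based ejection fraction EF = (EDD² − ESD²)/EDD² from end-diastolic and end-systolic diameters (this EF definition is an assumption; it is not stated in the excerpt), the calculation can be sketched as:

    def fractional_pump_flow(edd_um, esd_um, freq_per_min):
        """FPF = EF * FREQ; EF from end-diastolic/-systolic diameters (µm)."""
        ef = (edd_um**2 - esd_um**2) / edd_um**2
        return ef * freq_per_min  # normalized volumes pumped per minute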
Data analysis
Data were collected and analyzed using LabVIEW (National Instruments, Austin, TX), Excel (Microsoft, Redmond, WA) and Prism 8 (GraphPad, La Jolla, CA, USA). Original recordings were plotted in IGOR (WaveMetrics, Oswego, OR). IC50 values were determined in Prism or IGOR. The four standard tests in Prism for normality (Anderson-Darling, D'Agostino & Pearson, Shapiro-Wilk, Kolmogorov-Smirnov) were used to evaluate each data set and revealed that at least half of the data sets were not normally distributed. Accordingly, non-parametric Kruskal-Wallis one-way ANOVAs were performed to compare the amplitude of spontaneous and EFS-induced contractions across pharmacological treatments for each genotype, and Wilcoxon matched-pairs signed rank tests were used to compare pairs of data sets within each genotype. The specific tests used for each protocol are indicated in the figure legends. The data are expressed as mean ± standard error of the mean. P values < 0.05 were considered statistically significant, but other significance levels are marked when appropriate. N refers to the number of animals and n refers to the number of vessels or cells included per group. | 2023-09-24T06:16:01.094Z | 2023-06-05T00:00:00.000 | {
"year": 2023,
"sha1": "485ed10b79dc6f73ebde7385e4600059792a816f",
"oa_license": "CCBY",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10275045",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "47cc70739953c6a95f78bd92c28a9cd4e392438c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
243847434 | pes2o/s2orc | v3-fos-license | Generative Dynamic Patch Attack
Adversarial patch attack is a family of attack algorithms that perturb a part of an image to fool a deep neural network model. Existing patch attacks mostly consider injecting adversarial patches at input-agnostic locations: either a predefined location or a random location. This attack setup may be sufficient for attack but has considerable limitations when used for adversarial training. Thus, robust models trained with existing patch attacks cannot effectively defend against other adversarial attacks. In this paper, we first propose an end-to-end patch attack algorithm, Generative Dynamic Patch Attack (GDPA), which generates both patch pattern and patch location adversarially for each input image. We show that GDPA is a generic attack framework that can produce dynamic/static and visible/invisible patches with a few configuration changes. Second, GDPA can be readily integrated into adversarial training to improve model robustness to various adversarial attacks. Extensive experiments on VGGFace, Traffic Sign and ImageNet show that GDPA achieves higher attack success rates than state-of-the-art patch attacks, while the adversarially trained model with GDPA demonstrates superior robustness to adversarial patch attacks compared to competing methods. Our source code can be found at https://github.com/lxuniverse/gdpa.
Adversaries can craft perceivable patches to replace part of an image for adversarial attack. The advantage of this perceivable patch attack is that it is more practical in the real world than imperceptible adversarial attacks: adversaries can paste a sticker on a traffic sign to attack the autopilot system of autonomous vehicles. There are several situations where patch attack is of significant concern due to its security threats: 1) an attacker uses adversarially designed eyeglass frames [31] to fool face recognition (Fig. 1a), 2) an attacker pastes adversarially crafted stickers [8] on stop signs to fool traffic sign classification (Fig. 1b), and 3) a universal adversarial patch [3] causes targeted misclassification of any object (Fig. 1c). However, a significant limitation is that most patch attack algorithms do not consider the problem of finding the best location in an image to inject the patch. Existing patch attack algorithms either use a fixed position as the patch location [8,31,41] or learn patches that are universal across different locations [3,17,40]. The fixed-location methods show high attack success rates but perform poorly at other locations, while the random-location patches do not achieve competitive attack success rates compared to the fixed-location methods. To address this issue, in this paper we propose a Generative Dynamic Patch Attack (GDPA), which learns image-dependent patch pattern and patch location altogether. GDPA is inspired by the idea that different images have different sets of weak pixels, since DNN classifiers typically focus on different image regions when queried by different images [33]. Therefore, an image-dependent dynamic patch attack should be more effective than a fixed-location or random-location patch attack.
On the other hand, due to the security threats of adversarial attacks, a variety of adversarial defense algorithms have been developed recently [12,21,30], among which adversarial training (AT) [12] has proved the most effective for hardening neural networks against adversarial attacks. Although AT with the PGD attack [22] is the most scalable and effective method for learning robust models, a recent work of Wu et al. [37] shows that AT exhibits limited effectiveness against three high-profile physically realizable patch attacks: the eyeglasses attack [31], the sticker attack [8] and the adversarial patch [3]. To overcome this limitation, Wu et al. [37] propose a Rectangular Occlusion Attack (ROA) for adversarial training, which yields models highly robust to patch attacks. ROA is a two-stage patch attack algorithm, which first uses a gray pattern to find the location in the image that maximizes the cross-entropy loss via grid search, and then optimizes the patch pattern at the identified position. However, this two-stage patch attack method is suboptimal and has quite a few limitations (see a discussion in Sec. 2), which motivates us to propose GDPA, which learns patch pattern and patch location simultaneously. Moreover, to improve inference efficiency, GDPA employs a generator to generate patch pattern and location with one forward propagation, without the expensive iterative optimization employed by other attack algorithms, such as PGD [22] and ROA [37]. Concretely, we make the following contributions:
• We introduce a generic patch attack method, GDPA, that can generate dynamic/static and visible/invisible patch attacks with a few configuration changes.
• GDPA employs a generator to generate patch pattern and patch location altogether per image, and reduces the inference time substantially (e.g., 40-50x faster).
• GDPA is an end-to-end differentiable patch attack algorithm and can be readily integrated for adversarial training to defend against high-profile patch attacks.
• Experiments show that GDPA has superior attack success rates over strong patch attack baselines, and the adversarially trained model with GDPA is more robust to various adversarial attacks than state-of-the-art methods.
Related Works
Adversarial Attack Most adversarial attack methods focus on adding imperceptible perturbations covering the entire image [10,12,35]. Recently, researchers have shown that perturbing a part of an image with perceptible noise is another practical method to attack DNN models [3,8,17,18,19,31,37,40,41]. Sharif et al. [31] propose adding eyeglasses with a specially constructed frame texture to attack face recognition. Eykholt et al. [8] show that adding specific rectangular solid-colored patches to traffic signs can fool traffic sign classification. LAVAN [17] learns visible and localized patches that are transferable across images and locations by training the pattern at a random location with a randomly picked image in each iteration. Recently, Wu et al. [37] propose a Rectangle Occlusion Attack (ROA) to generate adversarial patches for adversarial training. ROA uses an exhaustive search (ROA-Exh) or a gradient-guided search (ROA-Grad) to find the location that maximizes the cross-entropy (CE) loss and optimizes the patch pattern afterwards. Specifically, ROA-Exh searches exhaustively over image locations with a stride, and ROA-Grad uses the magnitude of the gradient of the CE loss as the sensitivity of regions to identify the top candidate regions and accelerate the location search. However, ROA has some considerable limitations. Firstly, it employs a two-stage attack generation, which separates the process of finding the patch location and the patch pattern into two steps: it first finds the position using a gray pattern and then optimizes the patch pattern at that position. Hence, the location identified by a gray pattern may not be the best patch location for the optimized pattern. Secondly, the two-stage optimization of ROA is computationally expensive and slows down the patch generation process during inference. Different from these algorithms, our GDPA trains a generator to generate the patch pattern and location altogether for each input image. Moreover, GDPA is end-to-end differentiable, which entails efficient optimization and easy integration for adversarial training.
Before GDPA, several works [2,27,28,38] proposed training generators to generate perturbations to improve the fooling rate and inference speed. Poursaeed et al. [27] present a trainable network that transforms input images into adversarial perturbations. Baluja and Fischer [2] train feed-forward neural networks in a self-supervised manner to generate adversarial examples against a target network. Different from these generator-based attack methods, our GDPA generates both patch pattern and patch location altogether, and employs an affine transform to synthesize adversarial patch examples.
Adversarial Defense Defending against adversarial attacks is a challenging task. Different types of defense algorithms have been proposed in the past few years [1,4,7,9,13,14,15,20,21,25,37,39], among which adversarial training (AT) [22] has proved the most effective against adversarial attacks. AT employs adversarial examples as data augmentation to train a robust model. It has been shown that this method can improve the defense accuracy effectively and can sometimes even improve accuracy over a model trained only on the original clean dataset [36]. However, a recent work of Wu et al. [37] shows that robust models trained by AT exhibit limited effectiveness against high-profile patch attacks [3,8,31]. As the first work attempting to defend against patch attacks, Wu et al. [37] propose DOA, which performs standard adversarial training with the Rectangle Occlusion Attack (ROA). As we discussed earlier in this section, ROA has some considerable limitations, which limit its performance on adversarial defense. Our GDPA does not suffer from those limitations of ROA, and is end-to-end differentiable and more amenable to adversarial training.
GDPA is a framework that conducts dynamic patch attack by generating the adversarial patch pattern and patch location altogether for each input image. It has a generic formulation that can generate dynamic/static and visible/invisible patch attacks. As an overview, Figure 2 illustrates the GDPA generation pipeline, while Figure 3 demonstrates how GDPA can be utilized to train an adversarially robust model.
Problem Formulation
We start with the definition of dynamic patch attack. Let D = {X, Y} denote a training dataset, where X is a set of images of size w × h and Y are their corresponding labels. Let T : X → Y denote a target model that we attempt to attack. Given an image x ∈ X and a target model T, our dynamic patch attack aims to find a pattern of size w′ × h′ and a position in the image such that, once placed on image x, the patch can mislead the target model.
Localized Pattern Generation
One crucial component of GDPA is the generator that generates the patch pattern and patch location for a given image. Since patch pattern and patch location are coupled to a given image, we design a generator G with two heads that share the same latent features extracted by an encoder. Specifically, our generator includes an encoder G_E to extract the feature representation of image x, followed by a location decoder G_L and a pattern decoder G_P to generate the location and pattern of the adversarial patch:

(l_x, l_y) = tanh(G_L(G_E(x)) / β),    pattern = (1/2)(1 + tanh(G_P(G_E(x)))),

where l_x and l_y are the location (2D coordinates) of the patch in image x, with the origin at the center of the image, and pattern is the patch pattern of size w′ × h′. To keep the patch location l_x and l_y within the boundary of the image, we use a tanh function to constrain l_x and l_y to the range [−1, 1], where β is a hyperparameter that controls the slope of the tanh. All experiments in this paper use β = 3000, which we found to work well across a variety of architectures and datasets. Similarly, we use another tanh to constrain the pattern values to the range [0, 1]. Specifically, we use a convolutional neural network as our encoder network G_E, with an architecture adapted from the work of image-to-image translation [43]. On top of G_E, we use two fully-connected networks as our decoders G_P and G_L, respectively. Due to the page limit, details of the network architectures are provided in the Appendix.
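A hedged PyTorch sketch of this two-headed generator follows. The encoder is a small stand-in (the paper adapts an image-to-image translation backbone [43]), the layer sizes are illustrative, and the exact placement of β (dividing the location logits inside the tanh) is an assumption consistent with the saturation behaviour described in Appendix D.3:

```python
import torch
import torch.nn as nn

class GDPAGenerator(nn.Module):
    """Sketch of the two-headed generator: shared encoder G_E, location
    decoder G_L and pattern decoder G_P. Layer sizes are illustrative,
    not the paper's exact architecture."""
    def __init__(self, patch_w=32, patch_h=32, feat_dim=256, beta=3000.0):
        super().__init__()
        self.beta = beta
        self.patch_w, self.patch_h = patch_w, patch_h
        self.G_E = nn.Sequential(                 # stand-in encoder
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.G_L = nn.Linear(feat_dim, 2)                       # (l_x, l_y)
        self.G_P = nn.Linear(feat_dim, 3 * patch_w * patch_h)   # patch pattern

    def forward(self, x):
        z = self.G_E(x)
        # tanh keeps locations in [-1, 1]; dividing by beta (assumed form)
        # controls the slope, matching the small-/large-beta behaviour
        # described in the appendix.
        loc = torch.tanh(self.G_L(z) / self.beta)
        pattern = 0.5 * (torch.tanh(self.G_P(z)) + 1.0)         # values in [0, 1]
        pattern = pattern.view(-1, 3, self.patch_h, self.patch_w)
        return loc, pattern
```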
Weighted Adversarial Patch Injection
With the generated patch location and patch pattern, we then define a function to inject the patch into image x. Standard adversarial attacks [22] employ an additive function to inject noise: x′ = x + p, where p is an imperceptible adversarial perturbation. Recently, other forms of perturbation, such as multiplicative ones, x′ = x ⊙ m [42], have been explored. In addition, LAVAN [17] employs (1 − m) ⊙ x + m ⊙ p with a binary mask m ∈ {0, 1}^{w×h} to generate patch-attack adversarial examples. Inspired by LAVAN, we extend this function by relaxing the binary mask to a continuous mask m ∈ [0, 1]^{w×h} for adversarial patch injection. Specifically, we employ the weighted adversarial patch injection x_adv = (1 − m) ⊙ x + m ⊙ p, with m ∈ [0, 1]^{w×h}, which is a convex combination of the original image x and the patch pattern p with the weight defined by m. We find this relaxed version more flexible and easier to optimize than the one LAVAN explored. Next, we discuss how to use the generated (l_x, l_y) and pattern to inject an adversarial patch into image x.
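Because the injection is a pixel-wise convex combination, it reduces to a single tensor expression; a minimal sketch (the tensor shapes are our assumption):

```python
import torch

def inject(x: torch.Tensor, p: torch.Tensor, m: torch.Tensor) -> torch.Tensor:
    """Weighted adversarial patch injection x_adv = (1 - m) * x + m * p.
    x and p are image-sized (B, C, H, W); m is a soft mask with values in
    [0, 1], broadcastable to x (e.g., shape (B, 1, H, W))."""
    return (1.0 - m) * x + m * p
```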
Differentiable Affine Transformation
We employ an affine transformation in GDPA to inject adversarial patches into images. To make the whole pipeline differentiable w.r.t. l_x and l_y, bilinear interpolation is used to estimate the pixel values that do not fall on the pixel grid after transformation. By doing this, the whole pipeline is fully differentiable and the gradient can be back-propagated end-to-end to update the parameters of generator G. Specifically, we adopt the affine transformation and image sampling method of Spatial Transformer Networks [16] to define a differentiable translate operator, which can translate a source image to a target image by a displacement of (l_x, l_y).
We first use an affine transform to compute the pixel index relationship between the source image and the target image:

(x_i^s, y_i^s)^T = [θ_11 θ_12 θ_13; θ_21 θ_22 θ_23] (x_i^t, y_i^t, 1)^T,

where (x_i^t, y_i^t) is the pixel index of the target image, and (x_i^s, y_i^s) is the corresponding pixel index in the source image. We set θ_11 = 1, θ_21 = 0, θ_12 = 0, θ_22 = 1, θ_13 = w/2 · l_x and θ_23 = h/2 · l_y for translation. Thus, we have x_i^s = x_i^t + w/2 · l_x and y_i^s = y_i^t + h/2 · l_y, where l_x, l_y ∈ [−1, 1]. Since (x_i^s, y_i^s) are continuous variables, we use bilinear interpolation to sample the pixel values from the source image:

v_i = Σ_j Σ_k u_jk · max(0, 1 − |x_i^s − k|) · max(0, 1 − |y_i^s − j|),

where u_jk is the pixel value at index (j, k) of the source image, and v_i is the output value of pixel i at index (x_i^t, y_i^t) of the translated image. With the affine transform and bilinear sampler described above, we have a differentiable translate operator, which we denote Translate() in the rest of the paper. Figure 2 illustrates the GDPA generation pipeline, which includes the three components described above: the patch pattern and location generator, the differentiable affine transform, and the weighted adversarial patch injection, producing an image-dependent dynamic patch attack.
Figure 3: The GDPA-AT pipeline. Given an image, GDPA generates an adversarial patch to maximize the loss of classifier T, while classifier T learns from the patch attack to minimize its loss.
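PyTorch exposes this affine-grid-plus-bilinear-sampling scheme through F.affine_grid and F.grid_sample, so a Translate() operator can be sketched in a few lines; the sign convention of the grid offset below is an assumption to verify against the intended displacement direction:

```python
import torch
import torch.nn.functional as F

def translate(img, l_x, l_y):
    """Differentiable translation of img (B, C, H, W) by (l_x, l_y) in
    [-1, 1], i.e. in units of half the image size. Gradients flow back to
    l_x and l_y through the bilinear sampling."""
    B = img.size(0)
    theta = torch.zeros(B, 2, 3, device=img.device, dtype=img.dtype)
    theta[:, 0, 0] = 1.0
    theta[:, 1, 1] = 1.0
    # affine_grid maps output coordinates to input coordinates, so shifting
    # the content by +l corresponds to a grid offset of -l (convention to verify).
    theta[:, 0, 2] = -l_x
    theta[:, 1, 2] = -l_y
    grid = F.affine_grid(theta, img.size(), align_corners=False)
    return F.grid_sample(img, grid, mode="bilinear", align_corners=False)
```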
Generative Dynamic Patch Attack
As shown in Figure 2, we introduce an initial mask m_center of the same size as the input image, in which the center part (of patch size) has value 1 and the rest 0. Then we use the affine transform Translate() to translate m_center by a displacement of (l_x, l_y):

m = α · Translate(m_center, l_x, l_y),

where α ∈ [0, 1] is a hyperparameter that controls the visibility of the adversarial patch. When α = 1, the patch is completely visible and replaces the original image pixel values; otherwise, the visibility of the adversarial patch is lower. In practice, we can use a small value of α to generate human-imperceptible adversarial patches. Similarly, we generate a translated patch pattern. As shown in Figure 2, once pattern is generated, we zero-pad it to create a pattern p_center of the same size as the input image, with pattern at the center. We then translate p_center by (l_x, l_y) via the affine transform: p = Translate(p_center, l_x, l_y).
Finally, we generate a GDPA adversarial example for image x by x_adv = (1 − m) ⊙ x + m ⊙ p. As we can see, all the components in Figure 2 are differentiable. Therefore, the whole GDPA generation pipeline is fully differentiable and can be optimized efficiently with gradient-based methods. For a non-targeted attack, the generator is trained to maximize the classification loss of the target model, i.e., argmin_G −L_CE(T, x_adv, y). We can also launch a targeted patch attack to fool the target model T into misclassifying an input x as a target class: argmin_G L_CE(T, x_adv, y_target).
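Putting the pieces together, a hedged sketch of one GDPA forward pass and a non-targeted generator update, reusing the hypothetical GDPAGenerator, translate() and inject() helpers sketched above:

```python
import torch
import torch.nn.functional as F

def gdpa_example(G, x, alpha=1.0):
    """One GDPA forward pass: generate (l_x, l_y) and pattern, translate the
    centered mask and zero-padded pattern, then inject (helpers as above)."""
    B, C, H, W = x.shape
    loc, pattern = G(x)                                    # loc: (B, 2)
    m_center = torch.zeros(B, 1, H, W, device=x.device)
    p_center = torch.zeros(B, C, H, W, device=x.device)
    h0, w0 = (H - G.patch_h) // 2, (W - G.patch_w) // 2
    m_center[:, :, h0:h0 + G.patch_h, w0:w0 + G.patch_w] = 1.0
    p_center[:, :, h0:h0 + G.patch_h, w0:w0 + G.patch_w] = pattern
    m = alpha * translate(m_center, loc[:, 0], loc[:, 1])  # m = a*Translate(...)
    p = translate(p_center, loc[:, 0], loc[:, 1])
    return inject(x, p, m)

def generator_step(G, T, x, y, opt_G):
    """Non-targeted generator update: argmin_G -L_CE(T, x_adv, y)."""
    loss = -F.cross_entropy(T(gdpa_example(G, x)), y)
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return loss.item()
```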
Details of the GDPA training algorithm can be found in Algorithm 1.
Adversarial Training with GDPA
Adversarial training with the PGD attack exhibits limited effectiveness against high-profile patch attacks [37]. In this section, we discuss how to utilize GDPA for adversarial training to improve model robustness against high-profile patch attacks. Figure 3 illustrates the GDPA adversarial training (GDPA-AT) pipeline used to train a robust model against patch attacks. Similar to Generative Adversarial Networks [11], GDPA-AT trains generator G and target classifier T iteratively to optimize the following minimax objective:

min_T max_G E_{(x,y)∼D} L_CE(T, x_adv, y),

where the inner maximization step optimizes generator G to maximize the classification loss of T, while the outer minimization step optimizes target classifier T to minimize the classification loss. Unlike traditional adversarial training, in which the inner maximization step usually optimizes an adversarial example x_adv directly, our GDPA-AT optimizes a generator G that generates a patch attack with one forward propagation. As the iterative training proceeds, the generator G searches for the weakest image region to attack classifier T at each iteration, while T learns from the current patch attacks and becomes more resilient to these attacks over time. Details of our GDPA-AT algorithm are described in Algorithm 2.
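A schematic of this alternating minimax loop, again built on the hypothetical gdpa_example() helper from the previous sketch:

```python
import torch.nn.functional as F

def gdpa_at(G, T, loader, opt_G, opt_T, epochs=1):
    """Schematic of GDPA adversarial training (Algorithm 2): G maximizes
    T's loss on patched inputs, then T minimizes it on fresh patches."""
    for _ in range(epochs):
        for x, y in loader:
            # Inner maximization: update the generator.
            loss_G = -F.cross_entropy(T(gdpa_example(G, x)), y)
            opt_G.zero_grad()
            loss_G.backward()
            opt_G.step()
            # Outer minimization: update the classifier; detach the patched
            # input so this step does not back-propagate into G.
            loss_T = F.cross_entropy(T(gdpa_example(G, x).detach()), y)
            opt_T.zero_grad()
            loss_T.backward()
            opt_T.step()
```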
Algorithm 1: GDPA training. Algorithm 2: GDPA-AT training — Input: training set D; Output: target classifier T and generator G. Initialize classifier T and generator G; for each training epoch and each (x, y) ∈ D, perform the alternating generator and classifier updates of the minimax objective above.
Experimental Results
We now validate GDPA on benchmark datasets for adversarial patch attack and adversarial defense. Specifically, we evaluate the performance of GDPA on patch attack in Section 4.1 and of GDPA-AT on improving model robustness in Section 4.2. To evaluate the inference efficiency, we also compare the run-times of GDPA and state-of-the-art attack algorithms in Section 4.3. All our experiments are performed with PyTorch on Nvidia RTX GPUs. Our source code is provided as part of the supplementary materials.
Experimental Setup
We evaluate GDPA and GDPA-AT on three benchmark datasets: VGGFace [31], Traffic Sign [8] and ImageNet [5]. To evaluate GDPA's attack performance, we compare GDPA with LAVAN [17] and ROA [37], two state-of-the-art patch attack algorithms that generate patches via iterative optimization. Following their experimental settings, we run LAVAN and ROA for 50 optimization iterations with a learning rate of 4. For the adversarial defense experiments, we compare GDPA-AT with DOA [37] and PGD-AT [22]. The former is a state-of-the-art defense algorithm for patch attacks, while the latter is a well-established defense algorithm for adversarial attacks. We evaluate the robustness of the models under the eyeglasses attack [31] and the sticker attack [8]. Following the settings in DOA [37], we use 70 × 70 patches with stride 5 for VGGFace and 7 × 7 patches with stride 2 for Traffic Sign to generate ROA attacks. We set ε = 16 for PGD-AT since this yields the best results for PGD-AT. We use attack success rate (ASR) [6] as the metric to evaluate the effectiveness of an attack, and classification accuracy to evaluate the robustness of a model under adversarial attack. Details of the benchmark datasets, high-profile patch attacks, network architectures and training procedures can be found in the Appendix.
Dynamic Patch Attack
We first evaluate the performance of GDPA on non-targeted and targeted patch attacks and compare it with the state of the art: LAVAN [17] and ROA [37]. We provide results for two versions of ROA: ROA-Exh and ROA-Grad, where the former exhaustively searches for a patch location in images with a fixed stride, and the latter uses the magnitude of the gradient as the sensitivity of regions to identify top regions and accelerate the location search. We evaluate the effectiveness of the attack algorithms when perturbing different percentages of pixels. To interpret the results, we also visualize the perturbed images generated by GDPA. Table 1 reports the ASRs of GDPA and the competing algorithms for non-targeted and targeted patch attacks. The ASRs of an attack algorithm are evaluated on a model trained with cross-entropy (CE) loss when attacked with patches of different sizes (1%, 2%, 5% or 10% of pixels). Specifically, we use square patches of width 3, 5, 7, 10 for Traffic Sign and 23, 32, 50, 71 for VGGFace and ImageNet. For targeted attacks, we choose the first class of each of the three datasets as the target class, i.e., "AddedLine", "Aamir Khan" and "tench, Tinca tinca", respectively. As expected, the larger the patch size, the higher the ASR achieved by all patch attack algorithms. In most cases, GDPA achieves higher ASRs than the competing algorithms.
Visibility α vs. ASR We further investigate the impact of the visibility parameter α of Eq. 5 on GDPA's ASR. The results on VGGFace are shown in Figure 5, where we consider different patch sizes. As expected, when α increases, the attack strength of GDPA increases for all patch sizes. Notably, when the patch size is 5% or 10% of pixels, GDPA reaches nearly its highest ASRs once α ≥ 0.6, indicating that when patches are sufficiently large, a less visible patch can still attack a model successfully. Example perturbed images generated by GDPA with different α are provided in the Appendix.
Dynamic vs. Random Patch Location Table 2 compares the GDPA framework with random patch locations versus dynamic patch locations. As we can see from the table, GDPA achieves higher ASR with dynamic patch locations than with random locations. This shows that the learned image-dependent dynamic locations contribute to the superior performance of GDPA.
Dynamic Patch Adversarial Training
Next, we validate the robustness of models trained by GDPA-AT against various adversarial patch attacks. Specifically, we report the results of GDPA-AT trained models against the eyeglasses attack, the sticker attack and LAVAN, and compare them with state-of-the-art defense methods. Table 3 reports the accuracies of robust models trained by different defense algorithms against three types of patch attacks: 1) the eyeglasses attack on VGGFace, 2) the sticker attack on Traffic Sign and 3) LAVAN on ImageNet. As can be seen, PGD-AT, a well-established defense method for conventional adversarial attacks, is not robust against the three patch attacks, which is consistent with the results reported in [37]. While both DOA and GDPA-AT improve robustness over PGD-AT significantly, GDPA-AT achieves substantially higher accuracies than the two variants of DOA.
Inference Speed
Besides the improved attack and defense performance of GDPA, another advantage is its superior inference speed in generating attacks compared to optimization-based methods such as PGD [22] and ROA [37]. For a quantitative comparison of inference time, we evaluate the run-time of GDPA, PGD and ROA on the VGGFace test dataset (470 images). GDPA needs one forward propagation to generate a patch attack, while we follow the settings of ROA and PGD and run 50 optimization iterations to generate their attacks. As shown in Table 4, GDPA is about 40x faster than PGD and 47x faster than ROA.
Additional Experimental Results
As GDPA is a generic attack algorithm, we conduct additional experiments to evaluate its performance with different configurations and to validate some design choices. Due to the page limit, details are relegated to the Appendix.
Conclusion
This paper introduces GDPA, a novel dynamic patch attack algorithm, that generates patch pattern and patch location altogether for each input image. Due to its generic formulation, GDPA can generate dynamic/static and visible/invisible patch attacks. GDPA is end-to-end differentiable, which entails an efficient optimization and easy integration for adversarial training. We validated our method on multiple benchmarks with different model architectures. GDPA demonstrates superior ASR over strong patch attack methods, and the adversarially trained model with GDPA is more robust to high-profile patch attacks. Moreover, GDPA is 40-50x faster than competing attack algorithms, making it a highly effective attack and defense algorithm.
Acknowledgment
We would like to thank the anonymous reviewers for their comments and suggestions, which helped improve the quality of this paper. We also gratefully acknowledge the support of Cisco Systems Inc. through its research fund for this work.
A Experimental Details
We first describe the three benchmark datasets and target models used in our experiments. These datasets are used to train our GDPA generator and robust models via adversarial training, and to evaluate the performance of patch attacks.
A.1 VGGFace
Dataset The VGGFace dataset [26] is a benchmark for face recognition, containing 2,622 subjects and 2.6 million images in total. As in DOA [37], we choose 10 subjects and sample face images containing only those individuals. We process the data to a size of 224 × 224 by standard crop-and-resize, and perform a class-balanced split to generate training, validation, and test datasets with ratio 7:2:1. As a result, we obtain 3178, 922 and 470 images for training, validation and test, respectively. The training set is used to train the target model, the GDPA generator and robust models with adversarial training. Likewise, the test set is used to evaluate the target model, the performance of patch attacks and adversarial defense.
Target Model We use the VGGFace CNN model [26] as the target classifier in our experiments. We use standard transfer learning on our processed dataset, keeping the convolutional layers of the VGGFace CNN model but adjusting the number of output neurons of the last fully connected layer to 10. In order to use the pre-trained weights of the convolutional layers of the VGGFace CNN model, we convert the images from RGB to BGR and subtract the mean value [129.2, 104.8, 93.6]. We set the batch size to 64 and use the Adam optimizer with an initial learning rate of 1e-4. We drop the learning rate by a factor of 0.1 every 10 epochs.
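The transfer-learning recipe above is straightforward to sketch in PyTorch; torchvision's VGG16 is used here only as a runnable stand-in for the VGGFace CNN backbone, while the BGR conversion, mean subtraction and optimizer settings follow the text:

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in backbone: torchvision's VGG16 in place of the VGGFace CNN weights
# (an assumption of this sketch).
model = models.vgg16(weights=None)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 10)  # 10 subjects

# BGR conversion and mean subtraction, matching the stated pre-training stats.
mean_bgr = torch.tensor([129.2, 104.8, 93.6]).view(1, 3, 1, 1)
def preprocess(x_rgb):                     # x_rgb: (B, 3, H, W), values in 0-255
    return x_rgb[:, [2, 1, 0]] - mean_bgr  # flip channel order RGB -> BGR

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
```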
For hyperparameter tuning and model selection, we track the accuracy on the validation set to avoid overfitting. We train the model on the training set for 30 epochs and obtain an accuracy of 98.94% on the test data.
A.2 Traffic Sign
Dataset For a fair comparison with DOA [37], we pick the same 16 traffic signs from the LISA dataset [23], with 3,509 training and 1,148 validation images. Following the prior works [8,37], we further sample 40 stop signs from the validation set as the test data to evaluate the performance of stop sign classification. Similarly, all the data are processed by standard crop-and-resize to 32 × 32 pixels. As with VGGFace, we use the training set to train the target model, the GDPA generator and robust models with adversarial training. We use the test set to evaluate the performance of the target model, patch attacks and adversarial defense.
Target Model We use the LISA-CNN [8] as the target model, which contains three convolutional layers and one fully-connected layer. We use the Adam Optimizer with initial learning rate 0.1 and drop the learning rate by 0.1 every 10 epochs. We set the batch size to 128. After 30 epochs, we achieve an accuracy of 98.69% on the validation set, and 100% accuracy on the test data.
A.3 ImageNet
Dataset ImageNet [5] is a well-known large-scale object recognition benchmark. To develop the training and validation sets used to train and evaluate the GDPA generator and robust models with adversarial training, we follow Moosavi-Dezfooli et al. [24] and select a subset of 10,000 images from the ImageNet training set (randomly choosing ten images for each class) as our training set, and use the whole ImageNet validation set (50,000 images) as our validation set.
Target Model Following Poursaeed et al. [27], we use a pre-trained VGG19 model [32] from the PyTorch library as the target model. This model achieves an accuracy of 72.4% on the validation set.
B Patch Attacks
Eyeglasses Attack This is an effective physically realizable patch attack developed by Sharif et al. [31]. It first initializes the eyeglass frames with 5 different colors, and chooses the color with the highest cross-entropy loss as the starting color. For each update step, it divides the gradient value by its maximum and multiplies the result by the learning rate. Then it keeps only the gradient values in the eyeglass frame area. Finally, it clips and rounds the pixel values to keep them in the valid range. We evaluate the eyeglasses attack on the test set of VGGFace.
Sticker Attack Proposed by Eykholt et al. [8], this is another physically realizable patch attack. It initializes the stickers on the stop signs with random noise at fixed locations. For each update step, it uses the Adam optimizer with learning rate 0.1 (and default parameters) to maximize the classification loss of the target model. As with the other patch attacks, adversarial perturbations are restricted to the mask area; in our experiments, we use the same collection of small rectangles as in [8]. We evaluate the sticker attack on the test set of Traffic Sign.
C GDPA Network Architecture and Training Details
Network Architecture For VGGFace and ImageNet, both having images of size 224 × 224, we adopt the encoder network structure G_E from the work of image-to-image translation [43]. For the Traffic Sign dataset, which has images of size 32 × 32, we adopt a CNN of 3 convolutional layers with kernel size 4 and stride 2 as the encoder network G_E. We then use a neural network of one fully-connected layer with output size 3 × w′ × h′ as the pattern decoder G_P, and a neural network of one fully-connected layer with output size 2 as the location decoder G_L.
GDPA Training Details Following Algorithm 1, we train the GDPA generator G by using the Adam optimizer with an initial learning rate of 0.1 for VGGFace and ImageNet, and 0.01 for Traffic Sign. We drop the learning rate by 0.2 every 10 epochs and train the generator for 30 epochs. We set the batch size to 32 and β to 3000, which we find works well across various architectures and datasets in our experiments.
GDPA-AT Training Details Following Algorithm 2, we train the GDPA generator G and target model T iteratively. We initialize the generator with a pre-trained GDPA generator and the target model with a cross-entropy trained model. We set w′ and h′ to 70 for VGGFace and ImageNet and to 7 for Traffic Sign during the adversarial training. We use the Adam optimizer to train the generator and the target model, with a learning rate of 0.0001 for both VGGFace and Traffic Sign and 0.001 for ImageNet, and drop the learning rate by a factor of 0.2 every 50 epochs. We use batch size 32 and train for 1000 epochs for VGGFace, 100 epochs for ImageNet and 5000 epochs for Traffic Sign.
D Ablation Study
D.1 Generate pattern vs. p
Instead of generating pattern from the GDPA generator, we can generate p directly by adjusting the output size of the pattern decoder G_P to 3 × w × h. Directly generating p simplifies the GDPA pipeline, as we no longer need to translate pattern to produce p in two steps. Thus, it is worth investigating which design choice works better. Table 5 shows the results comparing these two design choices. As we can see, generating pattern achieves significantly higher ASRs than generating p directly. We conjecture that this is because p has a larger space to optimize over than pattern, and is thus more difficult to optimize. Hence, in our GDPA pipeline we generate pattern first and then translate pattern to generate p.
D.2 Visibility α vs. ASR
In Section 4.1, we investigate the impact of visibility parameter α of Eq. 6 on GDPA's ASR. Figure 6 visualizes some example perturbed images generated by GDPA with different α's and patch sizes. As we can see, by using different α's, we can control the visibility of GDPA attack.
D.3 Effect of β
The β in Eq. 1 controls the slope of the tanh that constrains l_x and l_y to the range [−1, 1]. It is critical to find an appropriate value of β to train the GDPA generator. Intuitively, a β value that is too large or too small causes different training difficulties. If β is too small, the tanh activation function saturates quickly and pushes l_x and l_y to the saturated values of −1 or 1, which correspond to the corners of an image. On the other hand, if β is too large, the tanh activation function has a slow transition from −1 to 1, which may not be able to push l_x and l_y away from the origin [0, 0] of an image, and likely causes ineffective training as well. Therefore, we treat β as a hyperparameter and tune it on the validation set. The results with different values of β on VGGFace are shown in Table 6 and Figure 7. It can be observed that we get the highest ASR with β = 3000. With small βs such as 100 or 500, the patch location saturates at the corners of images; with large βs such as 5000 or 7000, the learned patch locations are close to the origin for most of the images. We find β = 3000 works well across a variety of architectures and datasets, and thus set it as the default value.
E Eyeglass Attack Visualization
Figure 9 shows example results when using the eyeglasses attack to evade a standard CE-trained model (a) and the GDPA-AT trained model (b). As we can see, the eyeglasses attack fails against the GDPA-AT trained model: it is not able to generate effective adversarial patterns on the eyeglass frames in 5 out of 6 cases, while being very successful against the standard CE-trained model.
F Generating Static Patch Attack with GDPA
Contrary to dynamic patch attack, static patch attack uses a fixed patch location for all images. To conduct a static patch attack with GDPA, we set l_x and l_y to fixed values instead of generating them from G_L. To compare the performance of dynamic and static patch attacks, we evaluated static GDPA at 25 fixed locations, using a patch size of 32 × 32 (2% of pixels). Figure 8 shows the ASRs of the static patch attacks at the 25 locations. As we can see, patch location is an important factor in the performance of static patch attack. Notably, patch locations around the area of the eyes have the best ASRs. The highest ASR obtained by the static patch attack is 73.9%, while dynamic GDPA achieves 76.4%, demonstrating the effectiveness of dynamic GDPA.
G Generating Adversarial Attack with GDPA
Thanks to its generic formulation, we can also generate conventional adversarial attacks with GDPA by adjusting its pipeline slightly. To do this, we use a fixed mask of value 0.5 for all image pixels, and update the generator to produce p of the same size as the image directly. To make sure the adversarial noise is within a small L∞-norm bound, we multiply p by ε/255 such that the adversarial noise is bounded by ε/255. Finally, we scale the perturbed image by 2 and clip its pixel values to [0, 1] to create an adversarial example. We call this GDPA variant GDPA-ADV.
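The arithmetic of this construction is compact: since 2 · [(1 − 0.5)x + 0.5p] = x + p, the scheme reduces to a clipped additive perturbation. A sketch, assuming (our assumption) that the generator output p_raw lies in [−1, 1]:

```python
import torch

def gdpa_adv(x, p_raw, eps=8.0):
    """Sketch of the GDPA-ADV construction: fixed mask of 0.5, perturbation
    scaled into an L_inf ball of radius eps/255, blend rescaled by 2 and
    clipped to [0, 1]. p_raw in [-1, 1] is an assumption of this sketch."""
    p = p_raw * (eps / 255.0)                  # ||p||_inf <= eps / 255
    x_adv = 0.5 * x + 0.5 * p                  # (1 - m) * x + m * p with m = 0.5
    return torch.clamp(2.0 * x_adv, 0.0, 1.0)  # equals clamp(x + p, 0, 1)
```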
We then compare the attack performance of GDPA-ADV with PGD [22] and PI-FGSM [10] on VGGFace and ImageNet. The PGD attack is generated with learning rate 10 for 20 iterations. The results with different ε's are provided in Table 7. We observe that GDPA-ADV achieves slightly higher ASRs than PGD in all the cases considered. Compared with the more competitive method PI-FGSM, GDPA-ADV has slightly worse ASRs, except on VGGFace when ε = 6. Some adversarial examples generated by GDPA-ADV on VGGFace are visualized in Figure 10.
H GDPA-AT against Adversarial Attack
We evaluate the robustness of models under conventional adversarial attacks, such as the PGD attack [22]. The results are reported in Table 8, where different PGD attack strengths ε have been considered. We set the step size to 20 and the number of iterations to 300. It can be observed that GDPA-AT achieves significantly higher robustness than DOA against the PGD attack. More interestingly, the accuracies that GDPA-AT achieves are almost on par with PGD-AT, even though GDPA is a patch attack algorithm. We believe this is because, during the adversarial training process, GDPA generates adversarial patches to attack the classifier iteratively; even though each patch attack is localized, the combination of all patch attacks generated during the iterative process resembles the whole-image attack that PGD usually produces. For this reason, the model trained by GDPA-AT can defend against conventional adversarial attacks. These results demonstrate that GDPA-AT is a generic defense algorithm that can defend against both patch attacks and conventional adversarial attacks, while PGD-AT and DOA each fail on one of them.
I Cross Attacks and Defenses
In this section, we compare the defense performance of PGD-AT, DOA and GDPA-AT when they are attacked by their corresponding attack algorithms. In this experiment, the PGD attack uses ε = 8, and ROA and GDPA use 10% of pixels as the patch size. The results on VGGFace are shown in Table 9. As we can see, PGD-AT achieves the highest robustness under the PGD attack, but is not very robust under the ROA and GDPA attacks. On the other hand, DOA achieves decent robustness under the ROA and GDPA attacks, but fails completely under the PGD attack. Notably, GDPA-AT is the only defense algorithm that achieves almost the highest robustness under all three attacks. It is expected that GDPA-AT would be robust under the ROA and GDPA attacks since both are patch attacks. An explanation of the robustness of GDPA-AT under the PGD attack is provided in Section 4.2.
J Additional Results on Targeted Attack Figure 11 provides additional perturbed images generated by targeted GDPA attack on VG-GFace. The top row shows the target subjects, while the bottom two rows show the perturbed | 2021-11-09T02:16:34.829Z | 2021-11-08T00:00:00.000 | {
"year": 2021,
"sha1": "ad25390afd84fbcfa88da739a09e1ddf2340bf77",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ad25390afd84fbcfa88da739a09e1ddf2340bf77",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
219184112 | pes2o/s2orc | v3-fos-license | Effects of MP-AzeFlu enhanced by activation of bitter taste receptor TAS2R
MP-AzeFlu is a relatively new pharmaceutical drug used in the treatment of allergic rhinitis. It comprises azelastine hydrochloride (AZE), a potent histamine H1-receptor antagonist, and fluticasone propionate (FP), a corticosteroid. Its somewhat bitter taste (often considered a disadvantage) can be attributed to AZE. We here hypothesize that MP-AzeFlu may induce some of its beneficial effects through activation of bitter taste receptors (Tas2R), which have recently been described in human airways. In the nose, Tas2Rs induce secretion of antimicrobial peptides and increase ciliary activity, while in the lung they cause airway smooth muscle relaxation. The mechanisms behind Tas2R-mediated effects are not yet fully known. In order to evaluate the role of Tas2R in the effects induced by MP-AzeFlu, the dilatory response of pre-contracted isolated airways from Balb/c mice was investigated in tissue bath myographs in the presence or absence of various well-characterized pharmacological antagonists or their corresponding vehicles. MP-AzeFlu caused a potent dose-dependent relaxation of pre-contracted airways, an effect probably mediated by its AZE component. The dilatory effect of MP-AzeFlu and AZE mimicked the response induced by the Tas2R agonist chloroquine, but was independent of histamine receptors (H1, H2 and H3), prostaglandins, cAMP and cGMP, all known to be common pathways for airway dilation. Other bitter-tasting antihistamines (i.e. olopatadine and desloratadine) also relaxed airway segments. These data support the notion that MP-AzeFlu has the ability to activate Tas2R in the same way as chloroquine. The effect appears to be mediated by AZE, but not via the histamine receptor. Activation of Tas2R by MP-AzeFlu may contribute to its superior efficacy over FP observed in controlled clinical trials in patients with moderate/severe allergic rhinitis.
To the editor
MP-AzeFlu (Dymista®, Meda, Solna, Sweden) represents a new class of allergic rhinitis (AR) treatment, comprising an intranasal antihistamine (azelastine hydrochloride [AZE]) and an intranasal corticosteroid (INS) (fluticasone propionate [FP]). Its efficacy as a first-line therapy has been documented in randomized controlled trials (RCTs) [1,2]. Over-additive effects of MP-AzeFlu have been observed for certain symptoms, most notably nasal congestion. This may be because MP-AzeFlu is more than a simple fixed-dose combination product, comprising anti-histaminic, mast-cell-stabilizing, anti-leukotriene and anti-inflammatory properties [3]. MP-AzeFlu is well tolerated by patients both in RCTs and in real life, but a bitter taste (due to azelastine) has been reported and traditionally classified as an adverse event. We hypothesized that MP-AzeFlu may induce some of its beneficial effects via a bitter-taste-receptor-mediated pathway, specifically activation of bitter taste sensing type 2 receptors (TAS2R).
TAS2Rs belong to the family of G-protein coupled receptors (GPCRs), which recognize a wide range of substances. TAS2Rs were previously found in the oral cavity, where their function is to prevent the intake of harmful substances, which often taste bitter [4]. More recent evidence shows that these receptors are present both in the human upper airway mucosa and in lower airway smooth muscle [5][6][7][8], and have functions independent of taste. Their activation can increase ciliary beat frequency and the release of antimicrobial peptides [4], inhibit histamine and prostaglandin release from human mast cells [7], and induce airway dilation in the lower airways [5,6]. The present study was designed to explore whether MP-AzeFlu has the ability to activate bitter taste receptors without involvement of the histamine system, using an in vitro model of isolated murine airways, which are known to have a weak histamine receptor system (for methods see Additional file 1). Since no TAS2R antagonists currently exist, TAS2R agonism was inferred by (i) exclusion of other pathways and (ii) mimicry of the effects of a known TAS2R agonist (i.e. chloroquine).
The results show that MP-AzeFlu and azelastine, but not fluticasone, induced a strong relaxation of carbachol (CCh)-induced contractions, and a modest relaxation of the tone induced by the thromboxane receptor agonist U-46619 in mouse trachea (Fig. 1a, b). The bitter taste receptor TAS2R agonist chloroquine (acting on the subtypes TAS2R3 and TAS2R10 [6]) dilated isolated murine airways in the same dose-dependent manner as MP-AzeFlu and azelastine. The chloroquine-induced dilation was more potent when the segments had been pre-contracted with carbachol than with U-46619 [6]. At high concentrations, fluticasone induced a weak relaxation of CCh- and U-46619-pre-contracted tissues, but this was likely due to the vehicle used, as the vehicle alone induced a similar relaxation.
Though it is well known that mice have a weak histamine receptor system, the histamine receptors H1, H2 and H3 were separately pharmacologically inhibited prior to addition of azelastine, to assess the role of this system in the relaxation. Chloroquine was used as a comparison and to evaluate the bitter taste effect. The relaxations induced by MP-AzeFlu, azelastine and chloroquine were unaffected by the presence of mepyramine, metiamide and thioperamide, known to block the activity of the H1, H2 and H3 receptors, respectively (Additional file 2: Fig S1). The mechanisms behind bitter-taste-receptor-mediated relaxations are not yet known. Evidence suggests that there is more than one common pathway for all TAS2Rs [6]. In this study, pathways associated with relaxation of the smooth muscle, as well as possible bitter taste transduction pathways, were evaluated. Nitric oxide (NO) and carbon monoxide (CO) increase intracellular cGMP, and NO activates K+ channels, all of which lead to smooth muscle relaxation. Prostaglandins are also known to mediate relaxation through increases in intracellular cAMP [5]. Therefore, L-NAME (which blocks NO synthase), zinc protoporphyrin-9 (which inhibits CO synthesis) and indomethacin (which inhibits prostaglandin production) were added to pharmacologically inhibit these pathways. However, none of these agents affected the relaxation (Additional file 2: Fig S2).
The relaxation seen with MP-AzeFlu was not a general antihistamine effect. To test the possibility of an antihistamine class effect, the impact of four other antihistamines (desloratadine, fexofenadine, olopatadine and levocabastine) on CCh-induced pre-contraction was investigated using the same setup used for MP-AzeFlu and azelastine. The relaxatory capacity of azelastine at concentrations comparable to the other antihistamines was additionally assessed. The results rule out a general antihistamine effect, instead strengthening the bitter taste theory. Olopatadine [9] and desloratadine [10], both known for their bitter taste, relaxed airway segments in a way that resembled the effects of MP-AzeFlu and azelastine (Fig. 1c, d). Fexofenadine and levocabastine, which are not associated with a bitter taste, did not induce relaxation (Fig. 1c, e).
The effects induced by bitter taste activation in the airways are, as stated previously, not fully known. A possible candidate for mediating the presently postulated effects of bitter taste receptors in allergic rhinitis may be epigenetic histone modification. There are at least two levels at which the role of histone modifications is manifested. One is the regulation of cells that contribute to the allergic inflammation (T cells and macrophages) and of those that participate in airway remodeling (myofibroblasts). The other is the direct association between histone modifications and allergic phenotypes [11]. Hence, it is tempting to speculate that MP-AzeFlu, in addition to its previously well-documented anti-allergic effects, could also function as an inhibitor of histone-modifying enzymes, which might explain its over-additive effects in allergic rhinitis.
The notable effect that MP-AzeFlu had on the smooth muscle could, based on our data, be due to the activation of TAS2R. Activation of these receptors in the nose has been shown to increase ciliary beat frequency and the release of antimicrobial peptides, leading to a reduction of nasal congestion and of biofilm formation [4]. However, TAS2R activation may also induce relaxation of vascular smooth muscle. Relaxation of vascular smooth muscle in the nasal passages would dilate those vessels, causing an increase in congestion. Here, the general antihistamine properties of MP-AzeFlu may be of importance, as antihistamines can induce smooth muscle contraction [12]. The use of MP-AzeFlu in AR would then lead only to the beneficial effects of TAS2R activation.
In summary, MP-AzeFlu is a potent dilator of pre-contracted airways, an effect mediated by the azelastine component. It is clear that this effect is not the result of histamine receptor activation, and the study concludes that this is not a general mechanism of antihistamines, but rather a mechanism specific to bitter antihistamines. The findings of this work strongly suggest that MP-AzeFlu can activate bitter taste receptors in the same way as chloroquine. | 2020-06-03T14:30:37.997Z | 2020-06-03T00:00:00.000 | {
"year": 2020,
"sha1": "abd944baa542fb145b8d8ede69ecdf5d3b04546e",
"oa_license": "CCBY",
"oa_url": "https://aacijournal.biomedcentral.com/track/pdf/10.1186/s13223-020-00438-w",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "abd944baa542fb145b8d8ede69ecdf5d3b04546e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17645095 | pes2o/s2orc | v3-fos-license | On-farm dynamic management of genetic diversity: the impact of seed diffusions and seed saving practices on a population-variety of bread wheat
Since the domestication of crop species, humans have derived specific varieties for particular uses and shaped the genetic diversity of these varieties. Here, using an interdisciplinary approach combining ethnobotany and population genetics, we document the within-variety genetic structure of a population-variety of bread wheat (Triticum aestivum L.) in relation to farmers' practices, to decipher their contribution to crop species evolution. Using 19 microsatellite markers, we applied two complementary graph-theory-based methods to analyze population structure and gene flow among 19 sub-populations of a single population-variety [Rouge de Bordeaux (RDB)]. The ethnobotanical approach allowed us to reconstruct the RDB history, including diffusion and reproduction events. We found that the complex genetic structure among the RDB sub-populations is highly consistent with the structure of the seed diffusion and reproduction network drawn from the ethnobotanical study. This structure highlights the key role of farmer-led seed diffusion through founder effects, selection and genetic drift arising from human practices. An important result is that the genetic diversity conserved on farm is complementary to that found in the genebank, indicating that both systems are required for more efficient crop diversity conservation.
Introduction
Ten thousand years ago, human societies began to domesticate wild species so they could be easily cultivated, more productive, and better adapted to their needs (Diamond 2002). As the result of interactions between the environment, human uses and farming practices, these cultivated species were subjected to strong bottlenecks through genetic drift and artificial selection (Purugganan and Fuller 2009). This dynamic led to genetic differentiation in time and space, particularly at the molecular level, as shown by different levels of diversity between species and varying degrees of genetic structure, indicating a complex history (Haudry et al. 2007). The genetic diversity and structure of crops are typically studied at different spatial scales, such as the village level, which allows the characterization of the diversity maintained by a local community (Pressoir and Berthaud 2004). The within-variety component of the overall genetic diversity of a cultivated species is particularly sensitive to recent changes in farming practices. Modern methods of plant breeding, with the development of pure lines, caused a drastic reduction of the within-variety genetic diversity present in farming systems before the industrialization of agricultural systems (Roussel et al. 2004; Thomas et al. 2011). In addition, seed diffusion became linear and top-down, from the plant breeder to the seed company and then to the farmer, and farmers purchased seed each year, stopping the adaptation process that occurs when farmers save and replant seeds of genetically diverse population-varieties (Bonneuil 2008).
In traditional farming, human and natural processes still strongly interact to determine the rate of change in population-varieties (Dyer and Taylor 2008). Two levels of human processes should be taken into account: first, the seed diffusion between farmers; second, cultural practices, including selection (also termed 'artificial selection' to distinguish it from 'natural selection'), and seed storage conditions. Because farmers use their own saved seed for several years, seed diffusions are not very frequent (Perales et al. 2003). Farmers' selection is generally applied on inflorescences (ears or panicles), which may induce kin-structured founder effects, as seeds in a single inflorescence are full or half-sibs. This kin-structured founder effect can cause an increase in differentiation among populations (Louette et al. 1997;Ingvarsson and Giles 1999). Environmental processes also include stochastic events such as catastrophic weather (strong drought, flood...). Thus, an extinction event can be the result of a climatic disaster or of a farmer's decision not to grow a particular variety (sub-population) in a particular field and year. Local extinction occurs when a seed lot is not re-sown for various reasons. Colonization occurs when a new population arrives in a new farm after a diffusion event between two farmers. Farmers generally receive seed from a single source (propagule pool-like situation) (Rice et al. 1998) or from a limited number of sources (Almekinders et al. 1994;Zeven 1999;Perales et al. 2003;Alvarez et al. 2005;Badstue et al. 2007;Hodgkin et al. 2007; Barnaud et al. 2008).
In industrialized countries, although landraces and folk varieties are no longer cultivated by the majority of farmers, seed saving and seed exchange networks have recently emerged in the context of organic agriculture [reviewed by Thomas et al. (2011)]. Organic farmers, faced with a shortage of varieties meeting their needs in terms of agronomic and quality traits, have begun cultivating varieties obtained from genebanks or from elders. Farmers within these associations generally exchange small quantities of seed, which are then multiplied on farm for their own use. While these seed exchanges share characteristics with the informal seed systems of traditional agricultures, they also have specificities, as they are situated in the context of modern organic agriculture in developed countries (recent social connections among farmers through seed circulation, renewal of communities of practice, long-distance seed exchanges, etc.) (Demeulenaere and Bonneuil 2012).
The role of this type of seed exchange network in the conservation of genetic diversity in an industrialized context can be important but is not yet well characterized. In this paper, we develop an interdisciplinary approach combining genetics and ethnobotany to assess, for the first time, the level of genetic diversity and the population structure at the variety level, using the example of Rouge de Bordeaux (RDB), a folk variety of bread wheat distributed among a network of actors in France. Our goal was to assess to what extent seed diffusion and farming practices influence the genetic diversity of this variety and its population structure. Outcomes from this research could contribute to recommendations on management strategies for crop diversity.
Materials and methods
Population origin and sampling strategy

Initially, a socio-anthropological study focused on the dynamics of seed circulation within the social network composed of farmers from the national Réseau Semences Paysannes organization (literally 'Peasant Seed Network', hereafter RSP), created in 2003 to revive on-farm management of seeds and to link concerned farmers' associations, and of the curator of the French National Genebank at Clermont-Ferrand (CLM). A snowball approach was used to trace back the seed circulation of bread wheat varieties among the different actors. This study revealed that RDB was one of the most popular varieties among farmers in the RSP (Bonneuil and Demeulenaere 2007).
Historical archives revealed that RDB probably appeared around 1865 in Lectoure, in south-western France, then spread toward Bordeaux (also in south-western France) and toward central France during the years 1870-1871 (Vilmorin-Andrieux Companie 1880). RDB was present in at least 75% of French departments in 1912 (Brétignière 1912). Afterward, its use began to decline as it was replaced by more productive varieties. Wheat varieties of the time were mostly genetically heterogeneous; for this reason, they are called population-varieties, following Bustarret's (1944) definition. RDB is thus a population-variety characterized by its ear type, which is red and awnless.
Relying on this information, we asked the genebank curator and some farmers cultivating RDB to provide us with one or more seed samples from their populations. The nomenclature used to identify each sample was as follows: the first three characters for the name of the seed lot provider and two characters for the year of the last harvest; one optional letter was added if two samples came from different seed management practices on the same farm in the same year. We obtained 19 seed samples from 11 actors distributed across France (for the privacy of the farmers, code names are used) (Table 1, Fig. 1).
Interviews focusing specifically on sampled populations of RDB were performed to obtain more detailed information about seed circulation and cultural practices. Applying the snowball approach to trace back the seed circulation of RDB, new actors mentioned during the interviews were contacted and interviewed. For each dissemination event, we recorded the actors involved, the date, and when this information was available, the quantity of seed diffused.
Although farmers involved in seed systems have received increasing attention as potential partners for participatory plant breeding and development programs (McGuire 2008), only a few studies depict these systems through the analysis and graphic representation of seed exchange networks (Subedi et al. 2004; Bonneuil et al. 2006; Aw-Hassan et al. 2008; Emperaire et al. 2008). In these studies, seed exchange networks between farmers were drawn in which nodes correspond to farmers and links represent seed flows. Depending on the study, a multi-species or multi-variety seed exchange network was represented. In this study, to better understand the consequences of actor practices on the genetic structure of the crop, we focused on the partial seed diffusion and reproduction (number of generations) networks at the population-variety level (RDB). In our case, a node corresponds to a wheat population seed lot and a link represents either a seed flow or a reproduction event.
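To make this representation concrete, the following minimal sketch (illustrative only; farm names, years, and the seed quantity are hypothetical) encodes such a network with nodes as seed lots and labeled edges for diffusion and reproduction events:

```python
# Illustrative sketch of a seed diffusion and reproduction network (SDRN).
# Node names and edge data are hypothetical; only the structure follows the
# text: nodes are seed lots, edges are reproduction (year-to-year on the
# same farm) or diffusion (between actors) events.
import networkx as nx

sdrn = nx.DiGraph()
# Reproduction events: same farm, successive harvests.
sdrn.add_edge("FARM_A_2003", "FARM_A_2004", event="reproduction")
sdrn.add_edge("FARM_A_2004", "FARM_A_2005", event="reproduction")
# Diffusion event: seed passed from one actor to another.
sdrn.add_edge("FARM_A_2004", "FARM_B_2005", event="diffusion", quantity_kg=2)

# Connected components of the undirected version correspond to the
# independent networks (e.g., SDRN1 and SDRN2) discussed later.
print(list(nx.connected_components(sdrn.to_undirected())))
```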
Molecular analyses
Table 1 legend: Seed sample name: the first three characters represent the seed lot provider, two numerals the year of the last harvest, and one optional character was added if more than one sample was provided by the same farmer in the same year; Location: the number used in Fig. 1 to localize the origin of the seed samples; Receipt year: year of the last diffusion (colonization) event; Harvest year: year of the last harvest of the seed sample; No. of reproduction cycles: number of reproduction cycles since the last diffusion event; Coordinates: geolocalization data of the seed samples; Population size: qualitative population size of the sampled populations based on the cultivated area (small = 1-10 m², medium = 10-100 m², large > 100 m²).
SSR markers described by Röder et al. (1998) (among them Xgwm539 and Xgwm642), one marker (wmc231) described by Somers et al. (2004), and a bi-locus marker (CFD17), amplifying loci on two chromosomes, described by Guyomarc'h et al. (2002), were used for genotyping the 586 individuals studied. This set of 19 markers covers 19 of the 21 chromosomes of bread wheat; only chromosomes 1A and 6B were not covered. PCR protocols were adapted from Röder et al. (1998) and Guyomarc'h et al. (2002): an initial denaturation (3 min at 94°C), followed by 35 cycles of 30 s at 94°C for denaturation, 30 s at 50°C (or between 45 and 60°C, depending on the primer) for annealing, and 30 s at 72°C for extension, followed by a final extension step of 5 min at 72°C. Amplified fragments were separated on an ABI 3130xl semi-automatic sequencer (Applied Biosystems, Courtaboeuf, France) and analyzed with GeneMapper 3.7 (Applied Biosystems, Courtaboeuf, France).

Flowering time is a major adaptive trait in plants, and in particular in wheat, because it determines the environmental conditions of reproduction with respect to climate and pathogen pressures (Remington and Purugganan 2003; Goldringer et al. 2006; Rhoné et al. 2008, 2010). The VRN-1 gene has been shown to be strongly associated with flowering time in wheat (Yan et al. 2003; Rhoné et al. 2010; Rousset et al. 2011). In addition, wheat experimental populations cultivated for several years in either northern or southern France have shown significant contrasting responses in terms of allele and haplotype frequency variation (Rhoné et al. 2008, 2010). Thus, to search for possible adaptation to climatic conditions in the populations, four VRN-1 polymorphic sites located in the three orthologous copies of VRN-1 were genotyped: (i) a duplication, insertion, and deletion in the promoter of VRN-1A (denoted VRN-1Apr in the following) revealed by Yan et al. (2004); (ii) a substitution in the seventh exon of VRN-1A (VRN-1Aex7) revealed by Sherman et al. (2004); (iii) a 4-kb deletion in the first intron of VRN-1B (VRN-1Bint1); and (iv) a 4-kb deletion in the first intron of VRN-1D (VRN-1Dint1), both revealed by Fu et al. (2005). For all the VRN-1 polymorphic sites, PCR conditions and PCR product digestion protocols were the same as defined by the authors. To detect variation at VRN-1Apr, forward primers were modified with an M13 extension according to Boutin-Ganache et al. (2001), and PCR amplifications were performed in the presence of a fluorescent-labeled M13 extension. The amplification products, loaded on 6.5% denaturing polyacrylamide gels, were analyzed on a LI-COR automated DNA sequencer (LI-COR Biosciences, Lincoln, Nebraska, USA). The variations at VRN-1Aex7 (a CAPS marker) and at VRN-1Bint1 and VRN-1Dint1 (presence or absence of the deletions) were revealed by migration on 2% and 0.8% agarose gels, respectively, and visualized under UV light.
Genetic analyses
Population structure was assessed at two levels, among and within populations.
Genetic structure among populations

The multivariate graph theory method Population Graphs, developed by Dyer and Nason (2004), was used to study the genetic structure among populations. This approach is derived from graph theory and aims to describe complex population structures based on the distribution of the genetic covariance among the studied populations, using the SSR molecular data. The individuals of each population define a multidimensional population centroid, each centroid defining a unique multidimensional coordinate that represents the average genetic individual within the population considered. The same pairwise distances as in AMOVA (Excoffier et al. 1992) were calculated, and a weighted, saturated Population Graph was drawn, where the weight corresponded to the distance. An informative topology was obtained by selecting an edge set that sufficiently described the among-population genetic covariance structure. Relying on genetic covariance properties and conditional independence, Whittaker (1990) proposed a statistical test to perform this edge selection, with an alpha level of 0.05 for the fit of the network after edge removal. The network was constructed using the GENETIC STUDIO software (Dyer 2009). To quantify differentiation among sampled populations, we used the conditional graph distance metric (cGD), which is estimated as the length of the shortest path connecting pairs of populations, following Dyer et al. (2010). Values of F_ST were also estimated for each pair of populations using Weir and Cockerham's θ estimator (Weir and Cockerham 1984) implemented in the GENETIX software (Belkhir et al. 2000).
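The edge-pruning step itself was performed with GENETIC STUDIO; as a hedged illustration of only the cGD step, the sketch below assumes a pruned, weighted Population Graph is already available (populations and weights are hypothetical):

```python
# Sketch of the conditional graph distance (cGD): the length of the
# shortest weighted path between two populations on the pruned
# Population Graph. Edge weights stand for retained pairwise distances.
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    ("POP1", "POP2", 1.2),
    ("POP2", "POP3", 0.8),
    ("POP1", "POP4", 3.5),
])

cgd = dict(nx.all_pairs_dijkstra_path_length(g, weight="weight"))
print(cgd["POP1"]["POP3"])  # 1.2 + 0.8 = 2.0
```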
To understand the general organization of the Population Graph, it was necessary to detect whether structural sub-units (communities) were associated with more highly interconnected parts of the network. A deterministic approach that detects potentially overlapping communities based on the weighted Clique Percolation Method (CPMw) was performed using Palla's algorithm implemented in the CFinder software (Adamcsek et al. 2006). In this approach, a k-clique is defined as a complete subgraph of k nodes all linked together (k − 1 edges per node). A community then corresponds to the union of all k-cliques that can be reached from one another through a series of adjacent k-cliques (where adjacent means sharing k − 1 nodes). The inverse of the distance matrix was used as the weight matrix for community detection. Communities can then be defined using an algorithm adapted to weighted networks (Farkas et al. 2007). The intensity threshold (I) and the clique size (k) need to be chosen with the lowest possible values while avoiding the detection of a single giant community. No giant community appeared for k equal to 3 without a fixed threshold for I, so the algorithm was run with these parameters.
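As an illustration of the community definition, the sketch below runs the unweighted Clique Percolation Method with k = 3 on a toy graph (the paper used the weighted variant, CPMw, in CFinder; networkx provides only the unweighted form):

```python
# Unweighted clique percolation: a community is the union of adjacent
# 3-cliques (triangles sharing k - 1 = 2 nodes). Toy graph: two triangles
# joined by a single bridge edge, which does not merge them.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

g = nx.Graph()
g.add_edges_from([
    ("A", "B"), ("B", "C"), ("A", "C"),   # triangle 1
    ("C", "D"),                            # bridge edge
    ("D", "E"), ("E", "F"), ("D", "F"),   # triangle 2
])

print(list(k_clique_communities(g, 3)))   # [{A, B, C}, {D, E, F}]
```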
Within-population genetic structure

Genetic diversity was studied for both the 19 neutral markers and the four loci (VRN-1Apr, VRN-1Aex7, VRN-1Bint1, and VRN-1Dint1) located in the three orthologous genes (VRN-1A, VRN-1B, and VRN-1D). The mean number of alleles (R_S), unbiased Nei's estimate of genetic diversity (H_e) (Nei 1978), mean observed heterozygosity (H_o), and the deviation from Hardy-Weinberg genotypic proportions (F_IS) were calculated with the GENETIX software (Belkhir et al. 2000). Genotype richness (also called polyclonality) was estimated as the number of unique genotypes divided by the number of individuals per population. Following Goldringer and Bataillon (2004), we estimated the effective population size (N_e) using the temporal method proposed by Waples (1989), which relies on the variance of allelic frequencies (F_c): N_e = (t_y − t_x) / (2F_c − 1/S_x − 1/S_y), where S_x is the number of individuals sampled at generation t_x (respectively, S_y individuals at t_y).
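For readers who want to reproduce these two quantities, here is a minimal sketch (hypothetical genotypes and parameter values) of Nei's (1978) unbiased gene diversity and of the temporal N_e estimator in the form given above:

```python
# (i) Nei's (1978) unbiased gene diversity for one locus in one population;
# (ii) temporal Ne estimator, Ne = (ty - tx) / (2*Fc - 1/Sx - 1/Sy).
from collections import Counter

def unbiased_nei_he(genotypes):
    """genotypes: list of (allele1, allele2) tuples for n diploid individuals."""
    n = len(genotypes)
    alleles = [a for g in genotypes for a in g]
    freqs = [c / (2 * n) for c in Counter(alleles).values()]
    return (2 * n / (2 * n - 1)) * (1 - sum(p * p for p in freqs))

def temporal_ne(fc, t_x, t_y, s_x, s_y):
    """Waples (1989) moment estimator based on the variance Fc of allele frequencies."""
    return (t_y - t_x) / (2 * fc - 1 / s_x - 1 / s_y)

# Hypothetical data: five individuals at one biallelic SSR locus.
print(unbiased_nei_he([("A", "A"), ("A", "B"), ("B", "B"), ("A", "B"), ("A", "A")]))
print(temporal_ne(fc=0.05, t_x=2003, t_y=2006, s_x=30, s_y=30))  # = 90.0
```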
The fine population structure was studied considering each genotype as two haplotypes. Haplotype reconstruction and inference of missing data were performed using the PHASE software (Stephens et al. 2001). Based on the methods of a recent paper, the MR algorithm was used. Runs consisted of 100 burn-in iterations, 100 main iterations, and a thinning interval equal to 1. The recombination rate between loci was set to 0.5 because all markers were on different chromosomes. Pairs of haplotypes were then selected using the best probability for each individual. This new dataset constituted a phased multilocus genotype (pMLG) dataset that was used with the Arlequin software (Excoffier and Lischer 2010) to compute the inter-haplotype distance matrix, that is, the number of differences between each pair of haplotypes. We drew a saturated weighted network with each node corresponding to a distinct haplotype and edges linking each pair of haplotypes. Then, a threshold of one difference between haplotypes was fixed to retain a link between two haplotypes. The haplotype network was drawn with the Pajek software (Batagelj and Mrvar 2002). Kamada-Kawai's force-based algorithm (Kamada and Kawai 1989) was used to provide the spatial distribution of the unconnected sub-networks, composed of sets of nodes connected together and further called connected components. Each connected component composed of more than two nodes was defined as an independent haplotype class; other haplotypes were defined as off-types (OT). The Minimum Spanning Network (MSN) obtained with these haplotypes was also drawn. The network representation of this MSN was produced with the Pajek software (Batagelj and Mrvar 2002), with each node corresponding to a distinct haplotype and one edge linking two haplotypes differing by a single difference. The color of each node corresponds to the haplotype class of that haplotype. Intermediate haplotypes that were not observed are represented by '.' in the haplotype networks. The same procedure was followed to determine haplotype frequencies and the MSN for the four markers in the VRN-1 gene copies, except that, because no double heterozygote was found in the dataset, genotypes did not need to be phased.
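The class-definition rule above (connected components of the one-difference network) is easy to sketch; the haplotypes below are hypothetical four-locus examples:

```python
# Build the haplotype network: nodes are distinct haplotypes, edges link
# pairs differing at a single locus; connected components with more than
# two nodes define haplotype classes, the rest are off-types (OT).
import itertools
import networkx as nx

def hamming(h1, h2):
    """Number of loci at which two haplotypes differ."""
    return sum(a != b for a, b in zip(h1, h2))

haplotypes = {
    "h1": (1, 1, 1, 1),
    "h2": (1, 1, 1, 2),   # one step from h1
    "h3": (1, 1, 2, 2),   # one step from h2
    "h4": (3, 3, 3, 3),   # distant off-type
}

g = nx.Graph()
g.add_nodes_from(haplotypes)
for (n1, h1), (n2, h2) in itertools.combinations(haplotypes.items(), 2):
    if hamming(h1, h2) <= 1:          # threshold of one difference
        g.add_edge(n1, n2)

classes = [c for c in nx.connected_components(g) if len(c) > 2]
off_types = [c for c in nx.connected_components(g) if len(c) <= 2]
print(classes, off_types)             # [{h1, h2, h3}] [{h4}]
```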
Haplotype variation within populations was calculated by estimating the unbiased haplotype diversity (H_d), which accounts for small sample sizes, computed as H_d = n/(n − 1) × (1 − Σ_i p_i²), where n is the number of gene combinations (haplotypes) analyzed in a population and p_i is the frequency of the ith haplotype in that population (Nei 1987).
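A minimal sketch of this estimator (hypothetical haplotype labels):

```python
# Unbiased haplotype diversity (Nei 1987): Hd = n/(n-1) * (1 - sum(p_i^2)).
from collections import Counter

def haplotype_diversity(haplotype_labels):
    n = len(haplotype_labels)
    freqs = [c / n for c in Counter(haplotype_labels).values()]
    return n / (n - 1) * (1 - sum(p * p for p in freqs))

print(haplotype_diversity(["h1", "h1", "h11", "h11", "h2", "h1"]))  # ~0.733
```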
A shared haplotype network (SHN) was drawn to track haplotypes represented at low frequencies among populations. Two populations were considered connected if they shared at least one haplotype. A threshold of 50 occurrences in the whole dataset was set so as to represent only rare haplotypes. The Clique Percolation Method (CPM) was applied to the SHN using Palla's algorithm implemented in the CFinder software (Adamcsek et al. 2006) to detect communities of populations characterized by their shared allele composition.
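As a hedged illustration, the sketch below builds such a shared haplotype network from hypothetical population contents, applying the rarity threshold on the whole dataset before connecting populations:

```python
# Shared haplotype network (SHN): populations are connected if they share
# at least one rare haplotype (fewer than 50 occurrences overall).
import itertools
from collections import Counter
import networkx as nx

populations = {
    "POP1": ["h1", "h7", "h9"],
    "POP2": ["h1", "h7"],
    "POP3": ["h11", "h9"],
}

occurrences = Counter(h for hs in populations.values() for h in hs)
rare = {h for h, c in occurrences.items() if c < 50}

shn = nx.Graph()
shn.add_nodes_from(populations)
for p1, p2 in itertools.combinations(populations, 2):
    shared = set(populations[p1]) & set(populations[p2]) & rare
    if shared:
        shn.add_edge(p1, p2, shared=sorted(shared))

print(list(shn.edges(data=True)))
```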
Student's t-tests were performed using R software (R Development Core Team 2005) to test (i) whether populations belonging to different seed diffusion and reproduction networks (SDRNs, the connected components detected from the interviews) were more genetically distant than populations from the same SDRN, and (ii) whether the mean values of the diversity indices estimated in each independent SDRN differed significantly.
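The authors ran these tests in R; as a rough Python equivalent for point (i), the sketch below compares hypothetical within- and between-SDRN cGD values:

```python
# Two-sample Student's t-test on hypothetical cGD values.
from scipy import stats

within_sdrn_cgd = [4.9, 5.6, 6.1, 5.8, 6.3]    # pairs in the same SDRN
between_sdrn_cgd = [21.0, 22.5, 23.9, 23.1]    # pairs in different SDRNs

t_stat, p_value = stats.ttest_ind(within_sdrn_cgd, between_sdrn_cgd)
print(t_stat, p_value)
```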
Results

Seed diffusion and reproduction of RDB populations
The interviews with the different actors allowed us to trace the circulation of RDB populations back almost 30 years. Thirty-five populations of RDB were documented, with 28 seed diffusion events identified involving 17 actors in addition to the 11 who provided seed samples. Populations were grown for 1 to 14 generations on the same farm. Based on this information, an oriented SDRN was drawn (summarized in Fig. 2). Nodes represent seed lots of RDB and edges represent diffusion or reproduction events for these seed lots. This information defined two connected components (SDRN1 and SDRN2), where each node is an RDB population described by a location (farmer's name), a year, and an optional character for multiple samples from the same farm in the same year (see Fig. 2 and Table 1 for details). VIC provided us with two samples from two origins (VIC06A and VIC06B). Among the 19 sampled RDB populations, seven were connected together in the first network (SDRN1). They shared a common ancestral population maintained in the Vilmorin-Verneuil collection (VER?). This SDRN included the seed lot maintained by the French genebank (CLM03). A second connected component (SDRN2) was detected, grouping nine other RDB populations. These populations shared a common ancestral population grown between 1980 and 1993 on an alternative community farm (ARC80). This population was cultivated alternately within a mixture composed of at least three distinct varieties and as a pure variety after a selection step based on spike type. Incomplete information made it impossible to connect three populations (JEF06, FRP06, and ALP05) to either network. Our knowledge of seed diffusion thus does not extend far enough back in time to find a seed diffusion event connecting the two components.
The interviews with the different actors indicated three main cultural practices: populations grown on small (1-10 m²), medium (10-100 m²), or large (>100 m²) plots. These different areas corresponded to different functions: small plots were used for collections of several varieties (ALB03A, BER03, BER06, CLM03, and CLM04); medium plots were used for collections of a few varieties or for the multiplication of seed lots to increase seed quantity as a preliminary step before production (ALB03B, ALB06B, ALB06C, JFB05, PHC06, FRP06, JOP06, VIC06A, VIC06B, and JAS04); and large plots corresponded to production fields (ALP05, JEF06, JFB03, and JFB06). Practice diversity was observed among farms but also within farms. For example, ALB used three different practices on his farm. ALB03A corresponded to a population maintained in a collection (small plot). ALB03B and ALB06B are temporal samples of the same population maintained following conservation practices (selection for a particular varietal phenotype), with seed samples grown on 10 m² (medium plot). ALB06C was grown in isolation within a field of another species (medium plot). We also learned that JOP applied spike mass selection when he received the RDB in a mixture with other varieties (JOP06). JFB made a selection within his RDB population in 2001 based on an ear type with awns; this population was sampled in 2005 after four generations cultivated independently of his main RDB population (JFB05). Another sample of this selection was obtained by CLM and was provided for this study after one cycle of reproduction under the conservation practices of CLM (CLM04).
Allelic within-population diversity
The level of genetic diversity estimated in each population with unbiased Nei's index showed a large range of values (between 0.01 and 0.35, Table 2). An estimation of the effective size (N_e) was possible only for the temporal samples we had: the JFB and BER populations between 2003 and 2006 (JFB03-JFB06 and BER03-BER06, respectively). The genetic effective population size was estimated at 104.5 individuals for the JFB population. N_e tended toward infinity for the BER population because allele frequencies varied only very little, leading to a very low F_c value relative to the sample-size correction terms.
Structure of genetic diversity among populations
Based on the SSR molecular data and using the conditional independence method, the network topology that fit the global genetic covariance held in the dataset with an alpha error of 0.05 required 47 edges to link the 19 RDB populations. This network clearly showed two groups of populations (group1 and group2), where populations from the same group were more connected than populations from different groups. This observation was confirmed by community detection using the CPMw algorithm. Two nonoverlapping communities were detected for a size k = 3, with k being the clique size parameter of the community search algorithm. The first contained seven populations (denoted group1) and the other contained 11 populations (denoted group2) (Fig. 3A; group1 in blue and group2 in green). A third, overlapping community was also detected (JAS04, ALB06B, and ALB03A), making the link between the two nonoverlapping communities.
The Population Graph obtained for the four VRN-1 loci revealed a similar structure (data not shown). Eighteen of the 19 studied populations fell into the same groups regardless of the type of marker. Only JOP06 was in the green group for the SSR markers but in the blue group for the VRN-1 genes. This result was confirmed by a strong correlation between pairwise F_ST values computed for SSR markers and for VRN-1 genes (Fig. 4). Points with a pairwise VRN-1 F_ST value close to 0 and a pairwise SSR F_ST value above 0.5 corresponded to pairs of populations comprising JOP06 and one of the populations from group1.
Individual haplotypic structure
The MSN based on the 19 SSR multilocus genotypes (MLG) included 119 distinct nodes, where each node was a distinct haplotype. The haplotype distribution among individuals (Fig. 5A,B) showed two main haplotypes (h1 and h11: 321 and 339 occurrences, respectively) differing at 12 of the 19 loci. A third haplotype (h2) was detected 91 times and was close to h11 (separated by four differences). These three haplotypes accounted for 64% of the whole dataset. The remaining haplotypes were detected from one to 47 times, and of these, 76% were rare (i.e., present fewer than three times). The topology of the MSN showed that most of the minor haplotypes were closely connected to the three main ones, suggesting that they could be variants around the main haplotypes. The haplotype network, where two nodes were connected if the two haplotypes differed by a single difference, showed four connected components composed of more than two nodes (Figure S1). Based on this property of the network topology, we defined four classes of haplotypes (Fig. 5A): class I included h11 and 14 closely connected haplotypes (in blue in Fig. 5A); class II included h1 and 45 close haplotypes (in green); class III included h2 and 11 close haplotypes (in gray); and class IV (in light green) was defined as a set of 16 haplotypes found at low frequencies but highly connected (differing at one or two loci), itself closely connected to class II (Fig. 5A). Finally, 29 haplotypes were considered as OT because they were too distant from the four classes. Among them, haplotypes h100, h106, and h105 (observed in populations CLM04 and FRP06) seem to derive from recombination between another off-type (h72) and one of the main haplotypes (h11).

Within- and among-population haplotypic structure

Using the previous haplotype clustering, we plotted the frequency of each haplotype group in the sampled populations, using pie charts on the Population Graph presented in Fig. 3B. This representation confirmed the existence of two main genetic groups of sampled populations, each showing a distinct pattern. The first one (in blue) (BER03, BER06, ALB03A, ALB03B, ALB06B, ALB06C, CLM03) was clearly homogeneous and mainly composed of class I haplotypes, with a majority of the h11 haplotype; the rest were satellite haplotypes bearing between one and three differences compared with h11. Very few OT (<1%) were observed in this group of populations. The second genetic group was mainly composed of haplotypes of class II. JAS04, one of the three populations overlapping between the two groups, presented the same pattern; it thus seems sensible to assign it to the second genetic group rather than to the first. The same argument could be applied to ALB06B and ALB03A to move them closer to group1. Group2 was clearly more heterogeneous. Some populations were composed of individuals bearing mainly class II haplotypes (JEF06, CLM04, VIC06A, VIC06B, and JAS04); one population (JOP06) was composed of individuals bearing haplotypes from class III only; and the rest consisted of composite populations composed of individuals with class II and III haplotypes (PHC06, JFB06, ALP06, FRP06), except for JFB05, which included haplotypes from classes II and IV. Only one population (JFB03) had individuals bearing haplotypes from three classes (I, II, and III).
The proportion of off-type haplotypes was higher in this second genetic group than in the first, with on average 4% OT per population.
A SHN was drawn to track haplotypes present in different populations at low frequencies (Fig. 6). A 6-clique community composed of six populations was found (PHC06, FRP06, JFB06, JEF06, VIC06A, VIC06B), highlighting that a set of haplotypes is shared by several populations. The 5-clique community added JFB03 to this group of six populations. Two other populations (CLM04 and JOP06) were connected to this core in the 4-clique community. All of these populations had previously been assigned to group2. A 3-clique community composed of three populations (ALB03B, BER03, and JFB03) was also found; because JFB03 shared a class I haplotype, this community overlapped with the 3-clique community formed by the populations already included in the 4-clique community.

Table 2 legend: H_e: unbiased Nei's estimate of genetic diversity (Nei 1978); H_o: mean observed heterozygosity; R_S: mean number of alleles; GS diversity: multivariate genetic diversity index (Dyer and Nason 2004); H_d: unbiased genetic diversity for haplotypes; F_IS: deviation from Hardy-Weinberg genotypic proportions.
Cross analysis between seed circulation information and genetic data
Based on our knowledge of seed diffusion, a pairwise matrix between the 16 populations belonging to a known diffusion and reproduction network (SDRN1 or SDRN2) was built to describe whether two populations belonged to the same connected component or not. To quantify genetic differentiation among sampled populations, average cGD values were computed within each group and between the two groups on the Population Graph. We tested for a significant difference between within- and between-group cGD values using a Student's t-test. The difference was highly significant (P < 2.2 × 10⁻¹⁶), with cGD averaging 5.8 for populations belonging to the same SDRN and 22.8 for populations that did not. This result was consistent with the high level of differentiation observed between the two genetic groups detected in Fig. 3A (Table 2). This body of evidence indicated that the information on seed diffusion gathered through interviews was strongly consistent with the genetic structure detected with molecular data, and that seed diffusion strongly influences the genetic structure and the levels of diversity of the managed populations. Three populations were not assigned to any SDRN. JEF06 was composed of haplotypes from class III, and ALP06 and FRP06 were composed of haplotypes from classes II and III (Fig. 3B). These results suggested that they were closer to SDRN2 than to SDRN1. This was confirmed by the fact that JEF06 and FRP06 were included in the 5-clique community (Fig. 6).
Discussion
The RDB population structure

This study analyzed the structure of genetic diversity in a subdivided bread wheat population-variety named RDB. The sub-populations have been circulating for several years within a network of French actors (including farmers and the national genebank) involved in the conservation and use of crop diversity. The goal of these analyses was to provide insights into the history of the populations, in order to assess the impact of human practices on genetic diversity at the molecular level and to guide decisions on the conservation of genetic resources. In this study, we did not analyze quantitative genetic variation of adaptive or economic significance.
We applied the Population Graph method (Dyer and Nason 2004), a network theory-based method, to study inter-population relationships, rather than F_ST-based or distance-based methods developed within the theoretical framework of population genetics (Wright 1951; Nei 1972; Excoffier et al. 1992). While both approaches rely on the covariance structure between all populations with no assumptions about the underlying evolutionary processes, the Population Graph method accounts for multiple relationships among populations using partial regression coefficients.

Fig. 5 legend (fragment): (A) Haplotype classes (node occurrences between 1 and 11): Class I haplotypes in blue, Class II in gray, class III in green, class IV in light green, off-type haplotypes in red. (B) Distribution of haplotype occurrence based on the 586 genotypes of the dataset.
Nineteen sub-populations (586 individuals) were analyzed using 19 neutral markers. Two main genetic groups of populations (group1 and group2) were detected and found to be connected to each other. These two groups were also detected based on the four VRN-1 polymorphisms. The Population Graph topology is expected to strongly reflect the migration model, as shown by a simulation approach using N-island and one-dimensional stepping-stone models (Dyer 2007). The observed topology of the RDB population-variety differed from both the stepping-stone and the N-island models because a strong clustering was detected, highlighting a more complex migration system. This pattern seems to have been mostly shaped by human activities (in particular by seed diffusion practices). A similar pattern was encountered in natural populations of the Sonoran Desert cactus (Lophocereus schottii L.) subjected to historical vicariance (splitting of populations into discontinuous parts by the sea) (Dyer and Nason 2004). In a study of a metapopulation of the seagrass Posidonia oceanica in the Mediterranean basin, the authors highlighted the key role of a few populations as hubs relaying gene flow (Rozenfeld et al. 2008). In the RDB case, five populations contributed to the transition between the two genetic groups and might play an analogous role. Yet, we should be cautious in this comparison because Rozenfeld et al. (2008) used a different network theory-based approach. In our study, the three populations from group2 (JAS04, JOP06, JFB05) were composed of haplotypes from classes II, III, or IV. As haplotypes from class II were very close to haplotypes from class I, almost all alleles were shared between the two classes, which could explain their position in the Population Graph (Fig. 3B). Except for one individual found in JFB03, there was thus no evidence that group2 received specific haplotypes or alleles from group1. Two populations of group1 (ALB03A and ALB06B) showed one specific allele from class III, which explained their boundary position in the Population Graph. This shared allele could be the footprint of an ancestral common population rather than of recent gene flow between the two groups; with recent gene flow, we would expect a higher frequency of haplotypes intermediate between the two groups.
Intra-population genetic structure was studied through the haplotype spanning network. The haplotype approach was relevant because bread wheat is mainly a self-pollinating species [5-10% outcrossing (Enjalbert et al. 1998; Enjalbert and David 2000)], so recombination is not expected to be frequent. Consistently, pairwise linkage disequilibrium estimated for each pair of loci over all 19 populations was significant in more than 80% of cases. Haplotype clustering revealed 29 OT, which were not detected using STRUCTURE-like software. Indeed, when we used the INSTRUCT software (Gao et al. 2007) on this dataset, the OT induced instability in the assignment to genetic groups and altered likelihood values for the different numbers of ancestral groups assessed (data not shown). As a consequence, the criterion used to choose the optimal number of groups did not show a strong and stable elbow. Haplotype clustering highlighted different population substructures, ranging from homogeneous populations (composed of only one haplotype class) to composite populations (composed of up to three haplotype classes). In addition, the global genotype richness (polyclonality) level was 19.4%. Polyclonality has previously been observed in cassava (Manihot esculenta Crantz) landraces (Elias et al. 2000, 2001; Pujol et al. 2005a,b), with values between 29% and 55% associated with an excess of heterozygous genotypes (−0.94 < F_IS < −0.37). This was due to a complex system of agricultural management: volunteer plants recruited from soil seed banks often result from outcrosses, and the most productive volunteer plants, generally highly heterozygous, are propagated by clonal reproduction, so that heterozygotes occur at high frequency. In bread wheat, rare spontaneous cross-pollination can also occur, which could increase heterozygosity. However, after successive generations of self-pollination, heterozygosity decreases. Thus, self-pollination in heterogeneous populations can lead to the maintenance of polyclonal or composite populations with a low level of heterozygotes, as has been shown in natural populations of Medicago truncatula (Siol et al. 2008). In the following, the practices of the different actors (farmers and genebank curators) are divided into two distinct processes: one acting at the overall scale of the system, that is, seed diffusion, and one acting locally, at the farm level, that is, reproduction of the seed lot, which is largely dependent on agronomic practices.
Impact of the seed diffusion network on the genetic structure

As far as we know, this is the first interdisciplinary ethnobotanical and genetic study conducted at the level of a single population-variety. Previous studies have pointed out that seeds have a strong symbolic importance for farmers. In most cases, farmers explain that they have been maintaining the same variety for a long time, even if they occasionally substitute their own seed entirely or mix it with seed from external sources (Louette et al. 1997; Smale et al. 1999; Badstue et al. 2007), actions that affect the genetic make-up of populations. In contrast to these situations, the genetic structure found in our study was highly consistent with the SDRNs obtained through interviews: within-SDRN cGD was significantly lower than between-SDRN cGD. Consistency between the rules structuring social networks of seed exchange between farmer communities and the genetic structure of manioc (Manihot esculenta Crantz) was also recently described in Gabon (Delêtre et al. 2011). In general, several cycles of reproduction are conducted between two seed diffusion events. Recycling seeds from one's own harvest is the backbone of local seed supply (Perales et al. 2003; Carpenter 2005; Delaunay et al. 2008), and this is also what we observed in this network of actors. On average, the 19 populations sampled in this study had been grown for 5.7 generations on the same farm since the previous diffusion event. In comparison, populations were grown for 4.1 to 15 generations in farmer communities in Ethiopia (McGuire 2007). In other words, in our study, 89% of the seed sown comes from the previous harvest of the same farmer. This value is similar to those observed in local farming contexts [80% in farmer communities growing sorghum in Burkina Faso (Delaunay et al. 2008), 53% in farmer communities growing maize in Mexico (Louette et al. 1997)].
Seed diffusion can be considered as a colonization event in the metapopulation model, with two basic mechanisms: the 'migrant pool' model and the 'propagule pool' model (Slatkin 1977). In the seed diffusion process described here, colonization events mainly correspond to the propagule pool model, with the exception of one seed sample (JOP06), which came from a seed mixture (following the migrant pool model). Although strong differentiation among subpopulations is expected because of strong founder effects under the propagule pool model of colonization (Whitlock and McCauley 1990), the fact that we found no evidence of connection between the two SDRNs might indicate that two independent founder events occurred in the past. In addition, as bread wheat is mainly a self-pollinating species, differentiation might be increased by a kin-structured founder effect (Ingvarsson and Giles 1999). This lack of evidence for a connection was consistent with the high level of differentiation between the two connected components (SDRN1 vs. SDRN2: F_ST = 0.697). Furthermore, the fact that all the populations have been diffused suggests that populations might not yet have reached equilibrium.
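A toy simulation (not from the paper) can make the contrast between the two colonization models tangible: under the propagule pool model, founder allele frequencies vary much more across colonization events, which translates into stronger among-population differentiation:

```python
# Toy contrast of Slatkin's (1977) colonization models, measured by the
# variance of founder allele frequencies across colonization events.
import random

def found_colony(demes, k_founders, propagule=True):
    if propagule:
        source = random.choice(demes)            # all founders from one deme
        founders = random.choices(source, k=k_founders)
    else:
        pooled = [a for d in demes for a in d]   # migrant pool: all demes
        founders = random.choices(pooled, k=k_founders)
    return founders.count("A") / k_founders      # allele frequency in colony

random.seed(1)
demes = [["A"] * 90 + ["B"] * 10, ["A"] * 10 + ["B"] * 90]  # two divergent demes
for mode, label in ((True, "propagule"), (False, "migrant")):
    freqs = [found_colony(demes, 20, propagule=mode) for _ in range(2000)]
    mean = sum(freqs) / len(freqs)
    var = sum((f - mean) ** 2 for f in freqs) / len(freqs)
    print(label, round(var, 3))   # propagule pool variance >> migrant pool
```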
Thus, the genetic analysis provided new insights into the seed diffusion history and, by extension, into the associated social processes. Relying on the information collected through the interviews, it was initially not possible to connect three populations (JEF06, FRP06, ALP05) to any SDRN, although we collected seed circulation information back to the 1990s. With the molecular analyses of population structure, it was possible to assign these three populations to SDRN2, because they showed a pattern similar to that of SDRN2 populations. In addition, because two of them also presented a composite structure, we infer that the composite character of populations is relatively old in the history of the RDB population-variety. Because JEF06 was not a composite population and showed no trace of alleles from haplotype class II, while showing several satellite haplotypes from class III, JEF probably received a seed lot from an RDB population before the composite pattern arose in SDRN2. We also showed that haplotypes at low frequency were shared by different populations of SDRN2 (Fig. 6), which confirmed that these populations were connected by seed circulation. Although a farmer (JFB) from SDRN2 received his RDB population from a unique source (ARC) (Fig. 2), we detected that his oldest RDB population (JFB03) was composed of individuals carrying three classes of haplotypes, including one belonging to class I. This argues for a complex ancestral population-variety composed of three main haplotype classes (I-III). However, this hypothesis needs to be considered carefully because only one individual carrying a class I haplotype was observed. Furthermore, we showed that only a few specific alleles were shared between the two SDRNs. An alternative hypothesis could be that two distinct cryptic varieties with almost the same phenotypic traits are being maintained independently in these two SDRNs.
Impact of human local practices on the genetic structure

According to the information collected during the interviews, populations from SDRN1 (Fig. 2, in blue) come from the formal seed sector. The initial donor of the SDRN1 populations was a breeder. Thus, these populations were initially subjected to a strong homogenizing pressure to meet the distinctness, uniformity, and stability (DUS) criteria of the formal system. Consequently, the CLM genebank sample (CLM03) obtained from this source showed much lower genetic diversity than most of the other samples. The tendency of genebank accessions to harbor lower genetic diversity than in situ collections has also been highlighted in several papers (see Negri et al. 2009 for a review). In contrast, the populations of SDRN2 have always been grown on farm, without the DUS constraints and with diversified agricultural practices among farms, so they were subjected to less homogenization.
The demographic size of crop populations is generally highly variable (Rice et al. 1998). In this context, population size could play an important role in the evolution of populations, depending on the seed quantity obtained after the diffusion event and/or the seed quantity recycled. Generally, actors who practice variety conservation grow their populations on small plots (a few m²), in contrast to those who follow multiplication, isolation, or production practices (surfaces from 10 to several thousand m²). Genetic drift, particularly in diversified populations of small demographic size, might reduce genetic diversity and increase genetic load. This situation could account for some patterns observed in SDRN1, because five populations out of seven were grown on small plots. However, as mentioned above, the overall low level of genetic diversity found in SDRN1 could also be explained by the historically conservative practices of the formal system. Using the temporal variation of allele frequencies between the two samples available for the farm BER resulted in an infinite estimate of effective size, N_e, because allelic frequency variation was too low. This was associated with low variation in the haplotype composition of the population between 2003 and 2006, which is consistent with the conservative practices used by BER. Except for JFB05 and JOP06, which followed cultural practices best described as selection, populations in SDRN2 seemed to have larger sizes than populations from SDRN1. The N_e estimated from the JFB03 and JFB06 populations, within SDRN2, was of the same order of magnitude as for bread wheat populations grown in a dynamic management experiment [104.5 in this study compared with 123.0 after 10 generations of evolution in Goldringer et al. (2001)], while within-population genetic diversity was relatively high in these populations (0.32 and 0.31 for 2003 and 2006, respectively). This trend might be amplified when there was occasional past or recent mixture with other varieties (ARC80 and JOP06, respectively).
Migration is one of the evolutionary forces that could significantly influence differentiation within the system. In an open-pollinated species such as maize, pollen-mediated gene flow is important and generally leads to a low level of genetic differentiation, although farmers' selection on ear type induces stronger phenotypic differentiation among landraces (Pressoir and Berthaud 2003). Because phenotypes are quite distinct between varieties and because wheat is a self-pollinating species, uncontrolled migration among populations is expected to be rare. However, the composite character of some populations of SDRN2 (mainly haplotype classes II and III) and the higher number of haplotypes observed in class III indicate that migration might have occurred in the past, with individuals of haplotype class II migrating into populations of haplotype class III. In addition, we know that haplotype class II is genetically very similar to class I, possibly indicating a common ancestral origin. These analyses concern only the structure of neutral genetic diversity; a convergent phenotype among the different haplotype classes could explain why farmers continue to grow these different populations under the same name, RDB, and a detailed phenotyping of these haplotype classes would be needed to confirm this point. The low outcrossing rate found in wheat [5-10% (Enjalbert et al. 1998; Enjalbert and David 2000)] is consistent with the observation of some recombinant individuals, as seen in CLM04 and FRP06. Present at low frequencies, this phenomenon illustrates contact with other varieties and is consistent with two identified practices: as already mentioned, some farmers have grown their RDB populations in mixtures with other varieties, while others maintain their populations in collections and grow them in small, closely spaced plots, which could result in mixtures or outcrosses at different steps of the reproduction process.
Genetic differentiation (pairwise F_ST) measured in neutral regions was highly correlated with genetic differentiation measured in the VRN-1 genes involved in flowering time, an adaptive trait (Fig. 4). Divergent selection between wheat populations grown for several generations in contrasting sites would have led to specific patterns, such as higher F_ST at genes under selection compared with F_ST at neutral markers (Vitalis et al. 2001; Rhoné et al. 2010). Thus, the observed structure of genetic diversity seems to be influenced more by actors' practices than by the short-term environmental conditions under which populations have been grown. Different types of selection can be described. The first is negative selection, performed by farmers or genebank curators when they remove off-type plants that appear spontaneously in the population in the field; these practices could explain the low rate of OT in the dataset. The second is positive selection, for example, the ear-based selection for the RDB ear type [red and awnless (JOP06)]. The farmer explained that he received a mixture of different wheat varieties including RDB, and decided to select a few RDB-type ears to initiate a new cycle of multiplication as a pure variety. This selected population showed low genetic diversity (unbiased H_e = 0.008), with only one haplotype class detected (class II). Finally, there was another case of positive selection when, in 2001, one farmer (JFB) selected a new derived ear type (red and awned) that appeared spontaneously in his RDB population. He then grew the progeny as a separate population, which he named 'Rouge du Roc'. This process corresponds to the creation of a new population-variety related to RDB. In 2003, he gave a sample to CLM.
Conclusion
This article investigated how human activities shape the genetic diversity of crops at the variety level. We showed that the network of actors involved in RDB cultivation and conservation strongly influenced the structure of this population-variety and maintained it in a nonequilibrium state. Using a metapopulation genetic framework helped us identify two processes that led to the coexistence of two cryptic genetic groups: (i) at the global scale, the combined analysis of seed diffusion dynamics and the genotyping of RDB populations highlighted two distinct seed diffusion pathways that appeared strongly consistent with the genetic structure of this population-variety; (ii) at the local scale, the diversity of cultural practices (different population sizes, selection, migration, etc.) led to the maintenance of contrasting populations spanning a large range of diversity, from fixed to composite populations.
From a genetic resources perspective, these results provide convincing arguments for stakeholders involved in genetic resource management to collect critical information about seed circulation and cultural practices in the context of on-farm conservation of cultivated diversity. Here, we showed that on-farm conservation has the particular characteristic of maintaining intra-varietal genetic diversity. This leads us to emphasize the need to foster collaboration among partners from ex situ and in situ conservation to conserve crop genetic diversity at its different levels.
"year": 2012,
"sha1": "2228fa9c1b704bd0428645177316e06955d73bd1",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1752-4571.2012.00257.x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2228fa9c1b704bd0428645177316e06955d73bd1",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Prognostic Significance of Admission Systemic Inflammation Response Index in Patients With Spontaneous Intracerebral Hemorrhage: A Propensity Score Matching Analysis
Intracerebral hemorrhage (ICH) accounts for ~15% of all strokes and is associated with high mortality and disability rates. The systemic inflammation response index (SIRI) is a novel systemic inflammatory marker based on peripheral neutrophil, monocyte, and lymphocyte counts. This study aimed to evaluate the prognostic significance of admission SIRI in patients with spontaneous ICH and compare its predictive ability with that of the neutrophil-to-lymphocyte ratio (NLR). This retrospective study was conducted based on a prospectively collected database of patients with ICH between June 2016 and January 2019. Propensity score matching (PSM) was conducted to adjust for potential imbalances in the clinical parameters. A total of 403 patients were included in the original cohort. The optimal SIRI cut-off value was 2.76. After 1:1 PSM based on potential confounding variables, a new cohort containing 262 patients was established for further analysis. In the original cohort, SIRI served as an independent predictor of 3-month functional outcome [odds ratio (OR), 1.302; 95% CI, 1.120-1.512; p = 0.001] and 1-month mortality (OR, 1.072; 95% CI, 1.020-1.126; p = 0.006), while NLR was independently associated only with 3-month functional outcome (OR, 1.051; 95% CI, 1.004-1.100; p = 0.031) and not with 1-month mortality. The same applied to the PSM cohort. Receiver operating characteristic analyses and predictive models indicated that, in most instances, SIRI was superior to NLR and their components in predicting the outcomes of patients with ICH. In summary, SIRI was an independent predictor of 3-month functional outcome and 1-month mortality in ICH patients, and its prognostic predictive ability was stronger than that of NLR.
INTRODUCTION
Intracerebral hemorrhage (ICH) is a life-threatening condition with high mortality and disability rates that occurs due to spontaneous bleeding into the brain parenchyma, involving the ventricles and subarachnoid spaces in extreme circumstances. ICH accounts for ∼15% of all strokes (1). Of ICH cases, 75-85% originate from the spontaneous rupture of small vessels damaged by chronic hypertension or amyloid angiopathy (2). The incidence of ICH is higher in male and elderly patients. Rapid CT after onset can recognize almost all forms of acute ICH and helps guide optimal medical decisions within the shortest time. The global burden of ICH mainly results from inadequate management of chronic hypertension and other modifiable risk factors (3). Growing evidence has indicated that inflammatory responses participate in the pathophysiological processes of brain injury after ICH, and inflammation is one of the crucial contributors to ICH-induced secondary brain injury (4). Leukocytes play an important role in the immune response, cell migration, perihematomal edema formation, blood-brain barrier (BBB) integrity, and cell death after ICH (5,6). Accumulating data have demonstrated that an increased blood leukocyte count is associated with more severe disease and worse outcomes in ischemic and hemorrhagic strokes (7). The neutrophil-to-lymphocyte ratio (NLR), based on the coexistence of lymphopenia and leukocytosis in the initial inflammatory response, may be a useful peripheral biomarker for predicting the prognosis of stroke (8). Other peripheral inflammatory biomarkers whose prognostic ability in ICH patients has been confirmed include the systemic immune-inflammation index and the platelet-to-lymphocyte ratio (9,10). Systemic inflammatory response syndrome, defined based on changes in leukocyte counts and vital signs, is also associated with outcomes (11,12).
The systemic inflammation response index (SIRI) is a novel systemic inflammatory marker based on peripheral neutrophil, monocyte, and lymphocyte counts. In previous studies, SIRI was found to be an independent prognostic indicator in various tumors (13-15). Therefore, this study aimed to evaluate the prognostic significance of admission SIRI in patients with spontaneous ICH and to compare its prognostic ability with that of NLR.
Study Design
This retrospective study was conducted based on a prospectively collected database of ICH patients at the Department of Neurosurgery of West China Hospital, Sichuan University between June 2016 and January 2019. All patients in this cohort were managed according to the latest guidelines for stroke, and their baseline clinical data were retrieved from the electronic medical record system of the West China Hospital (16).
The exclusion criteria were as follows: (1) age <18 years; (2) incomplete baseline clinical data; (3) ICH caused by a tumor, aneurysm, or arteriovenous malformation; (4) absence of CT angiography and follow-up CT within 24 h of admission; (5) a history of infectious diseases, cancers, rheumatic diseases, blood system diseases, or other diseases that evidently affect peripheral blood cells; (6) loss to follow-up.

Abbreviations: ICH, intracerebral hemorrhage; BBB, blood-brain barrier; NLR, neutrophil-to-lymphocyte ratio; SIRI, systemic inflammation response index; GCS, Glasgow Coma Scale; mRS, modified Rankin Scale; HE, hematoma expansion; ROC, receiver operating characteristic; PSM, propensity score matching; OR, odds ratio; AUC, area under the curve.
Clinical Parameter Assessment
Clinical variables were retrieved from the electronic medical record system, including the following: (1) demographics: age at onset and sex; (2) clinical history: history of hypertension, diabetes mellitus, smoking, alcohol abuse, and stroke; (3) admission conditions: Glasgow Coma Scale (GCS) score, admission systolic blood pressure, diastolic blood pressure, and duration from onset to hospitalization; (4) ICH imaging characteristics: hematoma volume, hematoma location, presence of intraventricular hematoma, and hematoma expansion (HE); (5) treatment; and (6) routine blood tests. Routine blood tests were conducted immediately after admission. SIRI was defined as neutrophil count × monocyte count / lymphocyte count, and NLR was defined as neutrophil count / lymphocyte count.
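The two index definitions above translate directly into code; the sketch below is illustrative only (counts are hypothetical, in 10^9 cells/L):

```python
# SIRI = neutrophils * monocytes / lymphocytes; NLR = neutrophils / lymphocytes.
def siri(neutrophils, monocytes, lymphocytes):
    return neutrophils * monocytes / lymphocytes

def nlr(neutrophils, lymphocytes):
    return neutrophils / lymphocytes

print(siri(8.2, 0.6, 1.1))  # ~4.47, above the 2.76 cut-off reported below
print(nlr(8.2, 1.1))        # ~7.45
```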
Patients were followed up every month after admission. The primary outcomes were the 3-month functional outcome and the 1-month mortality rate. The modified Rankin Scale (mRS) was used to evaluate patients' functional outcomes at each follow-up; patients who had been discharged were followed up by telephone. A good outcome was defined as an mRS score of 0-2, and a poor outcome as an mRS score of 3-6 (17).
The volume of the parenchymal hematoma was calculated on the initial CT scans using 3D Slicer (http://www.slicer.org), with manual segmentation performed by two independent neurosurgeons (18). HE was defined as hematoma enlargement ≥6 ml or ≥33% within 24 h (19). Surgical interventions mainly included hematoma evacuation with craniotomy and external ventricular drainage.
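These outcome definitions amount to simple decision rules; the following sketch is illustrative (hematoma volumes are hypothetical):

```python
# HE: hematoma enlargement >= 6 ml or >= 33% within 24 h; good outcome: mRS 0-2.
def hematoma_expansion(baseline_ml, followup_ml):
    growth = followup_ml - baseline_ml
    return growth >= 6 or growth >= 0.33 * baseline_ml

def good_outcome(mrs):
    return 0 <= mrs <= 2

print(hematoma_expansion(15.0, 22.0))  # True: +7 ml (~47%)
print(good_outcome(3))                 # False
```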
Statistical Analysis
All statistical analyses were performed using SPSS software (version 22.0; IBM, Armonk, NY, USA) and R software (version 3.6.1). Continuous variables are presented as mean ± SD or median with interquartile range, while categorical variables are presented as frequency and percentage. Categorical variables were compared using the χ² or Fisher's exact test. Continuous variables that conformed to the normal distribution were compared using Student's t-test; otherwise, the Mann-Whitney U-test was employed. Logistic regression analyses were used to determine the influence of risk factors on outcomes in patients with ICH. Variables with p < 0.1 in univariate analysis were included in backward stepwise multivariate logistic regression. Receiver operating characteristic (ROC) analysis was conducted to assess the accuracy of SIRI, NLR, and other markers for the outcomes. The optimal cut-off value of SIRI was determined by maximizing the Youden index on the ROC curve. DeLong's test was employed to compare areas under the curve (AUCs). Predictive models for the outcomes were constructed from the independent predictive indicators identified in multivariate logistic regression; Harrell's concordance index (C-index) and the Akaike information criterion (AIC) were used to assess the predictive accuracy and model fit of the predictive models, respectively. A higher C-index indicated better predictive accuracy, and a lower AIC indicated better model fit (20,21). A two-sided p < 0.05 was considered statistically significant. Propensity score matching (PSM) was conducted to adjust for imbalances in clinical parameters with a p-value <0.1 in univariate analysis; patients were matched 1:1 using the nearest-neighbor algorithm with a caliper width of 0.2 and without replacement.
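As an illustration of the cut-off selection step, the sketch below finds the threshold maximizing the Youden index along a ROC curve; the data are simulated, not the study data:

```python
# Optimal cut-off = threshold maximizing Youden's index (sensitivity +
# specificity - 1), i.e., tpr - fpr along the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
siri = np.concatenate([rng.gamma(2.0, 1.0, 200),    # good-outcome group
                       rng.gamma(3.5, 1.2, 200)])   # poor-outcome group
poor = np.concatenate([np.zeros(200), np.ones(200)])

fpr, tpr, thresholds = roc_curve(poor, siri)
print("optimal cut-off:", thresholds[np.argmax(tpr - fpr)])
```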
Ethics
This study was approved by the Ethical Committee of Sichuan University (2013NO52) and conducted following the principles of the Declaration of Helsinki. All patients and their authorized trustees were informed and provided signed informed consent to use their clinical data for research purposes.
Baseline Clinical Characteristics
As shown in Figure 1, a total of 403 patients were included in the original cohort. The optimal cut-off value of SIRI was determined to be 2.76 by ROC analysis. Among the 403 patients, 189 had SIRI <2.76 and 214 had SIRI ≥2.76. After 1:1 PSM based on potential confounding variables, a new cohort containing 262 patients was established for further analysis.

Table 1 legend: Data are expressed as n (%), mean ± SD, or median (25th, 75th quartile). Significant findings are shown in bold italic. ICH, intracerebral hemorrhage; GCS, Glasgow Coma Scale; SBP, systolic blood pressure; DBP, diastolic blood pressure; IVH, intraventricular hematoma; PLT, platelet; PT, prothrombin time; APTT, activated partial thromboplastin time; INR, international normalized ratio; SIRI, systemic inflammation response index; NLR, neutrophil-to-lymphocyte ratio.
The clinical characteristics of the PSM cohort are listed in Table 2: a lower GCS score (p < 0.001), larger hematoma volume (p < 0.001), presence of IVH (p = 0.037), supratentorial hematoma (p = 0.001), and surgical intervention (p = 0.001) were associated with unfavorable outcomes at 3 months after admission. In the 1-month mortality group, a lower GCS score (p < 0.001) and a shorter duration from onset to hospitalization (p = 0.015) were associated with death. Higher neutrophil counts, monocyte counts, and SIRI were associated with unfavorable outcomes in both groups. A higher NLR was significantly related to poor 3-month functional outcomes (p < 0.001) but not to 1-month mortality (p = 0.271).
Association of SIRI With Outcomes
In the original cohort, multivariate logistic analysis identified SIRI as an independent predictive indicator of both 3-month functional outcomes and 1-month mortality.
Predictive Ability of SIRI and NLR in Outcomes
ROC analysis was employed to determine and compare the predictive ability of SIRI and NLR for 3-month functional outcomes and 1-month mortality in patients with ICH (Figure 2, Supplementary Figure 1). In the original cohort, SIRI had a stronger predictive ability than NLR for the 3-month functional outcome (Figure 2A, AUC 0.748 vs. 0.698; DeLong's test, Z = 2.35, p = 0.019) and 1-month mortality (Figure 2B, AUC 0.745 vs. 0.656; DeLong's test, Z = 4.73, p < 0.001). The same applied to the PSM cohort, where the predictive ability of SIRI was also better than that of NLR for 1-month mortality (Figure 2D, AUC 0.644 vs. 0.554; DeLong's test, Z = 3.14, p = 0.002). For the 3-month functional outcome, although the predictive ability of SIRI was superior to that of NLR, the difference was not statistically significant (Figure 2C, AUC 0.653 vs. 0.636; DeLong's test, Z = 0.60, p = 0.550). Predictive models were constructed to further evaluate the predictive accuracy of the aforementioned markers (Table 5). Basic models consisted of the independent predictive indicators other than peripheral blood markers. The results indicated that the basic model with SIRI had the highest C-index and lowest AIC for the 3-month functional outcome in both the original and PSM cohorts, indicating the best predictive accuracy and model fit. With regard to 1-month mortality, the basic model with SIRI was superior to that with monocytes in the original cohort, whereas the opposite was observed in the PSM cohort.
DISCUSSION
In recent years, with improvements in quality of life and medical care, effective treatments, including medication and surgery, have become available and have a direct impact on ICH morbidity and mortality (16). Multidisciplinary collaboration, spanning imaging, pathology, physiology, and neurosurgery, is needed to understand this condition and its underlying mechanisms. In this study, we focused on the prognostic role of systemic inflammation biomarkers in peripheral blood in patients with spontaneous ICH.
Secondary damage to the brain parenchyma after ICH, induced by inflammatory cells and inflammatory cascades, plays a crucial role in disease progression and thus affects outcomes. Local inflammation adjacent to the primary injury cannot be evaluated or measured directly, whereas systemic inflammation may, to some extent, reflect local inflammation in the peripheral blood. Damage-associated molecular patterns, which are released by injured or dying neurons and by cytokines during early injury, can gain access to the systemic circulation through the disrupted BBB or the cerebrospinal fluid drainage system (22). In animal models of ischemic stroke, immune dysregulation includes upregulation of the systemic inflammatory response. In animal models of ICH, a large hematoma volume results in decreased leukocytes and lymphocytes and increased monocytes (7). Similarly, higher leukocyte counts have been associated with hematoma growth and early neurological deterioration (8).
Relevant evidence indicates that the peripheral cellular immune system changes dramatically in the immediate aftermath of ICH (23). Changes in specific inflammatory markers in the peripheral blood may therefore, in theory, indicate the severity of the primary injury. In oncology, inflammatory markers from peripheral blood are used to predict tumor progression and prognosis (24). In our study, we introduced SIRI, a novel systemic inflammatory marker first reported in pancreatic cancer in 2016 (25). Since SIRI and NLR are highly similar in their components, their predictive abilities for prognosis are compared in this study. NLR has been widely used as an effective indicator and monitoring tool in various diseases, including but not limited to tumors, rheumatic diseases, cardiovascular diseases, and infectious diseases (26)(27)(28)(29). It is a highly sensitive but less specific hematologic parameter for measuring stress, the intensity of infection or inflammation, and the severity of illnesses of various origins (30). It has also been shown to play a strong predictive role in prognosis for patients with ICH and subarachnoid hemorrhage in previous studies (31,32). Similar to most related studies, our results indicate that NLR is an independent risk factor for 3-month functional outcomes measured by the mRS. Compared with NLR, SIRI has mainly been reported in the field of cancer. Recent studies of SIRI in aneurysmal subarachnoid hemorrhage showed that a higher SIRI level served as an independent indicator of unfavorable clinical outcomes (33,34). In our research, SIRI was superior to NLR in predicting 3-month functional outcomes and had significant advantages in predicting 1-month mortality. However, NLR did not serve as an independent risk factor for 1-month mortality in ICH patients in our study.
Monocytes are mononuclear myeloid cells that originate from the bone marrow and circulate within the bloodstream (35). Like neutrophil recruitment, monocyte recruitment in the circulation and injured tissues is a key feature of inflammation (36). A previous study has shown that a higher monocyte count on admission is an independent predictor of HE (37). In a study by Walsh et al., absolute monocyte count was independently associated with 30-day case fatality in 240 adult ICH patients, which is consistent with their previous study and with our current study (38,39). In a study by Mackey et al., an elevated monocyte count was also an independent risk factor for 30-day case fatality (40). In the current study, we found that monocyte count served as an independent prognostic predictor of both 3-month functional outcome and 1-month mortality in the original cohort, and showed excellent predictive ability for 1-month mortality in the PSM cohort. Given the prognostic ability of monocytes in ICH patients, this could partly explain why combining monocyte count with NLR improves the ability to predict outcomes.
From another perspective, the prognostic capacity of the single components, neutrophil, lymphocyte, and monocyte counts, was less stable than that of SIRI according to the multivariate analyses, ROC analyses, and predictive models. In sum, a marker's ability to mirror the extent of inflammation corresponds to its ability to predict prognosis. For inflammatory markers derived from peripheral blood, diverse composite indices are worth exploring and easy to compute, and they might improve predictive ability in specific diseases.
In fact, inflammation is not only a prognostic indicator for ICH patients but also a crucial therapeutic target, based on the theory that cellular and molecular components of inflammation are involved in post-hemorrhagic secondary brain injury (41). Although developing specific therapeutic targets remains challenging, targets such as the NOD-like receptor family, pyrin domain-containing 3 (NLRP3), C-C chemokine receptor type 1 (CCR1), and Toll-like receptor 4 (TLR4) have proven effective for intervening in the progression of ICH-related inflammation (42)(43)(44).
There are several limitations to this study. First, follow-up blood tests at each follow-up time point were absent due to incomplete clinical data. For various reasons, it was inconvenient and difficult for some patients to have blood tests regularly, especially when they were not in the hospital. Second, the sample size was not large enough to be divided into training and validation cohorts for further verification. Third, more comprehensive prognostic patterns are needed to evaluate the prognosis of ICH patients in various respects, including cognitive function and quality of life. Fourth, not all patients were admitted to hospital within 24 h after onset; although these patients were in the minority, this might introduce unknown bias in the laboratory results. Fifth, some occult infections cannot be diagnosed at an early stage using clinical and laboratory criteria, which might also create bias. Finally, this analysis was conducted at a single institution; therefore, the results should be verified using multi-center data.
CONCLUSION
To our knowledge, this is the first study focusing on the prognostic significance of admission SIRI in patients with spontaneous ICH. In this study, SIRI was determined to be an independent predictive indicator in ICH patients for both 3-month functional outcomes and 1-month mortality. Furthermore, its prognostic predictive ability was better than that of NLR. In the near future, multi-center collaboration is needed to further verify these results and illuminate the underlying mechanism.
DATA AVAILABILITY STATEMENT
The datasets for this study are available from the corresponding author on reasonable request.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Sichuan University. The patients/participants provided their written informed consent to participate in this study.
"year": 2021,
"sha1": "f39375f04703a70c1f326392132ecf1be563a0da",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2021.718032/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f39375f04703a70c1f326392132ecf1be563a0da",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
On the Unhappiness of Software Developers
The happy-productive worker thesis states that happy workers are more productive. Recent research in software engineering supports the thesis, and the ideal of flourishing happiness among software developers is often expressed among industry practitioners. However, the literature suggests that a cost-effective way to foster happiness and productivity among workers could be to limit unhappiness. Psychological disorders such as job burnout and anxiety could also be reduced by limiting the negative experiences of software developers. Simultaneously, a baseline assessment of (un)happiness and knowledge about how developers experience it are missing. In this paper, we broaden the understanding of unhappiness among software developers in terms of (1) the software developer population distribution of (un)happiness, and (2) the causes of unhappiness while developing software. We conducted a large-scale quantitative and qualitative survey, incorporating a psychometrically validated instrument for measuring (un)happiness, with 2220 developers, yielding a rich and balanced sample of 1318 complete responses. Our results indicate that software developers are a slightly happy population, but the need for limiting the unhappiness of developers remains. We also identified 219 factors representing causes of unhappiness while developing software. Our results, which are available as open data, can act as guidelines for practitioners in management positions and developers in general for fostering happiness on the job. We suggest considering happiness in future studies of both human and technical aspects in software engineering.
INTRODUCTION
The need and importance of managing the individuals forming the software development workforce were identified early in software engineering research [41]. Management and people related challenges grow as the numbers of software companies and developers increase with the digitalization of existing businesses and startups founded on software from day one [59]. A practice that has emerged recently is to promote flourishing happiness among workers in order to enact the happy-productive worker thesis [72]. Notable Silicon Valley companies and influential startups are well known for their perks to developers [52]. Recognizing, managing, and improving the happiness of all stakeholders involved in producing software is essential to software company success [10].
A novel line of research belonging to behavioral software engineering [47] is emerging, focusing on the relationship between the happiness of developers and work-related constructs such as performance and productivity [19,20,32,33,35,54,56], and software quality [11,44]. The empirical evidence indicates that happy developers perform better than unhappy developers [34]. The studies so far, including those by the present authors, imply that managers and team leaders should attempt to foster developer happiness.
There is the other side of the coin, though. Diener [12] and Kahneman [43] have suggested that objective happiness¹ can be assessed by the difference between experienced positive affect and experienced negative affect. The happiness equation suggests that maximizing happiness can be achieved by maximizing positive experiences of individuals, minimizing their negative experiences, or both.

¹ We are using the more colloquial term happiness instead of subjective well-being throughout the paper as it has historical meaning to research in organizational behavior and psychology [24]. Furthermore, as our view of unhappiness contemplates it as the negative component of happiness, we interchange the two terms when dealing with quantifications of developers' (un)happiness.
Software developers are prone to share horror stories about their working experience on a daily basis [34]. Those in managerial positions should attempt to understand the nature and dynamics of unhappiness in the workplace to create programs for preventing dysfunctional responses among employees [68]. Psychological disorders such as stress and burnout could be reduced by analyzing the negative affective experiences of developers and turning them positive [51]. Furthermore, the voice of practitioners should be heard in software engineering research -software developers want their unhappiness to be voiced out [22,34]. For the previously stated reasons, there are calls to understand the benefits of limiting negative experiences on the job [3,12,24].
The current research on software developers' affective experience lacks a baseline estimation of the distribution of happiness among software developers, as well an understanding of the causes of unhappiness that would be based on a broad sample.
In this paper, we echo the previous calls and aim to broaden the understanding of unhappiness among software developers. Based on the existing literature, we set the following research questions (RQs). RQ1 What is the distribution of (un)happiness among software developers? RQ2 What are the experienced causes for unhappiness among software developers while developing software? To answer the RQs, we conducted a large-scale quantitative and qualitative survey of 2 220 software developers in which we asked them to voice out causes of happiness as well as unhappiness. We computed the population estimate of happiness, found 219 causes of unhappiness, and showed that the most prevalent causes of unhappiness are external to developers, suggesting that managers and team leaders have a realistic chance of influencing the happiness of software developers at work. We archived the list of causes as open data [31].
BACKGROUND AND RELATED WORK
What is happiness, and what does it mean to be happy or unhappy? Intuitively, we could associate this question to the sensing of an individual's affect. We begin by discussing affect, emotions, and moods.
Russell [61] has provided a widely agreed definition of affect as "a neurophysiological state that is consciously accessible as a simple, non-reflective feeling that is an integral blend of hedonic (pleasuredispleasure) and arousal (sleepy-activated) values" (p. 147). That is, affect is how we feel at any given point in time, about anything, and this feeling is expressed in how pleasant and activated our state of mind is. We have argued elsewhere [37] that there are several theories and definitions for emotions and moods. For clarity and brevity, we use Russell's [61] theory for the present paper, which considers affect as the atomic unit upon which moods and emotions can be constructed. We consider moods as prolonged, unattributed affect, and we consider emotions as interrelated events concerning a psychological object, i.e., an episodic process of perception of affect that is clearly bounded in time, in line with several other authors, e.g., [21,44].
From a hedonistic point of view, a blend of affect constitutes an individual's happiness [39]. Happiness is a sequence of experiential episodes [39] and being happy (unhappy) corresponds with the frequency of positive (negative) experiences [50]². Frequent positive (negative) episodes lead to feeling frequent positive (negative) affect, which in turn leads to happiness (unhappiness), represented by a positive (negative) affect balance [13]. In brief, unhappy individuals are those who experience negative affect more often than positive affect, which is a condition that can be detected by a negative affect balance [13,50].
Scale of Positive and Negative Experience
Recent studies have found several shortcomings in the Positive and Negative Affect Schedule (PANAS) [69], the most prominent measurement instrument for assessing happiness, in terms of its affect coverage [13,49,66] and neglect of cultural differences [49,67]. New scales have been developed that attempt to address PANAS' limitations. Diener et al. [13] have presented the Scale of Positive and Negative Experience (SPANE), a short scale that assesses the happiness of participants by asking them to report the frequency of their positive and negative experiences during the last four weeks. SPANE has been reported to be capable of measuring positive and negative affect (and happiness) regardless of the sources, mental activation level, or cultural context, and it captures affect from the entire affective spectrum [13,49]. Respondents are asked to report on their affect, expressed with adjectives that individuals recognize as describing emotions or moods, from the past four weeks in order to provide a balance between the sampling adequacy of affect and the accuracy of human memory to recall experiences [49], as well as to decrease the ambiguity of people's understanding of the scale itself [13].
SPANE has been validated to converge to other similar measurement instruments, including PANAS [13]. The scale provides good psychometric properties (validity and reliability) which were empirically demonstrated in several large-scale studies [6,13,16,42,49,63,64]. The scale has been proven consistent across full-time workers and students [63]. For these reasons (and for its brevity), we chose SPANE for the purpose of our research.
SPANE is a 12-item scale divided into two sub-scales of positive (SPANE-P) and negative (SPANE-N) experiences. The answers to the 12 items are given on a five-point scale ranging from 1 (very rarely or never) to 5 (very often or always). The SPANE-P and SPANE-N measures are the sum of the scores given to their respective six items, each ranging from 6 to 30. The two scores are further combined by subtracting SPANE-N from SPANE-P, resulting in the Affect Balance (SPANE-B) score. SPANE-B is an indicator of the happiness caused by how often positive and negative affect has been felt by a participant. SPANE-B ranges from −24 (completely negative) to +24 (completely positive).
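As a concrete illustration of the scoring just described, here is a minimal R sketch that computes SPANE-P, SPANE-N, and SPANE-B from a respondent's twelve item scores; the function and argument names are ours for illustration.

```r
# Score SPANE from six positive-item and six negative-item responses (each 1-5).
spane_score <- function(pos_items, neg_items) {
  stopifnot(length(pos_items) == 6, length(neg_items) == 6,
            all(c(pos_items, neg_items) %in% 1:5))
  p <- sum(pos_items)                           # SPANE-P, range 6..30
  n <- sum(neg_items)                           # SPANE-N, range 6..30
  c(SPANE_P = p, SPANE_N = n, SPANE_B = p - n)  # SPANE-B, range -24..+24
}

spane_score(pos_items = c(4, 5, 3, 4, 4, 5), neg_items = c(2, 1, 2, 1, 2, 1))
# SPANE_P = 25, SPANE_N = 9, SPANE_B = 16 (a positive affect balance)
```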
Related Studies
Interest in studying the affect of software developers has risen considerably in the last five years, although we have just started to understand the tip of the iceberg [53]. To our knowledge, no studies have offered an estimation of the happiness distribution of developers, and only a small number of causes of developers' affect have been examined in a few studies. The present study addresses this research gap.

² Alternative views of happiness exist, e.g., the Aristotelian eudaimonia considers a person happy because (s)he conducts a satisfactory life full of quality [40]. We have also discussed the role of the centrality of affect in [35]. Current research in psychology supports the affect balance model as a valid approach to quantify happiness.
There are some initial indicators regarding developers' happiness. Generally speaking, studies indicate a positive relationship between the happiness of developers and their performance. Graziotin et al. [33] performed a quasi-experiment on the impact of affect on analytic problem solving and creative performance. The study itself was about consequences of happiness, thus not particularly related to the present article. Yet, Graziotin et al. observed that the sample distribution of happiness, measured using SPANE, was significantly greater than 0 (SPANE-B mean=7.58, 95% CI [5.29, 9.85]; median=9). The authors noted that the SPANE-B distribution did not resemble a normal distribution. However, the sample was very limited (N=42 BSc and MSc students of the same CS faculty); further exploration and validation were suggested based on the observations. We build on these initial observations in the present study.
Some studies have attempted to uncover issues related to affect using both qualitative and quantitative approaches with different degrees of automation. De Choudhury and Counts [9] investigated the expression of affect through the analysis of 204 000 micro-blogging posts from 22 000 unique users of a Fortune 500 software corporation. The sentiment analysis revealed that IT-related issues were often sources of frustration. Day-to-day demands, e.g., meetings, were also associated with negative affect.
Ford and Parnin [22] have explored frustration in software engineering through practitioner interviews. 67% of the 45 participants reported that frustration is a severe issue for them. As for the causes for such frustration, the authors categorized the responses as follows: not having a good mental model of the code (for the category "mapping behavior to cause"), learning curves of programming tools, too large task size, time required for adjusting to new projects, unavailability of resources (e.g., documentation, server availability, . . . ), perceived lack of programming experience, not fulfilling the estimated effort for perceived simple problems, fear of failure, internal hurdles and personal issues, limited time, and issues with peers.
Graziotin et al. [35] conducted a qualitative study for constructing an explanatory process theory of the impact of affect on development performance. The theory was constructed by coding data coming from interviews, communications, and observations of two software developers working on the same project for a period of 1.5 months. The theory was built upon the concepts of events, affect, focus, goals, and performance. The study theorized the construct of attractors, which are affective experiences that earn importance and priority to a developer's cognitive system. Attractors were theorized to have the biggest impact on development performance. Finally, the study suggested that interventions (e.g., facilitating reconciliation between developers who are angrily arguing) can mediate the intensity of existing negative affect and reduce their intensity and disruption.
Wrobel [70] conducted a survey with 49 developers, assessing the participants' emotions that were perceived to be those influencing their productivity. The results showed that positive affective states were perceived to be those enhancing development productivity.
Frustration was perceived as the most prevalent negative affect, as well as the one mostly deteriorating productivity.
Ortu et al. [56], Destefanis et al. [11], and Mäntylä et al. [51] conducted a series of mining software repositories studies to understand how affect, emotions, and politeness are related to software quality issues. The studies showed that happiness in terms of frequent positive affect and positive emotions was associated with shorter issue fixing time. Issue priority was found to be associated with the arousal mental activation level, which is often associated with anxiety and burnout.
METHOD
We employed a mixed research method, comprising both elements of quantitative and qualitative research [7]. In particular, we opted to approach RQ1 with a quantitative investigation, while we addressed RQ2 with a mostly qualitative inquiry. As our aim was to learn from a large number of individuals belonging to a particular population, we considered a survey, implemented as an online questionnaire, to be the most appropriate instrument [17].
Sampling Strategy
We consider a software developer to be a person concerned with any aspect of the software construction process (such as research, analysis, design, programming, testing, or management activities), for any purpose, including work, study, hobby, or passion. Generalizing to the population of software developers is a challenge, because we do not accurately know how many software developers exist in the world or how to reach them. We relied on the GitHub social coding community as a source that fits our purpose of generalization well enough, in line with several previous studies (e.g., [28]). GitHub is the largest social coding community, with more than 30 million visitors each month [15], many of whom are software developers working on open source and proprietary software, ranging from solo work to companies and communities.
To obtain the contact information of GitHub developers, we retrieved related data through the GitHub Archive [38], which stores the collections of public events occurring in GitHub. In order to ensure a sample of high quality and variety, we obtained six months of archive data, from March 1 to September 30, 2014. We extracted unique entries that provided an e-mail address. We gathered 456 283 entries of contact data, which included email address, given name, company, and location of the developers, and the repository name related to the public activity. 41.7% of the data provided an entry for the company field.
Survey Design
We collected data and enhanced the survey in four rounds. During the first three rounds, we piloted the questionnaire design with a limited random sample of contact data (N=100 in each round). We discarded the pilots' contact data and questionnaire data from the final survey as many guidelines recommend (e.g., [48]).
The three pilot rounds allowed us to estimate and improve the participation and response rate of the study through a refinement of the questions and invitation e-mail. Through the pilot tests, we understood that we could expect a high percentage of delivered e-mails (between 97% and 98%) and that we could expect a low participation rate (between 2% and 4%). The participation rate increased in each run.
The questionnaire used in the final survey is composed of (1) questions to collect demographic information, (2) one question carrying SPANE's 12 scale items, and (3) two open-ended questions on the causes of happiness and unhappiness (in terms of SPANE-P and SPANE-N components, see Section 2.1) while developing software. The questionnaire also provides an open-ended field for further comments and a field for optionally leaving an e-mail address for possible follow-ups 3 . The questionnaire is available in an archived online appendix [31].
For estimating the sample size required to make inferences regarding our population of developers, we evaluated the sample size estimations by Bartlett et al. [2], Krejcie & Morgan [46], Cochran [4], and Yamane [71] for a priori statistical power. None of these authors have proposed sample size estimations for open-ended, qualitative entries. Therefore, we opted for the most conservative settings, i.e., Yamane's simplified formula [71] with α = .01, assuming a 2% response rate; a sketch of the formula follows below. Our calculations resulted in a desirable sample of N=664 complete responses, which we expected to reach with 33 200 requests under a 2% response rate.
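Yamane's simplified formula is n = N / (1 + N·e²), where N is the population size and e the desired level of precision. The R sketch below applies it to the contact-list size reported earlier; the values of e shown are illustrative only and are not the exact settings behind the paper's N=664 figure.

```r
# Yamane's simplified sample-size formula: n = N / (1 + N * e^2).
yamane <- function(N, e) ceiling(N / (1 + N * e^2))

N <- 456283         # contact entries gathered from the GitHub Archive
yamane(N, e = 0.05) # ~400 complete responses needed at 5% precision
yamane(N, e = 0.02) # ~2487 at 2% precision (illustrative values of e)
```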
We designed and published the questionnaire with eSurvey Creator and invited the participants via e-mail. We did not share the survey elsewhere.
Analysis and Data Cleaning
In order to answer RQ1, we needed to describe the distribution of the SPANE-B happiness score (see Section 2.1) and to provide an estimation for the population mean and median. We expected to employ non-parametric methods for the mean and median estimation, given our earlier study [33] and the information obtained from the three pilot runs.
In order to answer RQ2, we developed a coding strategy for the open-ended questions. We applied open coding, axial coding, and selective coding, as described by Corbin and Strauss [5], as follows. The first three authors individually open coded the same set of 50 random responses using a line-by-line strategy. We met through online video calls in order to compare the coding structure and strategy to reach an agreement, that is, a shared axial coding mechanism. Our unit of observation and analysis, the individual developer, was the starting point. We framed our construction of theoretical categories based on Curtis et al. [8] model of constructs that are internal or external, with the internal group being the developer's own being and the external group having the artifact, process, and people as subcategories. Then, we divided the responses and proceeded to open code them (each coder coded a third of the answers). We held a weekly meeting to follow progress and further discuss the coding structure and strategy. Finally, we merged the codes and performed a final selective coding round. We used NVIVO 11 for the entire qualitative task. We provide a working example of the various coding phases in the online appendix [31].
Data cleaning happened during all stages of the study. We adopted common data cleaning strategies, such as outlier analysis (for example, we examined birth years after 2000 and excluded two 1-year old participants), and excluding participants who were not in the intended population or put random text in the text fields. We list the data inclusion and exclusion criteria in the archived online appendix [31]. We used R [58] scripts for supporting and automating the data cleaning, data exploration, and data analysis steps.
RESULTS
This section details the results of our investigation. We first provide descriptive statistics on the sample demographics. Then, we provide the results related to each research question.
Descriptive Statistics
Following our conservative strategy, we randomly sampled 33 200 entries from our contact list. Our sending tool delivered 31 643 (96.6%) invitation e-mails; the remaining addresses were either malformed or bounced. 2 220 individuals participated (7% response rate). 1 908 participants provided valid data for answering RQ1 (86%), while 1 318 provided valid data for answering RQ2 (59%). Based on the pilots, we anticipated that some participants would leave the open-ended questions unanswered. Our sampling strategy paid off: we exceeded the required threshold (N=664) for generalizing. The rich sample also offered us the opportunity to stay conservative in analyzing the data. We could minimize bias by retaining only the data provided by participants who completed the entire questionnaire. That is, we kept N=1318 for answering all our RQs.
A total of 993 (75%) of the participants were professional software developers, 15% of the sample were students, and 8% were in other roles (such as manager, CEO, CTO, and academic researcher). The remaining participants were non-employed and not students.
The participants declared an average of 8.29 years (sd=7.77) of experience working in software development, with a median of 5 years. 240 participants developed software as a hobby, passion, or volunteer work without pay; 161 participants worked as freelancers or consultants in companies; 105 participants were a one-person company or self-employed in a startup; and 812 were employed in a company or a public organization. The reported size of the participants' company or organization also varied considerably, with 13.3% of the participants working alone, 33.6% in small entities (2-10 persons), 34.4% in medium entities, and 18.7% in large to very large entities (250-5000 and more).
Regarding the qualitative data, we reached a total of 590 codes in the initial coding phases. After the merge and cleanup phases, we obtained 219 codes that were referenced 2 280 times in text (average of 10.41 references per code).
RQ1-What is the Distribution of (Un)
Happiness Among Software Developers?
Our sample of N=1318 participants had a SPANE-B (see Section 2.1) mean score of 9.05 (sd=6.76), a median score of 10, and a range of [-16, 24]. We followed the recent suggestion by Kitchenham et al. [45] to use a kernel density plot instead of a boxplot. The plot of the SPANE-B score is shown in Figure 1 and indicates a likely non-normal distribution of the data, as expected. A description of the SPANE-B score by the psych R package [60] showed a skew of -0.53 and a kurtosis of 0.46, indicating a slightly asymmetrical distribution with a long tail to the left that is flatter than a standard normal distribution. Strong evidence for non-normality of the data was provided by a Shapiro-Wilk test (W = 0.98, p < 0.0001).
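These distributional checks can be reproduced with a few lines of R, as sketched below; spane_b stands for the vector of 1318 SPANE-B scores, which is assumed rather than reproduced here.

```r
library(psych)  # describe() reports skew and kurtosis among other statistics

# spane_b: hypothetical numeric vector holding the 1318 SPANE-B scores.
plot(density(spane_b), main = "Kernel density of SPANE-B")
describe(spane_b)      # mean, sd, median, skew, kurtosis, ...
shapiro.test(spane_b)  # Shapiro-Wilk normality test (valid for 3 <= n <= 5000)
```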
We estimated the population's true mean for SPANE-B via bootstrapping as 9.05 (2000 replications, 95% CI [8.69, 9.43]). We estimated the population's true median for SPANE-B via bootstrapping as 10 (2000 replications, 95% CI [9.51, 10.71]). We show in the online appendix [31] that estimating those values with the expanded sample of N=1908 would yield similar results.
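The bootstrap estimates can likewise be reproduced with the boot package; this sketch uses 2000 replications and percentile intervals to match the text (the seed is our assumption, as none is reported).

```r
library(boot)
set.seed(1)  # assumed for reproducibility; the paper reports no seed

boot_mean   <- boot(spane_b, function(x, i) mean(x[i]),   R = 2000)
boot_median <- boot(spane_b, function(x, i) median(x[i]), R = 2000)

boot.ci(boot_mean,   type = "perc")  # 95% percentile CI for the mean
boot.ci(boot_median, type = "perc")  # 95% percentile CI for the median
```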
RQ2-What are the Experienced Causes for
Unhappiness Among Software Developers While Developing Software?
We plotted the demographic data gathered from the questionnaire, and compared it with the SPANE-B value. None of the quantitative data plots indicated a relationship with the happiness of developers. This includes variables such as gender, age, nationality, working status, company size, percentage of working time dedicated to developing software, and monthly income. Thus, we conclude that they are not the primary determinants of unhappiness. This further confirmed our original research design to use qualitative data to explore the causes. We identified 219 causes of unhappiness, which are grouped into 18 categories and sub-categories (including the top category of causes of unhappiness while developing software).
We report here only the main emerged categories and top 10 factors.
Main Categories.
The main types of factors causing unhappiness among software developers are organized under two main categories. The causes of unhappiness internal to individual developers, directly related to their personal states, or originated by their own behaviors, are classified under the developer's own being category. These occurred a total of 437 times. In contrast, external causes are the causes of unhappiness external to individual developers, by which developers are affected but have little or no control of. The total occurrence of external causes is 1 843 times. This indicates that developers are much more prone to experiencing and recalling externally-provoked unhappy feelings than internally generated ones.
The developer's own being (i.e., internal causes) category contains 22 internal factors. These factors do not demonstrate a clear structure. This to some extent reflects the versatile states of mind of developers and the feelings they could have while they develop software.
The factors in the external causes category are further divided into the sub-categories shown in Table 1.

People-related factors: the external causes of unhappiness related or attributable to the people whom developers interact with, and to their characteristics or behaviors. These occurred 416 times and are further divided based on the roles of the people.

Artifact and working with artifact: the external causes of unhappiness related to artifacts in software development projects and developers' interactions with them. These occurred 788 times and are further grouped based on the types of artifacts that developers are dealing with.

Process-related factors: the external causes of unhappiness related to issues in the management of the software development process and day-to-day work. These occurred 544 times.

Other causes: the external causes of unhappiness not classified under any of the above-mentioned categories. These non-specific causes occurred 95 times.
10 Most Significant Causes of Unhappiness.
We extracted a list of 10 factors that occurred most often in the survey responses as the causes of unhappiness. They are listed in Table 2.
Three of these top 10 causes are part of software developer's own being. Being stuck in problem solving is by far the most significant among the three factors. Software development is essentially composed of problem-solving activities, often intellectually demanding. It is common that developers may be stuck in coding, debugging and all sorts of other tasks. As one respondent commented: "I feel negative when I get really stuck on something and cannot get around it". Another respondent elaborated: "I also thought of situations where I'm debugging some issue with the code and I can't figure out why it isn't working -when it seems like everything should work, but it just doesn't. This is definitely one of the biggest gumption traps I encounter". Another significant internal cause is a feeling of inadequate skills or knowledge, as shown in this response: "Once I encountered hashmap, and I couldn't understand it while I knew it is important. I felt frustrated and afraid". The inadequate feeling can be manifested as feeling unskilled in certain aspects of the work, feeling under-qualified with respect to the task given, or feeling a lack of familiarity with tools, languages, frameworks, or development methods that are used in the projects.
The third significant cause related to the developer's own being is not related to work: personal issues. Software developers are not living in a vacuum while working on their software projects, and non-work-related, personal or private issues may often affect them and cause unhappy feelings during work. "I never feel 100% productive when there's something from my private life bugging me. No, I'm not a robot, I'm human, and can't forget the rest of the world when I start IntelliJ up". Among the non-work related issues, family related issues are most frequently mentioned: "Family related issues has huge impact on my feeling while working, I feel down and can't achieve the goals I set for my work day".
The seven remaining most significant factors are all external. Among people-related causes, the under-performance of colleagues, either team members, colleagues in other teams, or external collaborators, most often make developers experience negative feelings and affect their work consequently. An illustrative episode is reported in this response: "Last time I felt angry when a senior developer again committed an update ruining a beautiful generic solution I've made before. It was easy to refactor it, but his ignorance or routine annoyed me". Software development is often teamwork. It is frequently frustrating to a developer to see that other colleagues do not spend time to keep up to speed with modern development technology and practices.
It comes as no surprise that bad code quality and coding practices make developers unhappy. In almost all cases, bad code was a cause of unhappiness if it was written by other developers: "Sad/angry when reading others' code that I have to use and I realize it is full of bugs"; "having encountered a particularly bad (unreadable, poorly formatted, not commented at all, badly structured) piece of code written by another developer that I had to work on". Only a few participants reported unhappiness caused by "poorly written code (often by past-me)". That is, unhappiness from code written by the participants themselves was raised only when regretting past code.
Another significant factor related to code that makes developers feel bad is when they could not explain why the code is not working as it is supposed to (unexplained broken code): "When you haven't changed the code, and suddenly the project doesn't compile anymore. Worst feeling ever. (Afraid/sad/angry)".
Apart from code, issues in the technical infrastructure a software project relies on often contribute to negative feelings among developers, especially when it is supposed to support software development, but instead imposes constraints or limitations (imposed limitation on development). One respondent described this situation perfectly: "Angry happens quite often because tools, programming languages, etc. don't do as expected. Sometimes because they are buggy, sometimes there are some limitations in them (by design/by ignoring or not considering enough use cases, etc.), which makes one need to find work-arounds/mess-up otherwise clean code, repeat code unnecessarily etc".
Regarding the top significant causes related to the general software development process, the respondents consider that the high time pressure they feel, often generated by "unrealistic", "unjustified" and "crazy" deadlines, will almost surely push them into very unhappy states. A respondent described this situation vividly: "I remembered a day. I have a lot of phone call from my boss to done a project. in that situation time was running and project move slowly and phone every a minute ringed".
Contrasting the image of high time pressure, with hectic rushing towards deadlines, is working on mundane or repetitive tasks, which is another process-related factor that often causes negative feelings of developers. "Tedious", "boring", "dull", "monotonous", "trivial", "recurrent", etc., are the words the respondents used to describe the tasks that make them unhappy. "I tend to feel negative or bad when I am doing something that is not challenging, I instantly feel sleepy and bored", a respondent stated.
Bad decision making is yet another process-related factor that often leaves developers in an unhappy state. More than bad business decisions, developers are more often affected (emotionally as well) by bad technical decisions made by their superiors or peers. One example depicts such a scenario: "Generally negative emotions stem from board meetings where an executive or coworker makes an uninformed or ill-advised decision that causes a 'dirty' codebase change". Bad decision making is also perceived by developers if they are not involved in decision making processes.
DISCUSSION
Through our analysis to answer RQ1 (Section 4.2), we estimated the population mean SPANE-B score to be 9.05, indicating a happiness balance on the positive side (see Section 2.1). In terms of the norms reported in Diener et al. [13], this result is in the 65th percentile, indicating that the happiness balance is also higher than what could be expected in a larger human population. The various psychometric studies of SPANE report sample means but no confidence intervals, meaning that the best comparison possible is through means and standard deviations. Those studies have found mean SPANE-B scores above zero, but several score points lower than in our sample: 7.51 (sd=8.21) in a sample of men and 4.53 (sd=8.17) in a sample of women in Italy [6]; 6.69 (sd=6.88) in a sample of college students from five universities and colleges in the USA and one university in Singapore [13]; 6.66 (sd=8.18) in a large sample of more than 21 000 employees in the Chinese power industry [49]; 5.96 (sd=6.72) in a multicultural student sample at a large South African university [16]; 4.41 (sd=7.79) in a sample of full-time employees and 5.10 (sd=7.54) in a sample of university students, both in Portugal [63]; and 4.30 (sd=7.50) in a sample of Japanese college students [64].
Our findings about the higher-than-expected SPANE-B score confirm and reinforce our previous observations [33] that (1) software developers are a slightly happy population, and that (2) there is no evidence that the distribution of SPANE-B scores for the population of software developers should cover the full range of [−24, +24]. This does not mean that software developers are happy to the point that there is no need to intervene on their unhappiness. On the contrary, we have shown that unhappiness is present, caused by various factors and some of them could easily be prevented. Our observations and other studies show that unhappiness has a negative effect both for developers personally and on development outcomes. Furthermore, these results have implications for research, as outlined below.
For answering RQ2, we have shown a wide diversity of factors that cause unhappiness among developers, and their varying weights (Section 4.3). The causes of unhappiness that are external to developers, and thus more controllable by managers and team leaders, had an incidence rate roughly four times that of the factors belonging to the developer's own being. We expected that the majority of the causes of unhappiness would come from human-related considerations (416 references); however, technical factors from the artifact (788) and the process (544) dominate the unhappiness of developers, highlighting the importance of strategic architecture and workforce coordination.
Being stuck in problem solving and time pressure are the two most frequent causes of unhappiness, which corroborates the importance of recent research that attempts to understand them [35,51,54]. Lack of experience could explain the prevalence of the first category in some cases, but since software development is inherently about problem solving, and realistic projects include an element of problem solving and learning, lack of experience alone does not seem adequate to explain this result. It may be necessary to accept that software development comes with its share of difficult tasks that cannot be avoided. Psychological grit could be an important characteristic to train among software developers. Strategies both for coping with the negative feelings associated with being stuck and for systematically solving problems, in general and in specific scenarios, are called for.
Several top causes are related to the perception of inadequacy of the self and others, which encourages recent research activities on intervening on the affect of developers [35]. Finally, we see that factors related to information needs in terms of software quality and software construction are strong contributors to unhappiness among developers. This reinforces recent research activities on those aspects (e.g., [25]) and encourages proposed [29] research activities that attempt to merge affective reactions and information needs.
Limitations
We designed our survey with the stated aim of gaining understanding of the characterization of the unhappiness of developers and the causes of unhappiness in software development. We phrased the questions in our survey by following guidelines from the literature [7,55] and from our prior experience with the research topic [18][19][20][32][33][34][35][36][37]. We phrased the questions to avoid priming specific answers to the respondents. The validation of the questions was through (1) adopting a psychometrically validated measurement instrument for happiness [13], (2) limiting the remaining quantitative questions to a demographic nature, and (3) conducting three pilot runs. We discuss specific threats to validity below.
Internal Validity-Credibility.
With respect to the happiness measurement, as reported in Section 2.1, several large scale studies have found good psychometric properties (reliability and validity) for SPANE [6,13,42,49,63], and the instrument was empirically shown to be consistent across full-time workers and students [63] and memory recall of events [13,49].
In order to classify the causes of unhappiness of developers, we used a qualitative coding process. Whether causality can be inferred only by controlled experiments is a much-debated issue [7,14,27]. Several authors, e.g., [27], maintain that human-oriented research allows causality to be inferred from the experience of the participants through qualitative data analysis, provided that there is a strong methodology for data gathering and analysis. In this case, our aim was to uncover causes of unhappiness as experienced by developers themselves. Since we extracted the causes from firsthand reports, they should accurately represent the respondents' views. As far as possible, we have remained faithful to these views when categorizing the material. The chain of evidence from source material to results is fully documented and traceable (see the online appendix [31]). The ratio between reported internal and external causes may be affected by the respondents' ability to correctly attribute their unhappiness. We note that we claim no general relationship between any specific causes and unhappiness; only experienced causes of unhappiness are claimed.
A question-order effect [62] could have influenced the responses by setting their context. As the present study was conducted in the context of a larger study on both happiness and unhappiness, we randomized the order of appearance of the questions related to affectiveness, thus limiting a potential order effect.
Social desirability bias [26] may have influenced the answers in the sense that participants would attempt to appear in a positive light before the eyes of the enquirer. We limited the bias by informing the participants that the responses would be anonymous and evaluated in a statistical form, and addressing the ethical concerns of the study. In our view, the responses appear candid, indicating that participants have felt comfortable expressing their true views.
Generalizability-Transferability.
Our dataset of software developers using GitHub is limited with respect to representativeness of the average software developer. The data set used in this study (see Section 3.1) contains only accounts with public activity during a six-month period. However, it is likely that a significant portion of the inactive accounts are not of interest to this study, as we sought active developers.
The degree to which our conclusions are generalizable beyond the GitHub population may be limited. For instance, it is possible that the GitHub population is slightly younger than developers in general, and age may explain differences in the degree and nature of unhappiness. Also, the sample may be biased towards people that are more comfortable with displaying their personal performance in public, or face no other kinds of barriers to doing so (e.g., company policy). However, GitHub is a reliable source for obtaining research data, allowing replication of this study on the same or different populations. The GitHub community is large and diverse, with a claim of more than 30 million visitors each month [15], many developing open source and proprietary software, and ranging from solo work to companies and communities. Furthermore, as shown in Section 4.1, our sample is well balanced in terms of demographic characteristics, including participant role, age, experience, work type, company size, and student versus worker. By comparing confidence intervals, we did not observe significant differences in terms of the SPANE-B score when varying role (worker, student) or age. This further highlights the validity and reliability of the SPANE measurement instrument and the stability of our dataset.
Our sample is not evenly balanced in terms of gender, with males being in the vast majority. We believe, however, that our sample is representative to some extent in terms of gender as well, since males are overrepresented in software engineering jobs, likely due to gender bias [23,57,65]. However, our sample may be extreme in this respect; while exact data is difficult to obtain, some non-academic surveys have reported, e.g., 7.6%, 16%, and 20% females, but the numbers can depend on the definition of developer and the countries or cultures represented in the data. A possible explanation is that males are particularly overrepresented among GitHub developers, but more demographic data would be needed to ascertain this. In summary, we consider our sample to be large and diverse enough to warrant claims regarding software developers to the extent possible in a single study. Further replication is necessary to validate the findings and obtain details on demographic subgroups.
Recommendations for Practitioners
Our study has found a plethora of causes of unhappiness of developers that are of interest to practitioners regardless of their roles. We summarized the most prominent ones in the present paper, but practitioners could be interested in the complete list of factors and occurrences that is freely available online as open data [31].
Team members may be interested in the causes of unhappiness for enabling self-regulation and emotional capability mechanisms [1] for reducing personal and group unhappiness. Knowing what might cause unhappiness in the short and long term could encourage developers to be more considerate towards their peers. For example, it might be worth thinking twice about leaving others to clean up badly written code. For similar reasons, managers should carefully attempt to understand the unhappiness of developers using the present paper as support. Those in leadership positions should attempt to foster happiness by limiting unhappiness. Previous research (e.g., [11,32,33]) has shown that the benefits of fostering happiness among developers are substantial especially in terms of software development productivity and software quality. In a related paper on the consequences of unhappiness of developers, we found that addressing unhappiness could limit damage on different aspects of software development, including developers, artifacts, and development processes [30]. We believe that the results of the present study will potentially enhance the working conditions of software developers. This is corroborated by previous research [35] suggesting that intervening on the affect of developers may yield large benefits at low cost. We note that such interventions should consider issues of privacy and cultural differences. Whether to intervene in issues outside the work context is an open question, with possible legal constraints.
Furthermore, the vast majority of the causes of unhappiness are of external type. Since external causes may be easier to influence than internal causes, and since influencing them will impact several developers rather than only one at a time, this suggests that there are plenty of opportunities to improve conditions for developers in practice.
Implications for Researchers
We believe that the results of the present work can spawn several important future research directions. A limited set of the found causes of unhappiness has been investigated previously in the software engineering literature. However, while the previous work offers valuable results, it appears limited either because of being framed too generally in psychology research -resulting in findings regarding general job performance settings -or due to a narrow focus on a single emotion (e.g., frustration). The framing offered by the present study sheds new light on these previous studies by considering them in terms of happiness and affect. Here, we suggest three implications for research that we believe are of high importance and priority.
Our result regarding the distribution of happiness among developers suggests that happiness -in terms of the SPANE instrument score -is centered around 9.05, higher than what may be expected based on other studies using the instrument. Our question for future research is to understand whether a) a higher relativity should be embraced when analyzing the affect of developers and its impact on software engineering outcomes, or b) developers require tailored measurement instruments for their happiness as if they are a special population. Validating the score through replication, and, if it is found to be stable, investigating the reasons for it being higher than in several other populations, are important aims for future research.
As reported in the previous section, most causes of unhappiness are of external type and they may be easier to influence than internal causes. We see that much research is needed in order to understand the external causes and how to limit them. Further understanding of the underlying reasons for the ratio between external and internal causes is also needed.
Finally, the present study highlights how studies of human aspects in software engineering are important for the empirical understanding of how software development can be improved. Many questions in software engineering research require approaches from behavioral and social sciences; we perceive a need in academic discourse to reflect on how software engineering research can be characterized and conducted in terms of such paradigms.
Software engineering studies on human factors often call for further human aspects studies. Yet, we believe that the present study calls for much technical research as well, because the largest source of unhappiness among software developers is related to artifacts and working with artifacts. One example is related to debugging and bug fixing, as they appear often among the causes of unhappiness. This suggests that much research is needed for supporting humans in the maintenance of software, e.g., in terms of information needs and mechanisms for strategic coordination of the workforce and the software architecture. Furthermore, emotional support for the sometimes frustrating and tedious work of software maintenance might increase the quality of results.
CONCLUSION
In this paper, we presented a mixed method large-scale survey (1 318 complete and valid responses) to broaden the understanding of unhappiness among software developers. Our key contributions are as follows, and are publicly archived as open access and open data [31]: (C1) An estimate of the distribution of (un)happiness among software developers. (C2) An analysis of the experienced causes for unhappiness among software developers while developing software.
Our results show that software developers are a slightly happy population. The consequences of that result need to be explored in future studies. Nevertheless, the results do not remove the need for limiting the unhappiness of developers, who have repeatedly asked to be given a voice through research and in the design of studies.
The results of our study have also highlighted 219 factors that cause unhappiness while developing software. These should be further explored in future research and used as guidelines by practitioners in management positions and by developers in general for fostering happiness on the job. We also call for replications of the study. | 2017-03-24T07:33:57.869Z | 2017-03-15T00:00:00.000 | {
"year": 2017,
"sha1": "e408fdce4a1043d2bea5338a9acc70eb7c2fd35a",
"oa_license": null,
"oa_url": "https://helda.helsinki.fi/bitstream/10138/307671/1/1703.04993.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e408fdce4a1043d2bea5338a9acc70eb7c2fd35a",
"s2fieldsofstudy": [
"Computer Science",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Computer Science"
]
} |
53404796 | pes2o/s2orc | v3-fos-license | Influence of Global Solar Radiation on Indoor Environment : Experimental Study of Internal Temperature Distribution in Two Test Cells with Different Roof Systems
This work is part of a large experimental study on the distribution of internal temperatures in two similar test cells with different roof systems. The main goal of this paper is to present results from an experimental field study to determine the influence of solar radiation on the internal environmental conditions under different roof systems. Dry bulb temperature and internal surface temperatures were measured in two test cells with different roof systems (green roof and conventional ceramic roof). Their thermal performances were compared on days dominated by different air masses, based on a dynamic climatic approach. This research was based on the spatial and temporal approaches of dynamic climatology, with the climatic regime of the city of Itirapina, São Paulo State, analysed as representative episodes. Climatic data were provided by an automatic weather station and verified by satellite imagery, and the internal temperatures of the cells were collected by thermocouples installed on the surfaces of ceilings, floors and walls, and suspended inside the buildings. The results indicate that solar radiation is mainly responsible for the large variations in temperature and their impact on indoor environments, since the internal temperatures differed greatly between the two days of the experiment. This refutes the notion that the outside temperature is responsible for daily variations in temperature inside buildings.
Introduction
Architecture has a fundamental role in creating built environments, and the relationship between buildings and their surrounding environment is a determining factor in the architectural design process, following housing standards determined by the needs of individuals, particularly with respect to human comfort based on the principles of natural conditioning [1]. However, the widespread deployment of building typologies needs to be undertaken with caution. Morillón [2] discussed the need for climatic adaptation of designs rather than imposing an "ideal model" for all buildings in different regions. In this sense, the appreciation of the design stage becomes a preponderant consideration, which will allow the adoption of solutions for an architecture that increasingly integrates technology and environment within a particular environmental, cultural and socioeconomic context [3].
The logical process of modern construction is to work with natural forces not against them, in order to take advantage of their potential to inform the design of buildings more adapted for human comfort [4], also taking into account the climate conditioning factors (topography, geographic location, vegetation cover, etc.), which can influence the orientation of the project, the volumetric design of the building and the selection of construction materials, with the aim of designing a built environment that is most appropriate for its users.
The physical interface between the natural and built environments has been studied by research scholars, who clearly reaffirm the importance of architecture in the interaction between these two aspects, with the goal of creating comfortable and functional spaces for users. For Egan [5], thermal comfort is conditioned primarily by the occupants' activities and by the energy dissipated as heat by those activities and by the equipment used within indoor environments; he proposes comfort zones based on criteria of internal temperature and relative humidity. As a tool for the analysis of human comfort, the authors use climate maps to determine the volumetric design of buildings according to the region, and suggest avoiding solar radiation.
In architectural projects, two aspects should be studied and evaluated carefully, according to the region and the climatic rhythm of the seasons: the sun and the wind. For colder regions, for example, the project must seek the maximum utilization of solar radiation, as opposed to warmer regions, where it is necessary to minimize direct sunlight exposure, according to the apparent path of the sun. In the latter situation, different cultures have used shading devices to control solar input to the indoor environment, but their efficiency depends directly on the building design [6]. Aroztegui [7] previously suggested limiting the consideration of climatic variables during the design phase to defining the minimum requirements for thermal comfort. In another study, the same author emphasizes the importance of the design phase in decision making related to climate adaptation, in terms of seeking the best thermal performance of the building [8].
There is growing concern about the need to adopt more conscious forms of construction, which seek environmental compliance, improved energy efficiency in buildings, and therefore reduce the use of natural resources, while achieving better economic performance and user satisfaction. In this sense, considering the thermal performance and comfort, aligned to improved energy efficiency within the concept of sustainability, the architectural design must address the following issues during its development: orientation, prevailing winds, the apparent path of the sun and routine activities inside buildings. The emphasis should also be on geometry and spatial distribution of these spaces, and environmental characteristics around the building, such as vegetation, the presence of water bodies, etc. [3]. In several countries, including Brazil, numerous studies have attempted to generalize recommendations for architectural design, aimed at improving passive thermal conditioning systems [9].
Among the many environmental factors that interact with the built environment, this paper aims to show experimentally that the primary influence on thermal conditions within buildings is solar radiation, since it triggers all the other processes such as heat exchange, change in humidity and air circulation.
This paper aims to highlight the importance of basic knowledge of the interactions between environment and buildings, which will help design a project more appropriate to the local climate.
The results of this work are a complementary part of the study on the distribution of internal temperatures in two test cells, already published by Seixas and Vecchia [10].
Methodology
This article has an investigative nature, since it conducts thermal analyses of the performance of two test cells with distinct roof systems on days representing two differing heating scenarios: a heat situation, and a cooler day representing the domain of the polar Atlantic mass. Data were collected for internal air temperature or DBT (dry bulb temperature) and IST (internal surface temperature) of the ceiling, walls and floor of the experimental cells. This research was based on the concepts of dynamic climatology, defining the typical days for experimental analysis of the results. For dynamic climatology, the succession of types of weather is a result of the movement of air masses, specifically the polar masses, which allows the identification of the weather according to its origin, trajectory and dynamic properties. The air mass concept is not definite, because the atmosphere is not divided into clearly bounded parcels; it is a didactic representation of a portion of air that, for a period, retains the properties of its source region, such as temperature, humidity and pressure [11]. The temperatures were measured by thermocouples installed in two similar test cells, one with a green roof and the other with a conventional ceramic roof, and the climatic variables for the experimental period were recorded by an automatic weather station at the experimental field in the region of São Carlos, São Paulo state.
Localization of the Experimental Field and Automatic Weather Station
The study was conducted at the climatological station of a dam at Itirapina (São Carlos region), São Paulo state (longitude 43°57′38″). The conventional test cell has a ceramic roof with a slope of 26%, supported over a horizontal ceiling (Fig. 1b). Type-T thermocouples are resistant to corrosion in humid environments and are suitable for measurements of air temperature (operating range between -270 °C and 400 °C; oxidation in certain environments occurs only above 370 °C). This type of thermocouple comprises a positive thermoelement (Cu 100%) and a negative thermoelement of Cu55%Ni45% (constantan). The resulting emf (electromotive force) ranges between -6.258 mV and 20.872 mV. The accuracy of the thermocouples is significant, i.e., the temperature error ranges between ±0.1 °C and ±0.2 °C when the thermocouples are in perfect condition for use [14]. Although the experimental measurements were made with a precision of hundredths of a unit, we chose to present rounded numbers, according to the "theory of errors" [15], for a more realistic representation of the uncertainty inherent in real-world data collection. The climate variables were collected and stored by an automatic weather station from Campbell Scientific Inc. Other equipment was necessary to keep the automatic station running, such as a rechargeable 12 V battery, a solar panel and a CR10X datalogger, dedicated to and configured for the needs of the station. Data collection for the test cells and the automatic weather station was programmed with Campbell's PC200W software, which also handled the connection with the dataloggers.
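As an illustration of this rounding convention, the following minimal Python sketch (the helper function is ours, not part of the original data pipeline) rounds a measured value to the nearest 0.5 °C, the resolution at which temperatures are reported in the Results section:

def round_half_degree(t_celsius):
    # Round to the nearest 0.5 degree C, a resolution consistent with the
    # 0.1-0.2 degree C thermocouple accuracy discussed above.
    return round(t_celsius * 2) / 2

print(round_half_degree(28.47))  # -> 28.5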
The thermocouples were calibrated by placing them in a container with ice to check the temperature before their installation in the test cells, and were monitored periodically via a digital infrared thermometer with laser sight during the period of data collection.
All measurements in the test cells were performed with doors and windows closed in order to eliminate the influence of airflow.
Installation of Temperature Sensors
To measure DBT, the thermocouples were suspended at the centre of the cells, 1.70 m above the floor. To record the surface temperatures of the surroundings, the sensors were placed at the geometric centre of the ceiling and floor plans and on the axis of each wall, also 1.70 m above the floor, according to Fig. 2.
In each test cell, six sensors for IST data acquisition were placed in small holes in the surfaces and covered with thermal grease. The DBT sensor was housed in a shelter made of PVC pipe (white colour, length 0.30 m, 4" diameter) surrounded by a blanket of plastic with a metallized surface (foil) for better insulation of the thermocouple.
Climatic Analysis of the Data Series
According to Monteiro [16], the climate of central São Paulo state is controlled by equatorial and tropical air masses, resulting in two distinct periods: a dry season with warm and dry winter, between April and September; and a rainy season with hot and humid summer, from October to March. In the dry season, the tropical Atlantic air mass and the polar Atlantic mass predominate, and this season is characterized by low rainfall, sparse cloud cover, low relative humidity and lower average temperature than the rainy season. The rainy season is dominated by the equatorial continental mass, and has higher average temperatures with abundant precipitation and high relative humidity.
In this work, the climatic regime of Itirapina was analysed as representative episodes, according to Vecchia's [17] adaptation of Monteiro's [18] definition of weather types. This comprises two basic steps: the pre-front (the beginning of the process), characterised by the foreshadowing and advancement of the polar Atlantic mass; and the post-front (the final step of this process), represented by the domain and transition or tropicalization phases of the polar air mass. From the recognition of climatic events recorded during the study, through analysis of meteorological variables and confirmation via satellite images, two typical experimental days were extracted for evaluating the thermal performance of the test cells.
Data were collected from January to April 2013. The climatic episode recorded in March was selected to represent two typical experimental days: one represented heat, i.e., maximum solar radiation and clear sky without clouds, according to reference values from the Climatological Normals 1960-1991 [19]; the other represented conditions for the domain of the polar Atlantic mass, characterised by lower outdoor air temperature and greater cloud cover and relative humidity. These representative days were compared in order to determine the influence of solar radiation within the built environments.
Results and Discussion
March 4 (Julian day 63) was taken as representing the heat situation for the analysis of thermal performance between the green roof and the conventional test cell. This day was chosen due to its remarkable warmth, exceeding the 27 °C mean maximum temperature for the São Carlos region [19]. The temperature range for this day was 14 °C (minimum 18 °C, maximum 32 °C). The sky was clear, with global solar radiation reaching 779 W/m² (Fig. 3a). March 19 (Julian day 78) was chosen as the typical experimental day for the polar air mass domain. The temperature range for this day was 5 °C (minimum 15.5 °C, maximum 20.5 °C). It showed lower global solar radiation (256.5 W/m²), increasing relative humidity, and extensive cloud cover but no rain (Fig. 3b). The satellite images for the Brazilian southeast region were provided by the National Institute for Space Research [20]. A complete analysis for the period of collected data can be found in Ref. [10].
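As an illustration of this day-selection procedure, the Python sketch below filters daily summaries from the weather station for candidate typical days; the file name, column names and thresholds are assumptions for illustration, not the authors' actual pipeline:

import pandas as pd

# Hypothetical daily summaries from the automatic weather station:
# columns: date, t_min, t_max (degrees C), rad_max (W/m2)
daily = pd.read_csv("station_daily_2013.csv", parse_dates=["date"])
daily["t_range"] = daily["t_max"] - daily["t_min"]

# Heat episode: clear sky, strong radiation, wide thermal range
heat_days = daily[(daily["rad_max"] > 750) & (daily["t_range"] > 12)]

# Polar Atlantic mass domain: weak radiation, narrow thermal range
polar_days = daily[(daily["rad_max"] < 300) & (daily["t_range"] < 6)]

print(heat_days[["date", "t_max", "rad_max"]])
print(polar_days[["date", "t_max", "rad_max"]])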
Tables 1 and 2 and Fig. 4 show the results for the test cell with green roof.
To help visualise the data presented in Tables 1 and 2, a perspective diagram was prepared from the volumetric data of the cell with green roof, considering only the interior in order to facilitate understanding of the image, with the sensors and their respective maximum and minimum temperatures for both experimental days (Figs. 5a and 5b).
For March 4, 2013, the north and west walls showed the highest maximum temperatures (30.5 °C), followed by the east wall and the dry bulb sensor DBT 04 (30 °C). The lowest wall temperature was recorded by the sensor installed on the south surface (29.5 °C), due to the apparent path of the sun. The lowest maximum temperature was recorded by the ceiling sensor (IST 14). At approximately 28.5 °C, this was 1.5 °C cooler than the value recorded by DBT 04. This finding shows that internal temperature is mainly influenced by the surfaces that transmit more heat, which raises doubts about the applicability of the calculation of mean radiant temperature, since the value obtained has no physical meaning.
In the case of minimum temperatures, all walls recorded equal values (20.5 °C), and the highest minimum temperature was recorded by the ceiling sensor (IST 14), which demonstrates the better performance of the green roof in relation to night-time heat loss. The heat exchange process is slowed by the insulating action of the green roof, due to its thermophysical constitution, its thermal mass and resistance, the shading provided by the grass, and other beneficial thermal effects characteristic of this type of roof system.
On March 19, 2013, all sensors showed similar maximum and minimum temperatures, as illustrated in Fig. 5b. This was attributed to the predominance of the main meteorological conditions imposed by the polar Atlantic mass, i.e., low incidence of solar radiation due to increased cloud cover, falling external air temperature, and increased relative humidity.
To examine the findings for the test cell with conventional ceramic roof, Tables 3 and 4 and Fig. 6 show comparisons between the typical experimental days. These data are also presented in Figs. 7a and 7b, which provide better visualization of the data.
In the analysis for March 4, 2013, the maximum temperatures recorded at the walls, floor and dry bulb followed the same pattern identified in the green roof cell, except for the ceiling. In the conventional cell, the IST 14 sensor showed a maximum temperature of 30.5 °C, which is approximately 2 °C higher than that of the ceiling sensor of the cell with green roof (28.5 °C). This temperature differential was limited by the design of the conventional cell, which has an attic with permanent ventilation. This helps to reduce the internal surface temperature of the ceiling in the conventional cell. The minimum temperatures were approximately equal (between 20 °C and 21 °C), except for the floor sensor, which showed a minimum of 22 °C.
For March 19, 2013, the conventional cell recorded similar maximum and minimum temperatures for all sensors, similar to the results obtained for the test cell with green roof.
Comparing the two test cells for the typical heat situation, the maximum and minimum temperatures were nearly equal for all sensors, except the ceiling sensors (IST 14), which recorded a lower maximum temperature in the cell with green roof. However, on the cooler experimental day, both test cells had identical thermal performance. This finding demonstrates the important influence of the incidence of global solar radiation on the internal environment.
Conclusions
From the analyses, it is evident that the solar radiation incident on surfaces influences both the external air temperature and the interior temperature, since the day representing the polar mass domain showed a thermal range closest to that of the internal sensors, except for the floor sensor, which presented the lowest thermal range on both experimental days. Comparing data from the two experimental days, it can be concluded that solar radiation is the determining factor of the thermal conditions in any environment. This refutes the notion that external temperature is responsible for daily temperature fluctuations within buildings. Another important conclusion of these analyses is that the green roof ensured the best performance on both experimental days. Therefore, this work will contribute significantly to the future application of dynamic climatology to the built environment. However, it is important to recognize that thermal analysis is only one of the stages involved in adapting a construction project to local conditions.
Thermocouple detail: type T copper-constantan (alloy of copper and nickel), 2 × 24 AWG (American wire gauge). The measurements, at intervals of 30 min, were recorded and stored by a CR10X datalogger. The sampling interval ensured a sufficient data series for the microclimatic-scale analyses conducted in this study.
Fig. 2 (a) Schematic section of the green roof test cell; (b) schematic section of the ceramic roof test cell. | 2018-10-16T22:22:30.372Z | 2015-01-30T00:00:00.000 | {
"year": 2015,
"sha1": "94057f2fe54de73381fd318f2e7fbe48da841f08",
"oa_license": "CCBYNC",
"oa_url": "http://www.davidpublisher.org/Public/uploads/Contribute/55069c2c6552f.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "94057f2fe54de73381fd318f2e7fbe48da841f08",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
119268035 | pes2o/s2orc | v3-fos-license | Kondo-lattice screening in a d-wave superconductor
We show that local moment screening in a Kondo lattice with d-wave superconducting conduction electrons is qualitatively different from the corresponding single Kondo impurity case. Despite the conduction-electron pseudogap, Kondo-lattice screening is stable if the gap amplitude obeys $\Delta<\sqrt{T_K D}$, in contrast to the single impurity condition $\Delta<T_K$ (where $T_K$ is the Kondo temperature for $\Delta = 0$ and $D$ is the bandwidth). Our theory explains the heavy electron behavior in the d-wave superconductor Nd_{2-x}Ce_{x}CuO_{4}.
I. INTRODUCTION
The physical properties of heavy-fermion metals are commonly attributed to the Kondo effect, which causes the hybridization of local 4-f and 5-f electrons with itinerant conduction electrons. The Kondo effect for a single magnetic ion in a metallic host is well understood 1 . In contrast, the physics of the Kondo lattice, with one magnetic ion per crystallographic unit cell, is among the most challenging problems in correlated electron systems. At the heart of this problem is the need for a deeper understanding of the stability of collective Kondo screening. Examples are the stability with respect to competing ordered states (relevant in the context of quantum criticality 2 ) or low conduction electron concentration (as discussed in the so-called exhaustion problem 3 ). In these cases, Kondo screening of the lattice is believed to be more fragile in comparison to the single-impurity case. In this paper, we analyze the Kondo lattice in a host with a d-wave conduction electron pseudogap 4 . We demonstrate that Kondo lattice screening is then significantly more robust than single impurity screening. The unexpected stabilization of the state with screened moments is a consequence of the coherency of the hybridized heavy Fermi liquid, i.e. it is a unique lattice effect. We believe that our results are of relevance for the observed large low temperature heat capacity and susceptibility of Nd 2−x Ce x CuO 4 , an electron-doped cuprate superconductor 5 .
The stability of single-impurity Kondo screening has been investigated by modifying the properties of the conduction electrons. Most notably, beginning with the work of Withoff and Fradkin (WF) 6 , the suppression of the single-impurity Kondo effect by the presence of d-wave superconducting order has been studied. A variety of analytic and numeric tools have been used to investigate single impurity Kondo screening in a system with conduction electron density of states (DOS) ρ(ω) ∝ |ω|^r, with variable exponent r (see Refs. 6,7,8,9,10,11,12). Here, r = 1 corresponds to the case of a d-wave superconductor, i.e. is the impurity version of the problem discussed in this paper. For r ≪ 1 the perturbative renormalization group of the ordinary 13 Kondo problem (r = 0) can be generalized 6 ; a fixed point value J* = r/ρ_0 emerges for finite but small r. Here, ρ_0 is the DOS at ω = D with bandwidth D. Kondo screening only occurs for J > J*, and the transition from the unscreened doublet state to a screened singlet ground state is characterized by critical fluctuations in time.
Numerical renormalization group (NRG) calculations demonstrated the existence of such an impurity quantum critical point even if r is not small, but also revealed that the perturbative renormalization group breaks down, failing to correctly describe this critical point 9 . For r = 1, Vojta and Fritz demonstrated that the universal properties of the critical point can be understood using an infinite-U Anderson model where the level crossing of the doublet and singlet ground states is modified by a marginally irrelevant hybridization between those states 10,11 . NRG calculations further demonstrate that the non-universal value for the Kondo coupling at the critical point is still given by J* ≃ r/ρ_0, even if r is not small 8 . This result applies to the case of broken particle-hole symmetry, relevant for our comparison with the Kondo lattice. In the case of perfect particle-hole symmetry it holds that 8 J* → ∞ for r ≥ 1/2.
The result J* ≃ r/ρ_0 may also be obtained from a large-N mean field theory 6 , which otherwise fails to properly describe the critical behavior of the transition, in particular if r is not small. The result for J* as the transition between the screened and unscreened states relies on the assumption that the DOS behaves as ρ(ω) ∝ |ω|^r all the way to the bandwidth. However, in a superconductor with nodes we expect that ρ(ω) ≃ ρ_0 is essentially constant for |ω| > ∆, with gap amplitude ∆, altering the predicted location of the transition between the screened and unscreened states. To see this, we note that, for energies above ∆, the approximately constant DOS implies the RG flow will be governed by the standard metallic Kondo result 1,13 with r = 0, renormalizing the Kondo coupling to $\tilde{J} = J/(1 - J\rho_0 \ln(D/\Delta))$ at the effective bandwidth ∆ (see Ref. 9). Then, we can use the above result in the renormalized system, obtaining that Kondo screening occurs for $\tilde{J}\rho_0 \gtrsim r$, which is easily shown to be equivalent to the condition ∆ < ∆* with $$\Delta^* \simeq e^{1/r}\, T_K , \qquad (1)$$ where $$T_K = D\, e^{-1/(J\rho_0)} \qquad (2)$$ is the Kondo temperature of the system in the absence of a pseudogap (which we are using here to clarify the typical energy scale for ∆*). Setting r = 1 to establish the implication of Eq. (1) for a d-wave superconductor, we see that, due to the d-wave pseudogap in the density of states, the conduction electrons can only screen the impurity moment if their gap amplitude is smaller than a critical value of order the corresponding Kondo temperature T_K for constant density of states. In particular, for ∆ large compared to the (often rather small) energy scale T_K, the local moment is unscreened, demonstrating the sensitivity of the single impurity Kondo effect with respect to the low energy behavior of the host. Given the complexity of the behavior for a single impurity in a conduction electron host with pseudogap, it seems hopeless to study the Kondo lattice. We will show below that this must not be the case and that, moreover, Kondo screening is stable far beyond the single-impurity result Eq. (1), as illustrated in Fig. 1 (the dashed line in this plot is Eq. (1) with ρ_0 = 1/2D). To do this, we utilize the large-N mean field theory of the Kondo lattice to demonstrate that the transition between the screened and unscreened case is discontinuous. Thus, at least within this approach, no critical fluctuations occur (in contrast to the single-impurity case discussed above). More importantly, our large-N analysis also finds that the stability regime of the Kondo screened lattice is much larger than that of the single impurity. Thus, the screened heavy-electron state is more robust and the local-moment phase only emerges if the conduction electron d-wave gap amplitude obeys $$\Delta > \Delta_c \sim \sqrt{T_K D}, \qquad (3)$$ with D the conduction electron bandwidth. Below, we shall derive a more detailed expression for ∆_c; in Eq. (3) we are simply emphasizing that ∆_c is large compared to T_K [and, hence, to the scale of Eq. (1)].
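To make the step from the renormalized coupling to Eq. (1) explicit, the following short chain of equivalences (our rearrangement of the relations quoted above) solves the screening condition for ∆: $$\tilde{J}\rho_0=\frac{J\rho_0}{1-J\rho_0\ln(D/\Delta)}\gtrsim r \;\Longleftrightarrow\; \ln\frac{D}{\Delta}\gtrsim\frac{1}{J\rho_0}-\frac{1}{r} \;\Longleftrightarrow\; \Delta\lesssim D\,e^{1/r}\,e^{-1/(J\rho_0)}=e^{1/r}\,T_K .$$ For r = 1 this reproduces the statement that single-impurity screening survives only up to a gap of order T_K.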
In addition, we find that for ∆ < ∆ c , the renormalized mass only weakly depends on ∆, except for the region close to ∆ c . We give a detailed explanation for this enhanced stability of Kondo lattice screening, demonstrating that it is a direct result of the opening of a hybridization gap in the heavy Fermi liquid state. Since the result was obtained using a large-N mean field theory we stress that such an approach is not expected to properly describe the detailed nature close to the transition. It should, however, give a correct order of magnitude result for the location of the transition.
To understand the resilience of Kondo-lattice screening, recall that, in the absence of d-wave pairing, it is well known that the lattice Kondo effect (and concomitant heavy-fermion behavior) is due to a hybridization of the conduction band with an f-fermion band that represents excitations of the lattice of spins. A hybridized Fermi liquid emerges from this interaction. We shall see that, due to the coherency of the Fermi liquid state, the resulting hybridized heavy fermions are only marginally affected by the onset of conduction-electron pairing. This weak proximity effect, with a small d-wave gap amplitude ∆_f ≃ ∆T_K/D for the heavy fermions, allows the Kondo effect in a lattice system to proceed via f-electron-dominated heavy-fermion states that screen the local moments, with such screening persisting up to much larger values of the d-wave pairing amplitude than implied by the single impurity result 6,7 , as depicted in Fig. 1 (which applies at low T). A typical finite-T phase diagram is shown in Fig. 2.
Our theory directly applies to the electron-doped cuprate Nd_{2-x}Ce_xCuO_4, possessing both d-wave superconductivity 14,15 with T_c ≃ 20 K and heavy fermion behavior 5 below T_K ∼ 2-3 K. The latter is exhibited in a large linear heat capacity coefficient γ ≃ 4 J/(mol K²) together with a large low-frequency susceptibility χ with Wilson ratio R ≃ 1.6. The lowest crystal field state of Nd 3+ is a Kramers doublet, well separated from higher crystal field levels 16 , supporting Kondo lattice behavior of the Nd-spins. The superconducting Cu-O states play the role of the conduction electrons. Previous theoretical work on Nd_{2-x}Ce_xCuO_4 discussed the role of conduction electron correlations 17 . Careful investigations show that the single ion Kondo temperature slightly increases in systems with electronic correlations 18,19 , an effect essentially caused by the increase in the electronic density of states of the conduction electrons. However, the fact that these conduction electrons are gapped has not been considered, even though the Kondo temperature is significantly smaller than the d-wave gap amplitude ∆ ≃ 3.7 meV (see Ref. 20). We argue that Kondo screening in Nd_{2-x}Ce_xCuO_4 with T_K ≪ ∆ can only be understood in terms of the mechanism discussed here.
We add for completeness that an alternative scenario for the large low temperature heat capacity of Nd_{2-x}Ce_xCuO_4 is based on very low lying spin wave excitations 21 . While such a scenario cannot account for a finite value of C(T)/T as T → 0, it is consistent with the shift in the overall position of the Nd-crystal field states upon doping. However, an analysis of the spin wave contribution of the Nd-spins shows that for realistic parameters C(T)/T vanishes rapidly below the Schottky anomaly 22 , in contrast to experiments. Thus we believe that the large heat capacity and susceptibility of Nd_{2-x}Ce_xCuO_4 at low temperatures originates from Kondo screening of the Nd-spins.
Despite its relevance for the d-wave superconductor Nd_{2-x}Ce_xCuO_4, we stress that our theory does not apply to heavy electron d-wave superconductors, such as CeCoIn_5 (see Ref. 23), in which the d-wave gap is not a property of the conduction electron host, but a more subtle aspect of the heavy electron state itself. The latter gives rise to a heat capacity jump at the superconducting transition ∆C(T_c) that is comparable to γT_c, while in our theory ∆C(T_c) ≪ γT_c holds.
II. MODEL
The principal aim of this paper is to study the screening of local moments in a d-wave superconductor. Thus, we consider the Kondo lattice Hamiltonian, possessing local spins (S_i) coupled to conduction electrons (c_{kα}) that are subject to a pairing interaction: $$H = \sum_{k\alpha} \xi_k c^{\dagger}_{k\alpha} c_{k\alpha} + J \sum_i \mathbf{S}_i \cdot \mathbf{s}_i + \sum_{kk'} U_{kk'} c^{\dagger}_{k\uparrow} c^{\dagger}_{-k\downarrow} c_{-k'\downarrow} c_{k'\uparrow} , \qquad (4)$$ with s_i the conduction-electron spin density at site i. Here, J is the exchange interaction between conduction electrons and local spins and ξ_k = ε_k − µ, with ε_k the conduction-electron energy and µ the chemical potential. The pairing term is characterized by the attractive interaction between conduction electrons U_{kk'}. We shall assume the latter stabilizes d-wave pairing with a gap ∆_k = ∆ cos 2θ, with θ the angle around the conduction-electron Fermi surface.
We are particularly interested in the low-temperature strong-coupling phase of this model, which can be studied by extending the conduction-electron and local-moment spin symmetry to SU(N) and focusing on the large-N limit 24 . In the case of the single Kondo impurity, the large-N approach is not able to reproduce the critical behavior at the transition from a screened to an unscreened state. However, it does correctly determine the location of the transition, i.e. the non-universal value for the strength of the Kondo coupling where the transition from screened to unscreened impurity takes place 8 . Since the location of the transition and not the detailed nature of the transition is the primary focus of this paper, a mean field theory is still useful.
Although the physical case corresponds to N = 2, the large-N limit yields a valid description of the heavy Fermi liquid Kondo-screened phase 25 . We thus write the spins in terms of auxiliary f fermions as $$\mathbf{S}_i = \sum_{\alpha\beta} f^{\dagger}_{i\alpha} \boldsymbol{\tau}_{\alpha\beta} f_{i\beta} , \qquad (5)$$ subject to the local constraint $$\sum_{\alpha} f^{\dagger}_{i\alpha} f_{i\alpha} = N/2 . \qquad (6)$$ To implement the large-N limit, we rescale the exchange coupling via J/2 → J/N, with the conduction-electron and f-electron spin indices running over α = 1, ..., N. The utility of the large-N limit is that the (mean-field) stationary-phase approximation to H is believed to be exact at large N. Performing this mean field decoupling of H yields the quadratic Hamiltonian of Eq. (7), with E_0 a constant in the energy that is defined below. The pairing gap, ∆_k, and the hybridization between conduction and f-electrons, V, result from the mean field decoupling of the pairing and Kondo interactions, respectively. The hybridization V (that we took to be real) measures the degree of Kondo screening (and can be directly measured experimentally 26 ) and λ is the Lagrange multiplier that implements the above constraint, playing the role of the f-electron level. The free energy F of this single-particle problem can now be calculated, and has the form of Eq. (8), where T = β^{-1} is the temperature. The first three terms are the explicit expressions for E_0 in Eq. (7), and E_{k±} describes the bands of our d-wave paired heavy-fermion system. The phase behavior of this Kondo lattice system for given values of T, J and µ is determined by finding points at which F is stationary with respect to the variational parameters V, λ, and ∆_k. For simplicity, henceforth we take ∆_k as given (and having d-wave symmetry as noted above), with the goal of studying the effect of nonzero pairing on the formation of the heavy-fermion metal characterized by V and λ that satisfy the stationarity conditions of Eq. (10), with the second equation enforcing the constraint, Eq. (6). We shall furthermore restrict attention to µ < 0 (i.e., a less than half-filled conduction band). Before we proceed, we point out that the magnitude of the pairing gap near the unpaired heavy-fermion Fermi surface (located at ξ = V²/λ) is remarkably small. Taylor expanding E_{k−} near this point, we find a heavy-fermion gap ∆^f_k ≃ ∆_k (λ/V)². In Fig. 3, we plot the lower heavy-fermion band for the unpaired case ∆_k = 0 (dashed line) along with ±E_{k−} for the case of finite ∆_k (solid lines) in the vicinity of the unpaired heavy-fermion Fermi surface, showing the small heavy-fermion gap ∆^f_k. Thus, we find a weak proximity effect in which the heavy-fermion quasiparticles, which are predominantly of f-character, are only weakly affected by the presence of d-wave pairing in the conduction electron band.
A. Normal conduction electrons
A useful starting point for our analysis is to recall the well-known 27 unpaired (∆ = 0) limit of our model. By minimizing the corresponding free energy [simply the ∆ = 0 limit of Eq. (8)], one obtains, at low temperatures, that the Kondo screening of the local moments is represented by the nontrivial stationary point of F at V = V_0 and λ = λ_0 = V_0²/D, with $$V_0^2 \simeq T_K D , \qquad (12)$$ where T_K is the Kondo temperature given below. Here we have taken the conduction electron density of states to be a constant, ρ_0 = (2D)^{-1}, with 2D the bandwidth. The resulting phase is a metal accommodating both the conduction and f-electrons with a large density of states ∝ λ_0^{-1} near the Fermi surface at ε_k ≃ µ + V_0²/λ_0, revealing its heavy-fermion character. In Fig. 4, we plot the energy bands of this heavy Fermi liquid in the low-T limit.
With increasing T, the stationary V and λ decrease monotonically, vanishing at the Kondo temperature $$T_K = D\, e^{-1/(J\rho_0)} \simeq \lambda_0 . \qquad (14)$$ Here, the second relation is meant to emphasize that T_K is of the same order as the T = 0 value of the f-fermion chemical potential λ_0, and therefore T_K ≪ V_0, i.e., T_K is small compared to the zero-temperature hybridization energy V_0. It is well established that the phase transition-like behavior of V at T_K is in fact a crossover once N is finite 1,24 . Nevertheless, the large-N approach yields the correct order of magnitude estimate for T_K and provides a very useful description of the strong coupling heavy-Fermi liquid regime, including the emergence of a hybridization gap in the energy spectrum.
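For orientation, the scales introduced above combine as follows (our rearrangement of the stated relations): $$\lambda_0=\frac{V_0^2}{D},\qquad V_0^2\simeq T_K D \;\Longrightarrow\; \lambda_0\simeq T_K,\qquad V_0\simeq\sqrt{T_K D}\gg T_K ,$$ which is the hierarchy λ_0 ≃ T_K ≪ V_0 used repeatedly below.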
B. d-wave paired conduction electrons
Next, we analyze the theory in the presence of d-wave pairing with gap amplitude ∆. Thus, we imagine continuously turning on the d-wave pairing amplitude ∆, and study the stability of the Kondo-screened heavy-Fermi liquid state characterized by the low-T hybridization V_0, Eq. (12). As we discussed in Sec. I, in the case of a single Kondo impurity, it is well known that Kondo screening is qualitatively different in the case of d-wave pairing, and the single impurity is only screened by the conduction electrons if the Kondo coupling exceeds a critical value J* [Eq. (16)]. For J < J*, the impurity is unscreened. This result for J* can equivalently be expressed in terms of a critical pairing strength ∆*, beyond which Kondo screening is destroyed for a given J [Eq. (17), equivalent to Eq. (1) for r = 1], which is proportional to the Kondo temperature T_K. The dashed line in Fig. 2 denotes the spinodal T_s of the free energy F at which the quadratic coefficient of Eq. (8) crosses zero. The significance of T_s is that, if the Kondo-to-local moment transition were continuous (as it is for ∆ = 0), this would denote the phase boundary; the T → 0 limit of this quantity coincides with the single-impurity critical pairing Eq. (17). An explicit formula for T_s can be easily obtained by finding the quadratic coefficient of Eq. (8); here E_k ≡ \sqrt{ξ_k² + ∆_k²}, and we set λ = 0 [which must occur at a continuous transition where V → 0, as can be seen by analyzing Eq. (10b)]. As seen in Fig. 2, the spinodal temperature is generally much smaller than the true transition temperature; however, for very small ∆ → 0, T_s(∆) coincides with the actual transition (which becomes continuous), as noted in the figure caption.
Our next task is to understand these results within an approximate analytic analysis of Eq. (8); before doing so, we stress again that the discontinuous transition from a screened to an unscreened state as a function of T becomes a rapid crossover for finite N. The large-N theory is, however, expected to correctly determine where this crossover takes place.
Low-T limit
According to the numerical data (points) plotted in Fig. 5, the hybridization V is smoothly suppressed with increasing pairing strength ∆ before undergoing a discontinuous jump to V = 0. To understand, analytically, the ∆-dependence of V at low T, we shall analyze the T = 0 limit of F, i.e., the ground-state energy E. The essential question concerns the stability of the Kondo-screened state with respect to a d-wave pairing gap, characterized by a ∆-dependent hybridization V(∆) of the form of Eq. (19), with ∆_typ an energy scale, to be derived, that gives the typical value of ∆ for which the heavy-fermion state is affected by d-wave pairing. To show that Eq. (19) correctly describes the smooth suppression of the hybridization with increasing ∆, and to obtain the scale ∆_typ, we now consider the dimensionless quantity χ_∆ [Eq. (20)] that characterizes the change of the ground state energy with respect to the pairing gap. Separating the amplitude of the gap from its momentum dependence, i.e. writing ∆_k = ∆φ_k, we obtain χ_∆ from the Hellmann-Feynman theorem [Eq. (21)]; for ∆ → 0 this yields an expression [Eq. (22)] in terms of G_cc(k, iω), the conduction electron propagator. As expected, χ_∆ is the particle-particle correlator of the conduction electrons. Thus, for T = 0 the particle-particle response will be singular. This is the well known Cooper instability. For V = 0 we obtain, for example, χ_∆ ∝ ln(D/∆), where we used ∆ as a lower cutoff to control the Cooper logarithm. Below we will see that, except for extremely small values of ∆, the corresponding Cooper logarithm is overshadowed by another logarithmic term that does not have its origin in states close to the Fermi surface, but rather results from states with typical energy V ≃ √(T_K D).
In order to evaluate χ_∆ in the heavy Fermi liquid state, we start from the conduction electron propagator expressed through the hybridized bands E_± given in Eq. (13) and the coherence factors u, v of the hybridized Fermi liquid [Eqs. (24) and (25)]. Inserting G_cc(k, ω) into the above expression for χ_∆ yields Eq. (26). We used that E_+ > 0 is always fulfilled, as we consider a less than half-filled conduction band.
Considering first the limit λ = 0, it holds that E_−(ξ) < 0, and the last term in the above integral disappears. The remaining terms simplify to Eq. (27). Even for λ nonzero, this is the dominant contribution to χ_∆ in the relevant limit λ ≪ V ≪ D. To demonstrate this we analyze Eq. (26) for nonzero λ, but assuming λ ≪ V, as is indeed the case for small ∆. The calculation is lengthy but straightforward, and yields Eq. (28). The last term is the Cooper logarithm, but now in the heavy fermion state. The prefactor λ/D ≃ T_K/D is a result of the small weight of the conduction electrons on the Fermi surface (i.e. where ξ ≃ V²/λ) as well as the reduced velocity close to the heavy electron Fermi surface; specifically, the coherence factor there obeys u²(ξ ≃ V²/λ) ≃ λ²/V². The dominant logarithm of Eq. (27) is not originating from the heavy electron Fermi surface (i.e. it is not from ξ ≃ V²/λ). Instead, it has its origin in the integration over states near ξ ≃ 0, where E_±(ξ ≃ 0) = ±V, and the integrand is large as long as |ξ| ≲ V. This peak at ξ ≃ 0 has its origin in the competition between two effects. Usually, u or v are large when E_± ≃ ξ. The only regime where u or v are still sizable while E_± remain small is close to the bare conduction electron Fermi surface at |ξ| ≲ V (the position of the level repulsion between the two hybridizing bands). Thus, the logarithm is caused by states that are close to the bare conduction electron Fermi surface. Although these states have the strongest response to a pairing gap, they don't have much to do with the heavy fermion character of the system. It is interesting that this heavy fermion pairing response is the same even in the case of a Kondo insulator, where λ = 0 and the Fermi level is in the middle of the hybridization gap.
The purpose of the preceding analysis was to derive an accurate expression for the ground-state energy E at small ∆. Using Eq. (20) gives Eq. (29), which, using Eq. (27), considering the leading order in λ ≪ V and ∆ ≪ V, safely neglecting the last term of Eq. (28) according to the argument of the previous paragraph, and dropping overall constants, yields Eq. (30). Using Eq. (10), the stationary value of the hybridization (to leading order in ∆²) is then obtained via minimization with respect to V and λ. This yields Eq. (31), with the stationary value of λ = 2ρ_0V², which establishes Eq. (19). A smooth suppression of the Kondo hybridization from the ∆ = 0 value V_0 [Eq. (12)] occurs with increasing d-wave pairing amplitude ∆ at low T. This result thus implies that the conduction electron gap only causes a significant reduction of V and λ for ∆ ≃ ∆_typ ∝ √(T_K D). This shows that, even with nonzero ∆, the specific heat coefficient will appear to saturate at a large value at low T (thus exhibiting signatures of a heavy fermion metal), before vanishing at asymptotically low T ≪ ∆_f (= ∆(λ/V)², equal to 10^{-4}D for the parameters of Fig. 6, where each curve is normalized to the T = 0 value for the metallic case, γ_0 ≃ (2/3)π²ρ_0V²/λ²). In the regime where V
stays finite, the simple relation Eq. (31) gives an excellent description of the heavy electron state. Above the small f-electron gap ∆_f, these values of V and λ yield a large heat capacity coefficient (taking N = 2) γ ≃ (2/3)π²ρ_0V²/λ² and susceptibility χ ≃ 2ρ_0V²/λ², reflecting the heavy-fermion character of this Kondo-lattice system even in the presence of a d-wave pairing gap. According to our theory, this standard heavy-fermion behavior (as observed experimentally 5 in Nd_{2-x}Ce_xCuO_4) will be observed for temperatures that are large compared to the f-electron gap ∆_f. However, for very small T ≪ ∆_f, the temperature dependence of the heat capacity changes (due to the d-wave character of the f-fermion gap), behaving as C = AT²/∆ with a large prefactor A ≃ (D/T_K)². This leads to a sudden drop in the heat capacity coefficient at low T, as depicted in Fig. 6.
The surprising robustness of the Kondo screening with respect to d-wave pairing is rooted in the weak proximity effect of the f-levels and the coherency caused by the formation of the hybridization gap. Generally, a pairing gap affects states within ∆_k of the Fermi energy. However, low energy states that are within T_K of the Fermi energy are predominantly of f-electron character (a fact that follows from our large-N theory but also from the much more general Fermi liquid description of the Kondo lattice 28 ) and are protected by the weak proximity effect. These states only sense a gap ∆^f_k ≪ ∆_k and can readily participate in local-moment screening.
Furthermore, the opening of the hybridization gap coherently pushes conduction electrons to energies ≃ V away from the Fermi energy. Only for ∆ ≃ V ≃ √(T_K D) will the conduction electrons' ability to screen the local moments be affected by d-wave pairing. This situation is very different from the single impurity Kondo problem, where conduction electron states come arbitrarily close to the Fermi energy.
First-order transition
The result Eq. (31) of the preceding subsection strictly applies for ∆ → 0, although as seen in Fig. 5, in practice it agrees quite well with the numerical minimization of the free energy until the first-order transition. To understand the way in which V is destroyed with increasing ∆, we must consider the V → 0 limit of the free energy.
We start with the ground-state energy. Expanding E [the T → 0 limit of Eq. (8)] to leading order in V and zeroth order in λ (valid for V → 0), we find Eq. (32) (dropping overall constants), where we defined the quantity ∆_c at which the minimum value of V in Eq. (32) vanishes continuously; the formula for V(∆) near the transition is given by Eq. (33). According to Eq. (33), the equilibrium hybridization V vanishes (along with the destruction of Kondo screening) for pairing amplitude ∆_c ∼ √(T_K D), of the same order of magnitude as the T = 0 hybridization V_0, as expected [and advertised above in Eq. (3)].
Equation (33) strictly applies only at T = 0, apparently yielding a continuous transition at which V → 0 for ∆ → ∆_c. What about T ≠ 0? We find that, for small but nonzero T, Eq. (33) approximately yields the correct location of the transition, but that the nature of the transition changes from continuous to first-order. Thus, for ∆ near ∆_c, there is a discontinuous jump to the local-moment phase that is best obtained numerically, as shown above in Figs. 5 and 2. However, we can get an approximate analytic understanding of this first-order transition by examining the low-T limit. Since excitations are gapped, at low T the free energy F_K of the Kondo-screened (V ≠ 0) phase is well-approximated by inserting the stationary solution Eq. (34) into Eq. (32), giving Eq. (35) for F_K at ∆ → ∆_c. The discontinuous Kondo-to-local moment transition occurs when the Kondo free energy Eq. (35) is equal to the local-moment free energy. For the latter we set V = λ = 0 in Eq. (8), obtaining Eq. (36), where we dropped an overall constant depending on the conduction-band interaction. The term proportional to T in Eq. (36) comes from the fact that E_{k−} = 0 for V = λ = 0, and corresponds to the entropy of the local moments. At low T, the gapped nature of the d-wave quasiparticles implies that the last term in Eq. (36) can be neglected (although the nodal quasiparticles give a subdominant power-law contribution). In deriving the Kondo free energy F_K, Eq. (35), we dropped overall constant terms; re-establishing these to allow a comparison to F_LM, and setting F_LM = F_K, we find a condition [Eq. (37)] that can be solved for temperature to find the transition temperature T_K(∆) [Eq. (38)] for the first-order Kondo screened-to-local moment phase transition, valid for ∆ → ∆_c, providing an accurate approximation to the numerically-determined T_K curve in Fig. 2 (solid line) in the low temperature regime (i.e., near ∆_c = 0.14D in Fig. 2). Equation (38) yields the temperature at which, within mean-field theory, the screened Kondo lattice is destroyed by the presence of nonzero d-wave pairing; thus, as long as T < T_K(∆), heavy-fermion behavior is compatible with d-wave pairing in our model. The essential feature of this result is that T_K(∆) is only marginally reduced from the ∆ = 0 Kondo temperature Eq. (2), establishing the stability of this state. In comparison, according to expectations based on a single-impurity analysis, one would expect the Kondo temperature to follow the dashed line in Fig. 2.
Away from this approximate result valid at large N , the RKKY interaction between moments is expected to lower the local-moment free energy, altering the predicted location of the phase boundary. Then, even for T = 0, a level crossing between the screened and unscreened ground states occurs for a finite V . Still, as long as the ∆ = 0 heavy fermion state is robust, it will remain stable at low T for ∆ small compared to ∆ c , as summarized in Figs. 1 and 2.
IV. CONCLUSIONS
We have shown that a lattice of Kondo spins coupled to an itinerant conduction band experiences robust Kondo screening even in the presence of d-wave pairing among the conduction electrons. The heavy electron state is protected by the large hybridization energy V ≫ T_K. The d-wave gap in the conduction band induces a relatively weak gap at the heavy-fermion Fermi surface, allowing Kondo screening and heavy-fermion behavior to persist. Our results demonstrate the importance of Kondo-lattice coherency, manifested by the hybridization gap, which is absent in the case of dilute Kondo impurities. As pointed out in detail, the origin of the unexpected robustness of the screened heavy electron state is the coherency of the Fermi liquid state. With the opening of a hybridization gap, conduction electron states are pushed to energies of order √(T_K D) away from the Fermi energy. Whether or not these conduction electrons open up a d-wave gap is therefore of minor importance for the stability of the heavy electron state.
Our conclusions are based on a large-N mean field theory. In the case of a single impurity, numerical renormalization group calculations demonstrated that such a mean field approach fails to reproduce the correct critical behavior where the transition between screened and unscreened impurity takes place. However, the mean field theory yields the correct value for the strength of the Kondo coupling at the transition. In our paper we are not concerned with the detailed nature in the near vicinity of the transition. Our focus is solely the location of the boundary between the heavy Fermi liquid and the unscreened local moment phase, and we do expect that a mean field theory gives the correct result. One possibility to test the results of this paper is a combination of dynamical mean field theory and numerical renormalization group for the pseudogap Kondo lattice problem.
In the case where Kondo screening is inefficient and ∆ > √(T_K D), i.e., the "local moment" phase of Figs. 1 and 2, the ground state of the moments will likely be magnetically ordered. This can have interesting implications for the superconducting state. Examples are reentrance into a normal phase (similar to ErRh_4B_4, see Ref. 29) or a modified vortex lattice in the low temperature magnetic phase. In our theory we ignored these effects. This is no problem as long as the superconducting gap amplitude ∆ is small compared to √(T_K D) and the Kondo lattice is well screened. Thus, the region of stability of the Kondo screened state will not be significantly affected by including the magnetic coupling between the f-electrons. Only the nature of the transition and, of course, the physics of the unscreened state will depend on it. Finally, our theory offers an explanation for the heavy fermion state in Nd_{2-x}Ce_xCuO_4, where ∆ ≫ T_K. | 2008-01-31T23:18:57.000Z | 2007-04-13T00:00:00.000 | {
"year": 2007,
"sha1": "91c503bf18b52f40800c6af31ea28dec23ab54bd",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0704.1815",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "91c503bf18b52f40800c6af31ea28dec23ab54bd",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
234057499 | pes2o/s2orc | v3-fos-license | A Water quality monitoring system: design for Dian Lake sewage treatment plants in towns
Since sewage treatment plants consume a large share of a city's electric energy, optimizing and improving the monitoring and management of their equipment and energy consumption is conducive to their development and, more broadly, to society's use of energy. There are a total of 20 town sewage treatment plants that directly affect the water environment of Dianchi Lake. Because sewage treatment data are dispersed, water quality is not tested regularly, data accuracy and reliability are poor, the effect of sewage treatment is difficult to evaluate, and data management standards and norms are lacking, the water quality and quantity will have an impact on the ecosystem of Dianchi Lake if the water from the plants is discharged directly or indirectly into the riverways, tributaries or channels of the Lake. Therefore, building a water quality monitoring system for the 20 sewage treatment plants of Kunming is an important measure to ensure the water quality and improve the ecosystem of Dianchi Lake. This paper discusses the design of the water quality monitoring system for the town sewage treatment plants near Dianchi Lake from the perspective of the relevant technologies.
Introduction
According to the survey, there are 20 town sewage treatment plants in the Dianchi Lake area that directly affect the water environment. The main power-consuming equipment of these plants includes draft fans, water pumps, disinfection equipment, electric valves, agitators, backwashing filter equipment, etc. However, the plants are scattered in space, across a wide range of sites and of various types, and their water quality varies greatly. Most of them are barely supervised, so the thorny problem is how to realize centralized control and scientific treatment of these sewage treatment plants.
Combined with modern technology, environmental monitoring can detect pollution problems in the environment in time and determine the degree of pollution and future trends. Sewage monitoring is the main application of IoT technology in water quality [1,2]. In order to ensure the water quality, devices such as cameras and analytical sensors are set at the source of water for monitoring. Through the Internet of Things, various parameters, including pH, COD, heavy metal content and so on, can be uploaded in real time.
Internet of Things in water quality monitoring
The Internet of Things is an important part of the development of environmental monitoring. Its main application in water quality monitoring is to place sensing devices, such as electronic sensors or video monitors, at the inlet or outlet of the monitored water body in order to measure the concentrations of various pollution factors in real time and upload the collected data to the Internet. When the monitoring data indicate that sewage parameters exceed certain limits, the system promptly feeds the pollution information back to the pollutant-discharging unit and the monitoring center to avoid major pollution incidents, responding with prevention, control, or warning actions based on the collected information. This series of processes constitutes the monitoring system, realizing the monitoring and comprehensive supervision of water quality and water pollution sources.
The Internet of Things connects various sensors to the Internet and processes data through sensing devices and network transmission. An IoT system consists of a device layer, a network layer and an application layer. The device layer connects the physical world to the information world through hardware devices, and information from this layer is transmitted to the application layer through the network layer. The application layer collects and processes the data before they serve the relevant industries. The Water Quality Monitoring System is designed according to this IoT structure and gives full play to the advantages of the Internet of Things in water quality monitoring. It is an intelligent, real-time, online water quality monitoring system that automatically performs many functions related to connected IoT devices, such as data collection, data processing and analysis, and device control [3]. It can operate reliably for a long time without supervision. A minimal sketch of this layered flow follows.
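The following Python sketch illustrates the layered flow just described: a device-layer reading is checked at the application layer against discharge limits, producing alerts when a parameter exceeds its threshold. The threshold values, parameter names, and plant identifier are illustrative assumptions, not values from the Dianchi Lake system.

```python
from dataclasses import dataclass

# Hypothetical discharge limits (mg/L except pH); real limits depend on
# the applicable discharge standard.
LIMITS = {"cod": 50.0, "ammonia_n": 5.0, "ph_min": 6.0, "ph_max": 9.0}

@dataclass
class Reading:
    plant_id: str
    cod: float        # chemical oxygen demand, mg/L
    ammonia_n: float  # ammonia nitrogen, mg/L
    ph: float

def check_reading(r: Reading) -> list[str]:
    """Application-layer check: return a list of violation messages."""
    alerts = []
    if r.cod > LIMITS["cod"]:
        alerts.append(f"{r.plant_id}: COD {r.cod} mg/L exceeds {LIMITS['cod']}")
    if r.ammonia_n > LIMITS["ammonia_n"]:
        alerts.append(f"{r.plant_id}: NH3-N {r.ammonia_n} mg/L exceeds {LIMITS['ammonia_n']}")
    if not (LIMITS["ph_min"] <= r.ph <= LIMITS["ph_max"]):
        alerts.append(f"{r.plant_id}: pH {r.ph} outside {LIMITS['ph_min']}-{LIMITS['ph_max']}")
    return alerts

# Example: one out-of-limit reading triggers two alerts.
print(check_reading(Reading("plant-07", cod=82.0, ammonia_n=3.1, ph=5.4)))
```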
Design of the water quality monitoring system
According to the structure of the Internet of things, the system is divided into three layers: Field Ends, Data Transmission Layer and Application Layer.
Field ends
The field end includes the device layer and two subsystems.
Device layer.
The key devices in the Device Layer mainly include meters for testing the water, switches, cameras, digital video recorders, mobile terminals and the Programmable Logic Controller (PLC) system. The PLC is the core of industrial control, and the PLC control system is the core system of the whole device layer.
A Programmable Logic Controller (PLC) is an electronic system with digital operation, designed for industrial applications [4]. It uses programmable memory to store its internal program and executes user-oriented instructions such as logical operations, sequence control, timing, counting and arithmetic operations. In addition, various types of mechanical or production processes can be controlled through its digital or analog I/O. Town sewage treatment plants should be equipped with PLC control cabinets to collect production data and on-site monitoring data on water quality and quantity. On-site online testing instruments, including water quality and quantity instruments, should also be installed at the inlet and outlet. Meanwhile, control instructions issued by the upper-level configuration software are executed by the PLC.
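Analog instruments typically reach the PLC as 4-20 mA signals that must be scaled into engineering units before they are meaningful. The sketch below shows this scaling step in Python; the raw count range (0-27648, a Siemens-style convention) and the 0-200 mg/L COD span are assumptions chosen for illustration, since the actual ranges depend on the instruments installed.

```python
def scale_4_20ma(raw: int, raw_min: int = 0, raw_max: int = 27648,
                 eng_min: float = 0.0, eng_max: float = 200.0) -> float:
    """Linearly map a raw ADC count (e.g., 0-27648 for a 4-20 mA input)
    onto an engineering range, e.g., 0-200 mg/L COD."""
    raw = max(raw_min, min(raw, raw_max))  # clamp out-of-range counts
    span = (raw - raw_min) / (raw_max - raw_min)
    return eng_min + span * (eng_max - eng_min)

print(scale_4_20ma(13824))  # mid-scale reading -> 100.0 mg/L
```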
3.1.2. Water quality monitoring system. Aimed at serving the needs of online analytical instruments and laboratory research, the online water quality monitoring system is a complete system spanning sampling, pretreatment, analysis, and data processing and storage. Using automatic control technology, computer technology and special software, its core task is to provide representative, timely and reliable sample information, thereby realizing online detection of the sample. An automatic monitoring system generally includes a sampling system, a pretreatment system, a data acquisition and control system, online monitoring and analysis instruments, and a data processing and transmission system. According to their scale, different on-site management systems were designed for the 20 town sewage treatment plants.
1) Plants with a scale of less than 1000 m³/d will adopt a micro automatic water quality monitoring station, an outdoor mini-station integrating COD UV full-spectrum analyzers, ammonia nitrogen analyzers and two-parameter analyzers. Water samples are drawn into the pool of the micro-station by self-priming water pumps. The water is first allowed to stand and reach the specified level; then the water quality instruments and two-parameter probes sample, collect and analyze the data. The data are summarized on touch-screen computers and uploaded to the digital management platform through wired or wireless networks to indicate the water quality of these plants in real time (a minimal control-loop sketch of this cycle appears after this list).
The whole monitoring system is divided into four units: the flow path unit, the monitoring unit, the testing unit and the security unit.
a) The flow path unit is mainly composed of five-parameter instruments, water quality instruments, self-priming water pumps, magnetic valves, water sample pools, liquid level switches, filter devices and so on.
b) The monitoring unit is composed of touch-screen computers, configuration software, PLCs and wired/wireless transmission modules. Instrument control is realized through the configuration software embedded in the touch-screen computer, and sampling control through the configuration software and the PLC program. Signals detected by the instruments are uploaded to the digital water management platform by the touch-screen computers over a wired network or a GPRS wireless network.
c) The testing unit integrates COD instruments and ammonia nitrogen instruments with two-parameter instruments to realize data analysis and transmission.
d) The security unit is composed of arresters, code locks and indoor temperature and humidity sensors to prevent loss of devices, destruction and human interference during testing.
2) For plants with a scale of more than 1000 m³/d, continuous online monitoring should be conducted. The monitored data include parameters such as COD concentration, ammonia nitrogen, TP, TN, pH, suspended solids and sewage flow, all of which will be managed effectively. The whole monitoring system consists of the water quality testing instruments, a sampling and pretreatment subsystem, a data acquisition and processing subsystem, a monitoring station subsystem, a discharge building subsystem, etc. All results are transmitted to the on-site PLC control system and the SCADA system as standard analog signals so that the data can be stored and analyzed graphically. The system adopts a modular design and can run fully automatically: it starts or stops, stores data and uploads data without operator action, so truly unattended operation can be achieved.
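As promised in 1) above, here is a minimal control-loop sketch of one micro-station measurement cycle: fill to the specified level, let the sample stand, analyze, and upload. The pump, level switch, analyzer, and uplink objects are hypothetical stand-ins for the PLC-driven hardware, and the settling time is an arbitrary placeholder.

```python
import time

def run_sampling_cycle(pump, level_switch, analyzers, uplink, settle_s=300.0):
    """One measurement cycle of the micro-station described above."""
    pump.on()
    while not level_switch.at_setpoint():  # fill the sample pool to the specified level
        time.sleep(1.0)
    pump.off()
    time.sleep(settle_s)                   # let the water stand before sampling
    # Each analyzer (COD, ammonia nitrogen, two-parameter probe) takes a reading.
    results = {name: analyzer.measure() for name, analyzer in analyzers.items()}
    uplink.send(results)                   # wired or GPRS upload to the platform
    return results
```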
3.1.3. Video monitoring system. The video monitoring system comprises five parts: camera, transmission, control, display and recording. The camera transmits video to the host through coaxial video cables or network cables, and the host distributes the video signal to different monitors and recorders; if needed, the voice signal can be recorded simultaneously. Operating staff can issue instructions through the host to control the movement of the tripod head, adjust the focal length of the lens and switch between different cameras and tripod heads. Using specialized video processing, the video can be recorded, replayed and processed to achieve the best effect. The video monitoring system of a town sewage treatment plant can monitor the important production processes in real time by installing cameras at key locations, such as the biological reaction pool, the water inlet and outlet and the main process sections, or by using a panoramic view. Meanwhile, a digital video recorder collects on-site short video signals, which are uploaded to the digital water management platform through the data transmission system.

Data transmission layer
Figure 1 shows the data transmission network. A virtual private network transmits the various data to the application layer [5]. As an important part of the IoT system, the data transmission layer is mainly responsible for data transport: data collected at the device layer are first connected to the PLC automatic control system's industrial Ethernet within the sewage treatment plant and then carried over the public network to the application-layer information platform for display and other applications (a sketch of one possible upload step closes this section).

Application layer
Figure 2 shows the panorama of a sewage treatment plant, from which the situation of the plant can easily be monitored. Figure 3 shows a partial application of the system, giving an intuitive view of equipment operation, process parameters and so on. Through these modern information technologies, the system can realize water quality monitoring and data transmission concerning equipment operation, pollutant reduction and energy consumption for the 20 town sewage treatment plants. Installing cameras at key production processes safeguards the water quality of the plants and supports the scientific, information-based management of sewage treatment in the Dianchi Lake area. It also provides comprehensive and accurate basic data and scientific decision support for the prevention and control of water pollution in the Dianchi Lake area.
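The paper does not specify the upload protocol, so the sketch below shows one common choice for the transmission step: packaging a set of readings as JSON and POSTing them over HTTP(S), which could run across the plant network or VPN. The endpoint URL and payload fields are hypothetical.

```python
import json
import time
import urllib.request

def publish_reading(plant_id: str, values: dict,
                    endpoint: str = "https://platform.example.invalid/ingest") -> None:
    """Package one set of readings as JSON and POST it to the platform."""
    payload = json.dumps({"plant": plant_id, "ts": time.time(), **values}).encode()
    req = urllib.request.Request(
        endpoint, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()  # consume the platform's acknowledgement

# Example: publish_reading("plant-07", {"cod": 42.5, "ammonia_n": 2.1, "ph": 7.3})
```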
Summary
Why build a water quality monitoring system? The age of big data has arrived, and massive data are increasingly regarded as a core enterprise asset, making data collection, aggregation, processing and analysis particularly important. With the Internet of Things, scattered data can be aggregated; once the system generates monitoring data, they are displayed visually. By combining computer network technology to process and analyze the key sewage treatment data, intelligent supervision, early warning and decision support can be realized, laying the groundwork for intelligent, refined enterprise management in the near future. | 2021-05-10T00:03:38.402Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "dde78bf256dda017d04b83c3df8b0d113bd8a80b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1755-1315/675/1/012018",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "e9b7fac13bb28e4ac9ad78f7b6cca93746c66dc8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
258204677 | pes2o/s2orc | v3-fos-license | Exploring the Causal Relationship Between Arterial and Venous Thromboembolism: A Case Series With Review of Literature
Venous thromboembolism (VTE) occurs due to venous stasis or a low-flow state within the blood vessels, with subsequent fibrin and platelet aggregation leading to thrombosis. Arterial thrombosis affects various arteries, including the coronaries, and is primarily due to platelet aggregation with little fibrin deposition. Although arterial and venous thrombosis are considered separate entities, some studies have suggested an association between them despite their distinctive causative factors. We retrospectively reviewed patients at our institution who were admitted with acute coronary syndrome (ACS) and underwent cardiac catheterization over a decade between 2009 and 2020 to identify patients who had both venous thromboembolic events and ACS. Here, we report a case series of three such patients who were found to have both VTE and coronary arterial thrombosis. However, it remains unclear whether having either a venous or an arterial clot increases the risk of developing the other vascular condition, and further studies are needed to evaluate this hypothesis.
Introduction
Historically, arterial and venous thromboembolism (VTE) have been regarded as separate entities involving arteries and veins, respectively. Whereas arterial thrombosis is due to platelet aggregation with little fibrin deposition, venous thrombosis occurs due to stasis or a low-flow state with fibrin and platelet aggregation, as per Virchow's triad. A few studies have found a connection between these two vascular entities, which may reflect the fact that both diseases share some risk factors even though each has a unique pathophysiology.

Case 1
The first patient presented to the emergency room with a complaint of chest pain that started on the day of presentation while descending the stairs. The chest pain was substernal, constant, squeezing, radiating to the back, and associated with diaphoresis and a clammy sensation. The patient was a former smoker who had quit smoking 10 years earlier. The vital signs on presentation were a blood pressure of 163/90 mmHg, a heart rate of 95 beats per minute, a respiratory rate of 16 breaths per minute, and 97% oxygen saturation on room air. Physical examination was otherwise unremarkable. Electrocardiogram (ECG) showed T wave inversion (TWI) in leads V1 and V2 (Figure 1).
FIGURE 1: ECG showing normal sinus rhythm, a ventricular rate of 87 bpm, ST depression in lead II, and T wave inversion in leads V1-V3.
The initial troponin I level was within normal limits; however, the repeat level after six hours was elevated. Repeat ECG showed ST segment depression in lead II and more pronounced TWI in leads V1-V3. The patient received aspirin 325 mg with improvement in pain and was admitted to the coronary care unit (CCU) with a diagnosis of NSTEMI. He received a loading dose of clopidogrel and was started on therapeutic anticoagulation with a heparin drip. Subsequent lab results showed decreasing troponin levels (0.687 > 0.087 ng/mL). The patient's urine toxicology was negative and the chest X-ray was normal.
The patient underwent cardiac catheterization, which showed 100% stenosis of the first septal branch of the left anterior descending (LAD) artery and 50% stenosis of the proximal right coronary artery ( Figure 2). Additionally, small distal embolization was noted at the very terminal part of LAD, which was treated with wire dottering. Echocardiogram showed a left ventricular ejection fraction (LVEF) of 56% with hypokinesis of the basal anteroseptal segment. A contrast CT scan of the chest ruled out aortic dissection but showed bilateral PE ( Figure 4).
FIGURE 4: CT scan of the chest showing bilateral pulmonary embolism (red arrows).
Ultrasound of lower extremities was also done, which showed DVT in the left popliteal vein ( Figure 5).
FIGURE 5: Deep vein thrombosis of the left popliteal vein (red arrow).
Hypercoagulability workup was unremarkable. The patient was started on warfarin with enoxaparin bridging and was later discharged on warfarin with cardiology outpatient follow-up.
Case 2
A 50-year-old African American male with a medical history of hypertension for 10 years on amlodipine and dyslipidemia for two years on atorvastatin presented to the emergency room with a complaint of chest pain. He described chest pain as midsternal and pressure-like while on his way to work, and it was associated with lightheadedness, nausea, and diaphoresis. The vital signs on presentation were a blood pressure of 150/84 mmHg, a heart rate of 70 beats per minute, a respiratory rate of 24 breaths per minute, and 99% oxygen saturation at 2 l/min of oxygen via nasal cannula. Physical examination was otherwise unremarkable.
ECG was consistent with atrial fibrillation (AF) and ST segment elevation in inferior leads with reciprocal changes ( Figure 6). The patient was loaded with aspirin, clopidogrel, and heparin bolus. He immediately underwent cardiac catheterization and coronary angiogram, which showed 100% stenosis of RCA (Figure 7), and he underwent successful percutaneous coronary intervention (PCI) of the proximal and mid-RCA. However, the patient developed ventricular fibrillation (VF) intermittently and received six shocks, atropine, and amiodarone as per ACLS protocol. The patient went into asystole, and cardiopulmonary resuscitation was initiated with the return of spontaneous circulation achieved in five minutes. Intra-aortic balloon pump was inserted as the patient was in cardiogenic shock, and he was started on a dobutamine drip and an amiodarone drip for VF and was transferred to another facility for possible extracorporeal membrane oxygenation (ECMO).
Two months later, the patient had acute onset of chest pain, and he was brought to the emergency room as a STEMI alert. ECG was consistent with ST elevations in inferior leads ( Figure 8).
FIGURE 8: ECG showing ST elevation in inferior leads.
He was non-compliant with aspirin and clopidogrel. Cardiac catheterization showed 40% stenosis of the mid-LAD and 100% in-stent restenosis of the proximal RCA. Guidewire could not be passed to the mid-RCA due to heavy thrombus burden. A proximal RCA balloon angioplasty was done, and TIMI II flow was achieved; however, the patient had VF during the procedure and received DC cardioversion. Later, the patient was started on an amiodarone drip and subsequently transferred to a tertiary care center where he underwent automatic implantable cardioverter-defibrillator (AICD) placement.
A week later, the patient presented to the emergency room with left arm swelling of one day's duration. He was admitted with suspicion of provoked DVT given the recent AICD placement, and a heparin drip was initiated. Ultrasound of the left upper extremity showed DVT of the left subclavian, axillary, and basilic veins (Figure 9). Cardiac catheterization this time showed double-vessel disease with a patent stent in the proximal RCA, an occluded stent in the mid-RCA, and grade II collateral flow from the LAD to the distal RCA. He was discharged on warfarin, and the hypercoagulability workup was significant for an elevated homocysteine level.
Case 3
An 84-year-old African American female with a medical history of hypertension and urinary incontinence presented to the emergency room with a complaint of sudden onset chest pain while using the restroom. She might have had vagal stimulation from a bowel movement, which could have been contributory to this presentation. She described the pain as substernal heaviness, 10/10 in intensity, non-radiating, associated with diaphoresis, lasted about an hour, and resolved spontaneously. The patient reported no active chest pain on presentation to the emergency room. She was an active smoker with a 25-pack-year history of smoking.
She had a blood pressure of 121/70 mmHg, a heart rate of 110 beats per minute, a respiratory rate of 20 breaths per minute, and 94% oxygen saturation on room air. The physical examination otherwise was significant for pitting edema over both lower extremities. ECG showed sinus rhythm with a ventricular rate of 114 beats per minute, 0.5-1 mm ST segment elevation in lead III, TWIs in leads III and aVF, and minimal ST depression in leads I and aVL ( Figure 10).
Abbreviation in Figure 10: TWI, T wave inversion.
High-sensitivity troponin T level was 237 ng/L (normal, <12 ng/L), and proBNP level was 1711 pg/mL (normal, <125 pg/mL). The patient was diagnosed with NSTEMI; received aspirin, clopidogrel, and a heparin bolus; and underwent cardiac catheterization. Coronary angiogram showed 90% stenosis of the proximal left circumflex artery, 80% stenosis of the first obtuse marginal, 70% stenosis of the distal RCA, and 100% stenosis of the mid-RCA (Figure 11).
FIGURE 11: Coronary angiogram showing 100% obstruction of the midright coronary artery (red arrow).
The patient had successful PCI of the mid-RCA with a drug-eluting stent. Echocardiogram showed an LVEF of 55.9%, abnormal septal motion consistent with right ventricular volume and pressure overload, abnormal diastolic filling, and right ventricular free wall hypokinesis with sparing of the apex (McConnell's sign). The patient was empirically started on therapeutic enoxaparin for suspected PE. CT chest with contrast showed saddle pulmonary embolus, large distal right and left main pulmonary arterial emboli with extension to the upper and lower lobe peripheral branches (Figure 12), and evidence of right heart strain.
FIGURE 12: CT chest showing saddle pulmonary embolism (red arrow).
Ultrasound of the lower extremities showed acute thrombus in right superficial femoral and popliteal veins ( Figure 13).
FIGURE 13: Deep vein thrombosis of the right superficial femoral vein (red arrow).
The patient was started on coumadin with enoxaparin bridging during the hospital course and subsequently was discharged on coumadin, aspirin, clopidogrel, and high-intensity statin. Upon follow-up, a month later, coumadin was switched to direct acting oral anticoagulant (apixaban). Hypercoagulability workup was significant for elevated homocysteine levels.
Discussion
VTE is an umbrella term for blood clots within the venous vasculature, which returns blood to the heart; it includes DVT and PE. Common causes of VTE include major surgery, active cancer, trauma, bone fracture, prolonged immobilization, cigarette smoking, and, among women, pregnancy or the puerperium and the use of oral contraceptives, estrogen, or progestins. Patients with a hypercoagulable state or genetic predisposition are at increased risk.
ACS refers to a group of myocardial ischemic conditions that includes UA, NSTEMI, and STEMI [1]. It occurs due to plaque disruption with superimposed thrombosis [2] in the coronary artery. The risk factors are cigarette smoking, obesity, hypertension, hyperlipidemia, diabetes mellitus, and male sex.
VTE and arterial thrombosis have historically been considered two distinct disease entities with distinct pathologies and etiologies. Venous thrombi consist mainly of red blood cells and fibrin [3], while arterial thrombi consist mainly of platelets [4]; this is why anticoagulants are prescribed for VTE, while antiplatelet agents are prescribed for arterial thromboembolic disease. The risk factors for VTE and arterial thrombosis also differ, yet many patients show overlapping risk factors. Acute arterial thrombosis is usually due to atherosclerotic plaque rupture [5,6], while VTE events mainly occur due to a low-flow state or venous stasis [3]. Since VTE and arterial thrombosis are, as described above, distinct disease processes, we wished to identify risk factors connecting the two, given our cases of dual presentation.
Prandoni found that in 244,865 patients, cigarette smoking was an independent risk factor for arterial thrombosis, and it is an already well-established risk factor for VTE [6]. According to a meta-analysis of 21 studies, current smoking was associated with an elevated risk of VTE (RR, 1.24; 95% CI, 1.14-1.35), as was previous smoking (RR, 1.05; 95% CI, 1.01-10.10), and the risk of VTE was higher with more cigarettes smoked per day [7]. VTE risk was 6.2 times higher in people who were obese, and the obesity-related VTE risk was highest in patients over 50 and in those with class II and III obesity [8]. Marcucci et al. found high levels of lipoprotein(a) in 603 patients with VTE; lipoprotein(a) was an independent risk factor for idiopathic VTE (OR, 2.1; 95% CI, 1.4-3.2) and is, meanwhile, a known marker for atherosclerosis of the arterial system [9].
In 89 patients with proven VTE, coronary artery calcium was found to be more common (51.7%) than in age- and gender-matched controls without VTE (28.1%) (OR, 4.3; 95% CI, 1.9-10.1) [10]. Diabetes and hypertension were also statistically significant predictors of VTE-positive status.
Becattini et al. performed a prospective study of 360 patients with first-ever PE. Patients with unprovoked PE appeared to have a greater rate of subsequent arterial events as well (RR, 7.2; 95% CI, 1.71-30.45), and in age-controlled data the index PE was an independent risk factor for future arterial events [11]. A prolonged 10-year follow-up of the DURAC study in patients with VTE (patients were randomized from April 1988 to April 1991 and followed for 10 years) corroborated these findings [12]: death from AMI and stroke was greater in patients with previous VTE than in the general population (standardized incidence ratio, 1.28; 95% CI, 1.00-1.56). Prandoni et al. conducted a prospective follow-up study of 1919 patients with a first episode of VTE for any incidence of symptomatic arterial disease; after a median follow-up of 4 years, 15.1% of patients with idiopathic VTE had at least one arterial event, compared to 8.5% of patients with secondary VTE [13].
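For readers less familiar with the relative risks and confidence intervals quoted above, the short sketch below shows how such a statistic is computed from a 2x2 table using the standard log-RR normal approximation. The counts are synthetic and do not come from any of the cited studies.

```python
import math

def relative_risk_ci(a, b, c, d, z=1.96):
    """a/b = events/non-events among exposed; c/d = among unexposed."""
    rr = (a / (a + b)) / (c / (c + d))
    # Standard error of log(RR) under the normal approximation.
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

print(relative_risk_ci(60, 940, 50, 950))  # synthetic counts -> RR = 1.2 with its 95% CI
```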
One study demonstrated that patients with a history of arterial cardiovascular events were at increased risk of VTE in the first three months following the index event [14]. Conversely, a longitudinal cohort study of patients aged 20-39 years presenting with unprovoked VTE showed that they had an increased risk of myocardial infarction compared to controls [15]. Apart from acute coronary events, patients with prior VTE are also at increased risk of hospitalization for MI, stroke, and transient ischemic attack within a year after the VTE episode [16,17]. These studies suggest that patients with VTE are at increased risk of subsequent arterial cardiovascular events, although the studies have not described the pathophysiology of such an occurrence. The underlying mechanism could be a hypercoagulable state such as homocysteinemia, lupus anticoagulant, or antiphospholipid antibodies; an intracardiac shunt such as a patent foramen ovale; or a provoked state such as prolonged immobility. This implies that arterial and venous thrombosis may share common mechanisms or risk factors. We can hence conclude that venous and arterial thrombosis are two aspects of the same disease (i.e., thrombosis), which may selectively affect genetically predisposed individuals and manifest as either venous or arterial thrombotic events depending on the presence of underlying risk factors.
The key limitations in this case series are the small size and retrospective nature. Also, the patients had risk factors like HIV, AF, and hyperhomocysteinemia. As researchers, we were interested in discovering an association between the simultaneous occurrence of arterial and venous thrombosis.
Conclusions
After a detailed review of the literature, and based on the above case series and discussion, a causal association between venous and arterial thromboembolic events is likely. However, the evidence supporting this possibility is very limited, and further studies are needed to evaluate whether a definite association exists between the two entities. This case series highlights the potential correlation between acute coronary arterial thrombosis and VTE. Understanding the relationship is crucial because prophylaxis and/or treatment of one condition may benefit patients and medical professionals by preventing hospitalizations caused by the other condition, lowering the overall incidence of such events, and lowering the cost of care.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2023-04-19T15:06:50.267Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "07f6b41820a851d0bb9956c2435d91b155a9a76a",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/149943/20230417-11074-zehi22.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "24c77909d2b1e29b369ad60e57cb9057dfe77eda",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251911414 | pes2o/s2orc | v3-fos-license | Prior episode of colitis impairs contextual fear memory
Accumulating evidence has shown that the intestinal inflammation of inflammatory bowel disease (IBD) also drives pathological responses in organs outside the intestine, including the brain. Previous studies using the dextran sodium sulfate (DSS)-induced colitis model have shown that colonic inflammation contributes to the development of anxiety- and depression-related behaviors; however, little is known about whether memory function is affected. Here, we subjected male and female C57BL/6J mice to DSS-induced colitis for 6 days, followed by Pavlovian conditioned fear (CF) tests 15 days after the start of inflammation, when local colonic inflammation had receded. The contextual and cued CF tests were used to assess associative fear memory. We found that DSS-induced colitis led to significant impairment of contextual fear memory in both male and female mice; on the other hand, auditory cued fear memories were comparable between control and DSS-treated mice. There were marked signs of astrogliosis in the hippocampal regions 17 days (D17) after colitis induction. Furthermore, molecular characterization of hippocampi showed marked but transient increases in the expression of the inflammatory genes Nfkb, Trem2 (microglial marker), GFAP (astrocyte marker), Il1b, and S100a8 in DSS-treated mice. While the expression of Nfkb, Trem2, and GFAP peaked on day 10, S100a8 expression was high on days 10 and 17 and subsided by day 42. Interestingly, Bdnf expression remained elevated at all time points assessed (D10, 17, 42). Together, these results demonstrate that DSS-induced colitis can induce prolonged neuroinflammation and impaired contextual fear memory. Supplementary Information The online version contains supplementary material available at 10.1186/s13041-022-00961-4.
Main text
The prevalence of inflammatory bowel disease (IBD), a chronic inflammatory condition of the gastrointestinal tract, continues to rise [1]. IBDs, including Crohn's disease and ulcerative colitis, are chronic conditions that cycle between periods of active flare and remission. In addition to primary pathologies affecting the intestine, IBD has been linked to neuroinflammation and affects emotional functions including depression and anxiety [2].
To model IBD in rodents, dextran sulfate sodium (DSS)-induced colitis has been widely used; it elicits intestinal pathologies similar to human ulcerative colitis [3]. Previous studies have shown that DSS-induced colonic inflammation leads to increased brain excitability and a neuroinflammatory phenotype, including transcriptional increases in pro-inflammatory genes and infiltration of monocytes and neutrophils [4][5][6]. While DSS-induced colitis has been shown to affect stress-related behaviors and increase anxiety- and depression-like behaviors [5][6][7][8], little is known about whether a prior episode of colitis affects memory function.
In the present study, we subjected male and female C57BL/6J mice to DSS-induced colitis for 6 days, followed by Pavlovian conditioned fear (CF) tests 15 days after the start of colitis induction (Fig. 1a), when local colonic inflammation had receded. The contextual and cued fear conditioning tests are widely used to evaluate associative fear learning and memory [9]. Given the importance of the gut microbiome in the pathogenesis of ulcerative colitis, experimental mice were maintained on the semi-purified OpenSource diet D12450J (Research Diets Inc.) to ensure consistency and reproducibility of the induced disease course. Mice received normal drinking water (control group) or DSS (2%) (MP Biomedicals; 36-50 kDa) in drinking water for 6 consecutive days to induce acute colitis, then were switched back to normal drinking water. All mice were assessed for body weight, fecal consistency, and macroscopic fecal blood scores; detailed methods are described in Additional file 1. Experimental procedures were approved by the animal care committee of Texas A&M University.
Mice given 2% DSS for 6 days exhibited significant disease activity (a composite score of fecal consistency and macroscopic fecal blood) and recovered gradually after cessation of DSS (Fig. 1b). Female C57BL/6J mice developed more severe disease symptoms on days 4-6 than male mice but recovered to similar levels after cessation of DSS. On the training day, mice received a mild foot shock paired with a tone (Fig. 1a). Twenty-four hours later, mice were tested for contextual fear recall by placing them in the same chamber for 5 min; freezing behavior was used as an index of fear memory recall. We found that a prior episode of DSS-induced colitis significantly reduced contextual fear memory in both male and female mice, even after colitis-associated disease symptoms such as diarrhea and rectal bleeding had subsided (treatment: F(1, 35) = 29.1, p < 0.0001; gender: F(1, 35) = 8.939, p = 0.0051; treatment × gender: F(1, 35) = 3.015, p = 0.0913) (Fig. 1c). In the auditory cue test, mice were placed in a chamber of a novel shape with altered coloring, flooring and odor, and freezing in response to the tone was assessed (percent freezing during the tone presentation minus percent freezing in the pre-stimulus phase). Some male mice showed a cued fear response below the 5% threshold and were excluded from data analysis, while all female mice responded. Overall, freezing behavior to the auditory cue was comparable between control and DSS-treated mice (Fig. 1d).
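A hedged sketch of the statistical comparison described above: a two-way ANOVA with treatment and sex as independent factors, run on synthetic freezing scores with the group sizes reported in the figure legend (n = 10, 12, 9, 8). The paper does not name its analysis software, so statsmodels is shown here as one standard choice.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
# Synthetic freezing percentages for male-ctrl, male-DSS, female-ctrl, female-DSS.
df = pd.DataFrame({
    "freezing": np.concatenate([rng.normal(60, 10, 10), rng.normal(35, 10, 12),
                                rng.normal(65, 10, 9),  rng.normal(30, 10, 8)]),
    "treatment": ["ctrl"]*10 + ["dss"]*12 + ["ctrl"]*9 + ["dss"]*8,
    "sex": ["M"]*22 + ["F"]*17,
})
model = ols("freezing ~ C(treatment) * C(sex)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p for main effects and interaction
```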
It has been shown that contextual information is encoded by neurons in the hippocampus and conveyed directly to the amygdala, which generates conditioned fear responses [10,11]. We next assessed astrogliosis as an indication of neuroinflammation [12] by immunostaining the astroglial protein GFAP (glial fibrillary acidic protein) in brain tissues collected on day 17. Astrogliosis was defined by increased GFAP expression with hypertrophic morphology (enlarged cytoplasmic area and thickened processes) [12]. We found increased numbers of GFAP-positive cells with hypertrophic morphology, resembling reactive astrocytes, in the hippocampus of DSS-treated mice (Fig. 1e). To further explore the temporal changes of the neuroinflammatory response to colonic inflammation, we collected hippocampi from control and DSS-treated mice on days 10, 17 and 42 after colitis induction. Quantitative PCR showed that expression of the inflammatory genes Nfkb, Trem2 (microglial marker), Gfap, and Il1b was significantly increased on day 10 and decreased thereafter, returning to basal levels by day 42 except for Il1b (Fig. 1f-i, respectively). Similarly, a previous study demonstrated persistent elevation of Il1b mRNA in the hippocampus 4 weeks after acute colitis [6]. Interestingly, expression of the S100 calcium-binding protein S100A8 (S100a8) peaked on day 17 and subsided to the basal level on day 42 (Fig. 1j). S100A8 is a ligand for RAGE (receptor for advanced glycation end products); it has been reported that S100A8 accumulates in the brain before the appearance of Aβ plaques in mice overexpressing the precursor of Aβ [13]. Furthermore, expression of the neurotrophic factor Bdnf remained elevated compared to controls (Fig. 1k), suggesting an ongoing repair process in the hippocampus long after the resolution of the clinical symptoms of colonic inflammation.
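The relative-expression values behind figures like 1f-1k are commonly derived with the 2^-ddCt method; since the paper does not state its quantification procedure, treating it this way is an assumption, and the Ct values below are synthetic. A minimal worked example:

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt relative quantification of a target gene vs. a control group."""
    d_ct_sample = ct_target_sample - ct_ref_sample  # normalize to a reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_sample - d_ct_ctrl)          # ddCt -> fold change

# e.g., Il1b in a DSS hippocampus vs. control, normalized to a housekeeping gene:
print(fold_change(24.0, 18.0, 27.0, 18.2))  # ~7-fold increase with these synthetic Cts
```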
Fig. 1 Prior exposure to DSS-induced colitis led to neuroinflammation. a Schematic diagram of the experimental design. Mice were given normal (control) or 2% DSS drinking water for 6 days (days 0-6), then switched to normal drinking water and allowed to recover, mimicking clinical remission. Mice were then subjected to conditioned fear (CF) tests on days 15-16. For the in vivo study, n = 10, 12, 9, and 8 for the male-control, male-DSS, female-control, and female-DSS groups, respectively. b Disease activity, including fecal consistency and rectal pathologies, was monitored. Data were analyzed with two-way ANOVA (treatment × repeated measures) followed by Tukey's multiple comparisons test; #p < 0.05, male-DSS vs. female-DSS. c, d DSS-exposed mice showed significantly impaired contextual fear memory in both male and female mice (c) but comparable auditory fear memory (d); two-way ANOVA (treatment and gender as independent factors) followed by Tukey's multiple comparisons test. e Representative images of the hippocampal regions on day 17. DSS-exposed mice showed increased astrogliosis (increased abundance and cell size of GFAP-labeled astrocytes, green); sections were counterstained with DAPI (blue). Scale bar 50 μm. Images were taken from 3 stained sections per brain, 3 brains per group, processed with ImageJ, and the areas of GFAP-labeled cells quantified. f-k Quantitative PCR analyses of the expression of Nfkb, Trem-2, Gfap, IL-1b, S100a8, and Bdnf in hippocampi collected from control and DSS-treated mice on days 10, 17, and 42 (n = 4 per group); one-way ANOVA followed by Tukey's multiple comparisons test, *P < 0.05, **P < 0.01, ***P < 0.001, and ****P < 0.0001. All data are presented as mean ± SD, except for 1b, where disease scores are presented as mean ± SEM for clarity.

In conclusion, our study showed that contextual memory function was negatively impacted by an episode of colitis, with prolonged neuroinflammation in the hippocampal regions. Further work is required to determine mechanistically the interactions between the innate neuroinflammatory response and neurons encoding
specific inputs to the hippocampal-amygdala neurocircuit that affect contextual fear memory [11]. Of note, clinical functional MRI data showed that patients with active-stage ulcerative colitis exhibited decreased hippocampal/parahippocampal activity that correlated with memory loss [14]. Overall, our data suggest that, in addition to clinical management of the symptoms of IBD, strategies to monitor and reduce neuroinflammation may need to be considered to prevent potential progression to chronic disease conditions such as dementia or neurodegenerative diseases, given that IBD patients are at increased risk for neurodegenerative diseases including Parkinson's disease and dementia [15]. | 2022-08-30T14:04:42.829Z | 2022-08-29T00:00:00.000 | {
"year": 2022,
"sha1": "4724fc6ea8b05439e8ab013092ddbd3cdf40b911",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "2a3ce3ca4327e0a95d68774f8b9ecffaa9a6da06",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239750425 | pes2o/s2orc | v3-fos-license | Expression of matrix metalloproteinase-9 in histological grades of oral squamous cell carcinoma: An immunohistochemical study
Context: Oral squamous cell carcinoma (OSCC) is characterized by a high degree of local invasiveness and metastasis to cervical lymph nodes and distant sites. Degradation of the extracellular matrix (ECM) requires the concerted action of several extracellular enzymes, the most prominent of which are matrix metalloproteinases (MMPs). Proteolytic degradation of ECM components by MMP-9 facilitates carcinoma cell invasion and enhances angiogenesis and tumor progression. Objective: To assess and correlate the immunohistochemical expression of MMP-9 with clinicopathological parameters and histological grades of OSCC. Settings and Design: Thirty histopathologically diagnosed cases of OSCC, including 12 cases of well-differentiated squamous cell carcinoma, 12 cases of moderately differentiated squamous cell carcinoma and 6 cases of poorly differentiated squamous cell carcinoma, were included in the study group. Materials and Methods: The samples were stained using monoclonal antibodies against MMP-9 and visualized using the polymer-HRP detection system. Expression of MMP-9 was assessed separately in the tumor epithelium/parenchyma and the connective tissue stroma, and the mean of both was considered the average MMP-9 expression. Statistical Analysis: The parametric independent-samples t test, one-way ANOVA test and Pearson's correlation test were used for the statistical analysis. Results: Immunoexpression of MMP-9 increased with advancing stage and histological grade of OSCC, with statistically significant results. Conclusion: MMP-9 plays an important role in invasion and metastasis and can serve as an independent prognostic marker.
INTRODUCTION
Globally, oral cancer is a major health hazard, accounting for about 5% of all malignant tumors. The global annual incidence has been reported as 8.2/100,000 for males and 2.8/100,000 for females. [1,2] More than 90% of all oral cancers are oral squamous cell carcinomas (OSCCs). [3] The highest prevalence and incidence of OSCC are found in the Indian subcontinent, where it ranks among the top three cancers in the country. [4] Cancer incidence and mortality are growing rapidly worldwide. The reasons are complex but reflect changes in the prevalence and distribution of the main risk factors for cancer, several of which are associated with socioeconomic development. [5,6] The 5-year survival rate for patients with head and neck squamous cell carcinoma (HNSCC) is approximately 50% and has not improved significantly over the past five decades, despite advances in treatment techniques and modalities. [7] Oral cancer is characterized by a high degree of local invasiveness and metastasis to cervical lymph nodes. Metastasis is a complex process by which cancer cells disseminate from the primary tumor to distant sites. Cervical lymph node metastasis (LNM) is an essential malignancy criterion in oral cancer, and nearly 40% of patients with oral cancer develop lymph node metastases. [8] Tissue invasion and metastasis require extensive remodeling and degradation of the extracellular matrix (ECM), which requires the concerted action of several extracellular enzymes, the most prominent of which are matrix metalloproteinases (MMPs). [9]
MATERIALS AND METHODS
The study was conducted in the Department of Oral Pathology and Microbiology at the institute hospital. Thirty histopathologically diagnosed cases of OSCC were included in the study group. Ethical clearance from the institutional ethical committee and informed consent from the patients was obtained for the present study. Demographic data of the cases, habit history, duration and frequency of habit and clinical diagnosis were recorded.
Staging of OSCC was done according to the staging system by the American Joint Committee for Cancer Staging and End Result Reporting. [18] The OSCC cases were graded according to the histologic malignancy grading system given by Bryne et al. [19] The clinicopathological parameters of these patients are summarized in Table 1.
Immunohistochemical staining
Formalin-fixed, paraffin-embedded tissues were sectioned and stained. The antibody reaction was visualized using a fresh substrate/chromogen solution of 3,3'-diaminobenzidine (DAB) in the provided buffer (25 µl of concentrated DAB mixed into 500 µl of substrate buffer) for 10 min. The sections were counterstained with hematoxylin, dehydrated and mounted using DPX (dibutyl phthalate polystyrene xylene).
Breast cancer tissue was used as a positive control. For the negative control, the primary antibody for MMP-9 was replaced by a solution of bovine serum albumin in PBS solution and each set of staining always included a separate known positive control.
Evaluation of immunoexpression of matrix metalloproteinase-9
For quantitative analysis, MMP-9-positive cells were counted in 10 high-power fields (magnification: ×400) of a light microscope (Olympus CH 20i). Expression of MMP-9 was assessed separately in the tumor epithelium/parenchyma and the connective tissue stroma, and the mean of both was taken as the average MMP-9 expression. The slides were analyzed by an observer blinded to the clinical data. Expression was analyzed semi-quantitatively and scored according to the method proposed by Franchi et al. [20]: 0 = no stained cells; 1 = ≤25% stained cells; 2 = >25% and ≤50% stained cells; 3 = >50% and ≤75% stained cells; 4 = >75% stained cells.
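The scoring scheme above maps directly onto a small function. The sketch below encodes it as stated, with percent_positive taken to be the percentage of stained cells averaged over the 10 high-power fields.

```python
def franchi_score(percent_positive: float) -> int:
    """Semi-quantitative MMP-9 score after Franchi et al. [20]."""
    if percent_positive == 0:
        return 0
    if percent_positive <= 25:
        return 1
    if percent_positive <= 50:
        return 2
    if percent_positive <= 75:
        return 3
    return 4

# Sanity check against the published cut-offs.
assert [franchi_score(p) for p in (0, 10, 40, 60, 90)] == [0, 1, 2, 3, 4]
```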
Statistical analysis
All statistical analyses were performed using the SPSS software system version 19 (IBM Inc, Chicago, Illinois, USA). Descriptive statistics were used for demographic data and summarized as mean with standard deviation and as a number with percentage for discrete variables.
The parametric independent samples "t" test and one-way ANOVA test were applied to evaluate significant differences among the mean values in different groups. Pearson's correlation test was applied to study the correlation between MMP-9 scores with clinicopathological parameters. P <0.05 was considered to indicate statistical significance at 95% of the confidence interval.
RESULTS
On comparison of MMP-9 expression with demographic data, we found that the mean MMP-9 score increased with advancing age; this result was statistically significant (P = 0.05). Females exhibited higher mean MMP-9 scores than males. The mean MMP-9 score was highest in OSCC of the tongue, followed by the floor of the mouth, buccal mucosa and other sites [Table 2]. However, the differences in MMP-9 expression by gender (P = 0.188) and site (P = 0.259) were not statistically significant.
We compared MMP-9 expression with the nodal status of OSCC and found that the mean MMP-9 score increased with regional lymph node involvement; this difference was highly statistically significant (P = 0.00) [Table 4].
A comparison of MMP-9 expression with the stage of OSCC showed that the mean MMP-9 score was higher in advanced stages (P < 0.05). Pairwise comparison showed that the mean MMP-9 score was significantly lower in Stage I than in Stage III (P = 0.01) and Stage IV (P = 0.02), and significantly lower in Stage III than in Stage IV (P = 0.001) [Graph 1 and Table 5].
On comparison of MMP-9 expression with histological grades of OSCC, we found a higher mean MMP-9 score in poorly differentiated squamous cell carcinomas [Figures 1-3]. The difference was statistically significant. Pairwise comparisons of the mean MMP-9 score between grades showed statistically significant differences (P < 0.05) between Grades I and II (P = 0.018), Grades I and III (P = 0.001), and Grades II and III (P = 0.003) [Graph 2 and Table 6].
DISCUSSION
Oral cancer is one of the most common cancers in the world; an estimated 378,500 new cases of intraoral cancer are diagnosed annually worldwide. In parts of India, oral cancer represents more than 50% of all cancers and is the most common cancer among males and the third most common among females. [21] Indian cancer mortality statistics estimated that 71% of cancer deaths occurred in people aged 30-69 years, among whom oral cancer was the most prevalent fatal malignancy, accounting for 22.9% of deaths. [22] Cervical lymph node or distant organ metastasis, while a potential prognostic indicator, is responsible for the poor survival rates of patients with oral cancer. Epidemiological data indicate 5-year survival rates of 80%, 70%, 56.9% and 36.8% for oral cancer patients with Stage I, II, III and IV disease, respectively. [8] Tumor metastasis is facilitated by a highly coordinated tandem of increased migratory ability and increased proteolytic activity toward ECM components. Proteolytic degradation of the ECM is an essential part of this process, and several enzyme systems, including serine proteinases, cysteine proteinases and MMPs, are involved. The first step in metastasis is degradation of the underlying basement membrane, which consists mainly of type IV collagen; MMP-9 plays an important role here because of its ability to destroy this type of collagen. [23]
On comparison of demographic data with MMP-9 expression, we found a statistically significant difference between patient's age but not in sex and site. Our results are in accordance with the studies done by O-Charoenrat et al., [17] Ruokolainen et al., [25] Dunne et al., [26] Zhou et al. [27] and Mäkinen et al. [28] On the contrary, Dai et al. [29] found higher MMP-9 expression in male OSCC patients than female OSCC patients (P < 0.05). Mohtasham et al. [30] found a positive correlation between MMP 9 and E-cadherin expression with the primary site of tumors.
In the present study, MMP-9 expression increased as the tumor size (T) increased (from T1 to T3) and was also found to be statistically significant. We found a statistically significant difference between the MMP-9 expression in the presence (N1, N2) and absence (N0) of cervical LNM with the increased intensity of staining in nodal-positive cases compared to node-negative cases [Graph 3]. Our results are in concordance with the studies done by O-Charoenrat et al., [17] Franchi et al., [20] de Vicente et al., [24] Dunne et al., [26] Zhou et al., [27] Kurahara et al., [31] Hong et al., [32] Katayama et al. [33] and Ogbureke et al. [34] All these studies found a significant correlation of MMP-9 expression with the T stage and regional lymph node involvement. On the contrary, Ruokolainen et al., [25] Ikebe et al., [35] Riedel et al. [36] and Guttman et al. [37] did not find a correlation between MMP-9 expression and primary tumor size and neck node metastasis.
On the assessment of MMP-9 expression in different clinical stages of OSCC, strong MMP-9 expression was noted in advanced stages of OSCC with statistically significant results. The pairwise intragroup comparison showed MMP-9 expression score was significantly lower in Stage I as compared with Stage III and stage IV OSCC patients. Furthermore, the MMP-9 score was significantly lower in Stage II as compared with Stage IV. Thus, MMP-9 expression adds a predictive power of the outcome of pathological stages. Our results are in concordance with the studies done by O-Charoenrat et al., [17] Dunne et al., [26] Dai et al. [29] and Riedel et al. [36] who found a statistically significant MMP-9 expression with advanced stages of HNSCC. Riedel et al. [36] concluded in their study that MMP-9 may be a useful marker for clinical monitoring of HNSCC patients. On the contrary Ruokolainen et al., [25] Mäkinen et al., [28] Guttman et al. [37] and Kato et al. [38] did not find a correlation between MMP-9 expression and tumor nodes metastasis staging of OSCC.
On the correlation of MMP-9 expression with histological grades of OSCC, we observed that MMP-9 expression gradually increased as the tumor progressed from Grade I through Grade II to Grade III; this was statistically highly significant (P = 0.00) [Table 6]. On intragroup assessment, we found significant differences in the MMP-9 expression score between Grades I and II, Grades I and III, and Grades II and III.
Graph 1: Comparison of mean MMP-9 expression scores across tumor-node-metastasis stages.
Graph 2: Comparison of mean MMP-9 expression scores across histological grades.
Graph 3: Correlation of mean MMP-9 expression scores in the regional non-metastatic (Stage I + II) and regional metastatic (Stage III + IV) groups.
We observed MMP-9 expression largely in tumor cells and also in the adjacent stromal and inflammatory cells. It is conceivable that dynamic host-tumor interactions modulate MMP levels and influence the progression of human tumors, and the tumor stroma is also a determinant of tumor progression.
We found that overexpression of MMP-9 was strongly associated with nodal metastasis and advanced stages of OSCC, so MMP-9 expression can be considered a strong prognostic factor for the locoregional spread and clinical behavior of OSCC. MMP-9 overexpression in higher grades of OSCC correlated closely with carcinoma invasion and progression. Thus, MMP-9 may be useful in determining the prognosis of patients with OSCC.
CONCLUSION
Immunohistochemical analysis of MMP-9 in tumor and stromal cells at the tumor invasion front demonstrated an overall high expression of this protein in all the OSCC cases studied, suggesting that it plays an active role in tumor invasion and progression. This observation may be important in devising strategies to target MMP-9 in cancer, which may require inhibitors of its catalytic activity as well as new tools to block its protein-binding functions. Taken together, these observations underline the importance of targeting MMP-9 and open new perspectives for the therapeutic inhibition of protease function in cancer.
Financial support and sponsorship
Nil.
Conflicts of Interest
There is no conflict of interest. | 2021-10-26T00:08:17.810Z | 2021-05-01T00:00:00.000 | {
"year": 2021,
"sha1": "d9e466dcd4698cf7df4604a27997cc7fce2f54dc",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "fba363732ea118cbca979a0dbfc2f1dcb8db62c0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266923933 | pes2o/s2orc | v3-fos-license | Correlation between Peptacetobacter hiranonis, the baiCD Gene, and Secondary Bile Acids in Dogs
Simple Summary Dysmetabolism of bile acids has been linked to chronic enteropathy in dogs. Peptacetobacter (Clostridium) hiranonis has been described as the major species responsible for converting primary into secondary bile acids in dogs. Moreover, decreased P. hiranonis abundance has been linked to chronic enteropathy and antibiotic-induced dysbiosis in dogs and cats. Therefore, this study aimed to investigate further the correlation between P. hiranonis, the bacterial gene (baiCD) involved in bile acid conversion, and the conversion process per se. Our findings indicate a strong and significant correlation between P. hiranonis, baiCD, and the relative concentration of secondary bile acid in dogs. Abstract Bile acid metabolism is a key pathway modulated by intestinal microbiota. Peptacetobacter (Clostridium) hiranonis has been described as the main species responsible for the conversion of primary into secondary fecal unconjugated bile acids (fUBA) in dogs. This multi-step biochemical pathway is encoded by the bile acid-inducible (bai) operon. We aimed to assess the correlation between P. hiranonis abundance, the abundance of one specific gene of the bai operon (baiCD), and secondary fUBA concentrations. In this retrospective study, 133 fecal samples were analyzed from 24 dogs. The abundances of P. hiranonis and baiCD were determined using qPCR. The concentration of fUBA was measured by gas chromatography–mass spectrometry. The baiCD abundance exhibited a strong positive correlation with secondary fUBA (ρ = 0.7377, 95% CI (0.6461, 0.8084), p < 0.0001). Similarly, there was a strong correlation between P. hiranonis and secondary fUBA (ρ = 0.6658, 95% CI (0.5555, 0.7532), p < 0.0001). Animals displaying conversion of fUBA and lacking P. hiranonis were not observed. These results suggest P. hiranonis is the main converter of primary to secondary bile acids in dogs.
Introduction
Bile acid (BA) metabolism is an important pathway involved in regulating metabolic homeostasis and is modulated by the intestinal microbiota. These regulatory roles include lipid and glucose metabolism, energy production, and inflammatory signaling [1,2]. In the context of BA metabolism, cholic acid (CA) and chenodeoxycholic acid (CDCA) are generated in the liver through the catabolism of cholesterol. Subsequently, they are conjugated to the amino acids glycine and taurine by the enzyme amino acid N-acyltransferase [3,4]. In dogs, most bile acids are conjugated to taurine [5,6]. Once conjugated, BAs are actively secreted from the liver, passing through the canalicular membrane into the gall bladder and ultimately into the intestinal lumen [7]. Alongside dietary lipids and lipid-soluble nutrients, BAs form micellar structures that facilitate absorption by the enterocytes. This process is essential for the proper digestion and absorption of nutrients, reinforcing the maintenance of overall gastrointestinal health. Approximately 95% of BAs undergo reabsorption via the portal vein in a process referred to as enterohepatic circulation [8,9].
In the distal portion of the gastrointestinal tract, the microbiota plays a crucial role in two essential metabolic functions related to conjugated and unconjugated BAs that escape reabsorption: the deconjugation of glycine- and taurine-conjugated bile acids, and the conversion of primary to secondary fecal unconjugated bile acids (fUBA) [9]. Several bacterial species deconjugate glycine- and taurine-conjugated BAs through bile salt hydrolase (BSH) activity [10,11]. However, only a few bacterial species have been identified as responsible for converting primary fUBAs (i.e., CA and CDCA) into the secondary fUBAs deoxycholic acid (DCA) and lithocholic acid (LCA) [7] through the 7α-dehydroxylation pathway. Among these bacteria, specialized anaerobic species found in the Peptacetobacter genus, previously known as Clostridium hiranonis, possess the bile acid-inducible (bai) operon, which encodes the enzymes responsible for the conversion of primary to secondary fUBA [12-15].
Peptacetobacter hiranonis, also known as Clostridium hiranonis, is an anaerobic, spore-forming, Gram-positive bacterium [15,16]. In dogs, P. hiranonis is described as a biomarker of gastrointestinal functionality and is closely linked to the maintenance of balanced gastrointestinal health [17]. P. hiranonis occupies a pivotal role in the conversion of primary to secondary fUBA and is described as the main species with this ability within the canine and feline gastrointestinal microbiome [18-21]. This conversion involves a multi-step biochemical pathway encoded by the bai operon. Initially, the primary fUBA enters the cell through a transporter encoded by the baiG gene. Subsequently, it is conjugated to coenzyme A (CoA) by a CoA ligase encoded by the baiB gene. The fUBA, bound to coenzyme A, undergoes oxidation by a dehydrogenase encoded by the baiA gene. Lastly, baiH and baiCD encode the enzymes responsible for the dehydroxylation of those molecules [22,23].
Dysmetabolism of BAs has been linked to chronic enteropathy (CE) in dogs [24,25], as well as to other chronic inflammatory diseases in humans, including obesity, type 2 diabetes, and liver diseases [26,27]. Disruption of BA metabolism has been associated with reduced P. hiranonis abundance in several studies [21] and is commonly observed after antibiotic usage and in dogs with CE [19,24,25]. Although the link between P. hiranonis and BA metabolism in dogs has been described [19,25], our understanding of the more complex correlation between them is still evolving and remains incompletely elucidated. In this study, we aimed to assess the correlations between (1) P. hiranonis and baiCD gene abundances, (2) P. hiranonis abundance and the relative concentration of secondary fUBA, and (3) baiCD gene abundance and the relative concentration of secondary fUBA in fecal samples, and (4) to determine the likelihood that other unidentified bai operon-carrying bacterial species are responsible for BA conversion in dogs.
Animals
In this retrospective study, leftover fecal samples from a previously published study were used [19]. Canine samples were collected from 24 clinically healthy dogs enrolled under a study protocol approved by the Institutional Animal Care and Use Committee at Louisiana State University (Protocol No. 14-027). Health history (no past gastrointestinal abnormalities and no use of antibiotics in the past 12 months), physical examination, complete blood count, and serum chemistry were evaluated. The study design was described in detail by Pilla et al. [19]; the crucial information and additional analyses are also stated below.
Sampling
As previously described by Pilla et al. [19], 136 fecal samples were collected at different time points over 84 days, aliquoted, and frozen within 4 h of collection. Samples were kept at −80 °C until further analysis. The control group (group 1) did not receive an intervention, and fecal samples were collected on days 0, 7, 21, and 42. Group 2 received a hydrolyzed protein diet and oral metronidazole administration (15 mg/kg every 12 h) for two weeks, between days 42 and 56, and fecal samples were collected on days 0, 7, 21, 42, 49, 56, 70, and 84. Group 3 received only oral metronidazole (15 mg/kg every 12 h) for two weeks, between days 0 and 14, and samples were collected on days 0, 7, 14, 28, and 42. In this study, we re-categorized the assessed time points from all animals into three groups: those in the absence of antibiotic administration, those during antibiotic administration, and those post antibiotic administration. This categorization allowed us to assess the changes in the relative concentration of secondary fUBA and in P. hiranonis and baiCD gene abundances before, during, and after metronidazole-induced intestinal dysbiosis.
Specificity of the baiCD Primer and Calibration Curve
The specificity of the baiCD primer was assessed via agarose gel electrophoresis. The baiCD amplicon was extracted from the gel using the QIAquick® Gel Extraction Kit (QIAGEN, Hilden, Germany) and ligated into a pCR®4-TOPO® vector (Invitrogen™ Life Technologies). Subsequently, the vector was transformed into competent DH5α-T1R Escherichia coli using the TOPO™ TA Cloning™ Kit (Invitrogen™ Life Technologies, Carlsbad, CA, USA). The plasmid containing the amplicon was purified using a QIAprep Spin Miniprep Kit (QIAGEN) and confirmed through a conventional PCR assay and by Sanger sequencing at Eton Bioscience, Inc. (San Diego, CA, USA).
The standard curve for DNA quantification was constructed using a ten-fold dilution series of the purified plasmid. The qPCR results were expressed as the log amount of DNA (number of copies) per 10 ng of isolated total DNA. The amplicon length, melting peak temperature, efficiency of the qPCR assay, and the coefficient of determination (R²) of the calibration curve are summarized in Supplementary Table S1. Additionally, besides detecting the baiCD amplicon and confirming its size, we assessed the specificity of our primers against different bacterial species, including the bile acid converter C. scindens (wild-type strain) and other intestinal bacteria such as E. coli (ATCC® 25922), Faecalibacterium duncaniae (DSM 17677), Akkermansia muciniphila (ATCC® BAA-835), and Clostridium difficile (wild-type strain). The experimental data are summarized in Supplementary Table S2.
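As an aside on the calibration-curve arithmetic: fitting Cq against log10(copy number) for the dilution series yields a slope from which the assay efficiency follows as E = 10^(−1/slope) − 1. The short Python sketch below shows this standard calculation; the numbers are illustrative, not the study's data.

import numpy as np

# Illustrative ten-fold dilution series: log10 copy numbers and measured Cq.
log_copies = np.array([7, 6, 5, 4, 3, 2], dtype=float)
cq = np.array([12.1, 15.5, 18.9, 22.3, 25.6, 29.0])

slope, intercept = np.polyfit(log_copies, cq, 1)
r2 = np.corrcoef(log_copies, cq)[0, 1] ** 2
efficiency = 10 ** (-1 / slope) - 1  # standard qPCR efficiency formula
print(f"slope={slope:.3f}, R^2={r2:.4f}, efficiency={efficiency:.2%}")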
Fecal Bile Acid Analysis
The concentrations of primary fUBA (CA and CDCA) and secondary fUBA (LCA, DCA, and ursodeoxycholic acid (UDCA)) were measured using gas chromatography–mass spectrometry (GC-MS), as previously described by Blake et al. [25]. Data were primarily reported in micrograms per milligram of lyophilized feces (µg/mg of fecal dry matter) and transformed to a percentage of the total fUBA measured. Total primary fUBA was defined as the sum of CA and CDCA, and total secondary fUBA as the sum of LCA, DCA, and UDCA. Finally, the total fUBA constitutes the sum of all evaluated fUBA (CA, CDCA, LCA, DCA, and UDCA), and the percentages (%) of primary and secondary fecal unconjugated bile acids were calculated based on the total concentration of fUBA in the fecal dry matter (Supplementary Table S3).
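The percentage transformation described above is simple arithmetic; the following Python sketch illustrates it (the concentration values and variable names are ours, for illustration only).

# Concentrations in ug/mg of fecal dry matter (illustrative values).
conc = {"CA": 0.8, "CDCA": 0.2, "LCA": 1.5, "DCA": 2.5, "UDCA": 0.1}

total_primary = conc["CA"] + conc["CDCA"]
total_secondary = conc["LCA"] + conc["DCA"] + conc["UDCA"]
total_fuba = total_primary + total_secondary

pct_primary = 100.0 * total_primary / total_fuba
pct_secondary = 100.0 * total_secondary / total_fuba
print(f"primary: {pct_primary:.2f}%, secondary: {pct_secondary:.2f}%")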
Statistical Analysis
The Shapiro–Wilk test was used to assess the normality of the data. Spearman's test was employed to assess the correlations between the abundance of P. hiranonis, the abundance of the baiCD gene, and the relative concentration of secondary fUBA in all evaluated samples, with groups 1, 2, and 3 combined into one single group (GraphPad Prism 9.4.1). The effects of the hydrolyzed protein diet on the assessed variables (secondary fUBA relative concentration and the abundances of both P. hiranonis and the baiCD gene) were measured by comparing multiple time points to day 0 (baseline). Additionally, the effects of metronidazole administration on the assessed variables were measured by comparing multiple time points (days 49, 56, 70, and 84) to day 42 in group 2, and to day 0 in group 3. The effects of the hydrolyzed protein diet and metronidazole administration were assessed through Friedman's test with Dunn's multiple-comparison adjustment (GraphPad Prism 9.4.1). A p-value of less than 0.05 was considered statistically significant.
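The original analyses were run in GraphPad Prism; as a rough open-source equivalent (our assumption, not the authors' code), the same nonparametric tests can be reproduced with SciPy, as sketched below with synthetic stand-in data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative stand-ins for the measured variables across all samples.
p_hiranonis = rng.normal(6.0, 1.0, 133)                     # log DNA abundance
secondary_fuba = 10 * p_hiranonis + rng.normal(0, 15, 133)  # % of total fUBA

rho, p_value = stats.spearmanr(p_hiranonis, secondary_fuba)
print(f"Spearman rho = {rho:.4f}, p = {p_value:.2e}")

# Friedman's test compares repeated measures across time points
# (one value per dog per time point).
day0 = rng.normal(90, 5, 8)
day7 = rng.normal(40, 10, 8)
day14 = rng.normal(30, 10, 8)
chi2, p_friedman = stats.friedmanchisquare(day0, day7, day14)
print(f"Friedman chi2 = {chi2:.2f}, p = {p_friedman:.4f}")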
Abundances of P. hiranonis, baiCD Gene, and C. scindens
The abundances of P. hiranonis and the baiCD gene decreased during and after antibiotic administration. A drastic reduction was observed during the period when the animals were receiving metronidazole orally (Table 1). Of the 133 fecal samples, 113 had leftover extracted DNA available for the quantification of C. scindens via qPCR. C. scindens was not detected in any of the evaluated samples (0%; 0/113).
Fecal Unconjugated Bile Acids
The relative concentration of secondary fUBA (LCA and DCA) decreased during the period when animals were receiving the antibiotic (Figure 1A), following the pattern observed in the abundances of P. hiranonis and the baiCD gene (Figure 1B,C, respectively).
Correlation Analysis between P. hiranonis, the baiCD Gene, and the Relative Concentration of Secondary Fecal Unconjugated Bile Acids
Spearman's rank correlation was computed to assess the relationships between P. hiranonis, the baiCD gene, and the relative concentration of secondary fUBA; all groups were combined for the correlation analysis. Positive correlations were observed between P. hiranonis and baiCD abundances (ρ = 0.8230, 95% CI (0.7570, 0.8724), p < 0.0001), as well as between P. hiranonis and the relative concentration of secondary fUBA (ρ = 0.6658, 95% CI (0.5555, 0.7532), p < 0.0001). The baiCD gene abundance also showed a positive correlation with the relative concentration of secondary fUBA (ρ = 0.7377, 95% CI (0.6461, 0.8084), p < 0.0001) (Figures 2 and 3). Animals displaying high levels of conversion of primary to secondary fUBA while lacking P. hiranonis were not observed (red-colored area in Figure 2B).
Figure 3. Scatter plots of multiple time points (days; "D") from 24 dogs randomly assigned to three groups: group 1 (control) did not receive any intervention, group 2 received a hydrolyzed protein diet followed by oral metronidazole, and group 3 received only oral metronidazole. Animals were color-coded individually within their group. The red lines under the graphs represent the period during which animals were receiving metronidazole orally, while the blue lines represent the period during which animals were receiving the hydrolyzed protein diet. The shaded areas indicate the reference intervals for healthy dogs.
Effects of Metronidazole Administration and Hydrolyzed Protein Diet on P. hiranonis, the baiCD Gene, and the Relative Concentration of Secondary Fecal Unconjugated Bile Acids
The effects of oral metronidazole and the hydrolyzed protein diet on the assessed variables (secondary fUBA relative concentration and the abundances of both P. hiranonis and the baiCD gene) were measured by comparing multiple time points to the baseline time point using Friedman's test with Dunn's multiple-comparison adjustment. This analysis was performed at multiple time points for dogs within the same group. There were no significant changes observed in group 1, or in group 2 during the dietary trial, except on day 7, when the relative concentration of secondary fUBA was slightly higher compared to day 0 (day 0 median: 91.40%; day 7 median: 97.81%), probably influenced by an outlier (data shown in Figure 3). During metronidazole administration in group 2, the secondary fUBA relative concentration and the abundances of both P. hiranonis and the baiCD gene were significantly reduced on days 49 and 56 compared to the day before metronidazole administration (day 42; p < 0.05). Similarly, metronidazole administration significantly reduced the secondary fUBA relative concentration, P. hiranonis, and baiCD on days 7 and 14 compared to day 0 in group 3 (p < 0.05).
Discussion
The correlations between P. hiranonis, baiCD, and secondary fUBA demonstrate a strong association between the conversion of BAs and the abundance of P. hiranonis. Recent studies have highlighted the role of P. hiranonis as the main bai operon-carrying species in dogs and cats, as evidenced by a strong correlation between P. hiranonis abundance and the relative concentration of secondary fUBA [30]. This is similar to humans, in whom C. scindens appears to be the major player in BA conversion [31,32].
Unlike many other metabolic pathways, in which distinct bacterial species share the same function (for example, bile salt hydrolase activity), the bai operon seems to be unique and is scarcely found in the bacterial kingdom. BSH, also known as choloylglycine hydrolase, catalyzes the hydrolysis of amino acid-conjugated BAs into unconjugated BAs. Lactobacillus, Bifidobacterium, Enterococcus, Clostridium, and Bacteroides are known to express BSH activity, exhibiting variability in catalytic efficiency, substrate preference, and the number of gene copies carried by each bacterial strain [10]. On the other hand, the bai operon has only been described in a few species from the Clostridiaceae, Lachnospiraceae, Peptostreptococcaceae, and Ruminococcaceae families; nevertheless, it exerts a significant effect on both healthy and diseased states in animals, including humans, because it plays an important role in bile acid metabolism [22,33].
In our study, dogs exhibiting higher levels of secondary fUBA relative concentration while lacking P. hiranonis and baiCD were not observed (red-colored area in Figure 2B), except for a single sample that had a P. hiranonis abundance of 4.8 log DNA, just below the lower reference interval of 5.1 log DNA. This finding may suggest that the true threshold for bile acid conversion is slightly lower than the published reference interval [34]. However, P. hiranonis abundances within the reference interval were found in animals without conversion of BAs (blue-colored area in Figure 2B). As with other metabolic functions, the baiCD gene may be detected but inactivated or expressed at lower activity levels. It is also possible that bile acid conversion was not detected due to the limitations of the GC-MS assay, or that secondary bile acid levels were lower due to faster transit times. Indeed, in the original clinical trial, some owners reported diarrhea in dogs receiving metronidazole, but unfortunately, fecal scores were not recorded for any of the samples.
Regarding the specificity of the baiCD primers, they were designed based on the P. hiranonis reference sequence available on NCBI (GenBank: AF210152.2). No amplification was observed when they were tested against a wild-type C. scindens strain, which is known to carry the bai operon, confirming the species-specificity of our primer set. However, while an in silico analysis using a BLASTN search was unable to identify nonspecific annealing, nonspecific amplification can still occur in in vitro experiments.
Although the hypothesis of conversion by other bacterial species was not excluded in our study, the strong correlation suggests that P. hiranonis is the main source of the conversion of BAs through the 7α-dehydroxylation pathway in dogs. Both the baiCD gene and the bai operon appear to be consistently present across different Clostridium species. The bai operon is defined as an orthologous operon and has been extensively studied in species such as P. hiranonis, C. scindens, and C. hylemonae [16,23,35]. Despite significant sequence dissimilarities across species, its core function remains conserved [23]. In our study, we were unable to detect C. scindens in any of the samples tested. All samples presenting secondary fUBA within the reference interval were positive for P. hiranonis within, or slightly below, its reference interval (Figure 2B, red area). However, not all samples presenting P. hiranonis within the reference interval had secondary fUBA within the reference interval (Figure 2B, blue area). It is still unknown whether every strain of P. hiranonis possesses a complete, functional bai operon. Therefore, while the role of other bacterial species carrying the bai operon cannot be ruled out, our results show no evidence that they play a significant role in bile acid conversion in dogs.
Reduced secondary fUBA levels have been found in puppies with physiological dysbiosis due to an immature microbiome in the first few months of life, in dogs with CE, and in dogs receiving antibiotics such as tylosin and metronidazole [19,24,30]. In all those studies, P. hiranonis abundance was found to be reduced. Furthermore, in humans, an increased abundance of primary fUBA has been reported in irritable bowel syndrome [36,37]. In our study, we chose to include animals undergoing metronidazole administration, a common trigger that induces severe, long-lasting dysbiosis. All dogs exhibited a significant reduction in the relative concentration of secondary fUBA over the two weeks of antibiotic administration. As the conversion of primary to secondary fUBA is a metabolic pathway present exclusively in bacteria, particularly in Clostridium species, the species sensitive to metronidazole experienced a substantial decline in their abundance and metabolic functions. Consequently, some of these animals did not regain P. hiranonis or recover BA conversion by the last time point of this study due to the antibiotic use (Figure 1A).
The hydrolyzed protein diet did not affect the secondary fUBA relative concentration or P. hiranonis and baiCD abundances. Hydrolyzed protein diets are recommended for dogs exhibiting food sensitivities or allergies, and they did not impact the gut microbiome of the dogs included in this study (previously published results [19]). Similarly, other types of hypoallergenic diets have also been shown not to affect the gut microbiome of healthy dogs [38]. Conversely, metronidazole administration had a major impact on microbiome composition in the animals included in this study, including decreased alpha-diversity, major shifts in beta-diversity, and changes in the abundance of taxa at all taxonomic levels (previously published results by Pilla et al. [19]). A significant reduction in fUBA conversion, P. hiranonis, and baiCD was observed in our study, in agreement with previously published data in which animals were treated with antibiotics [19,24,30].
Among the limitations of our study: we chose to use healthy dogs with antibiotic-induced dysbiosis because of the consistent and severe changes in the microbiota observed in animals treated with metronidazole. However, antibiotic-induced dysbiosis may lead to slightly different changes in the intestinal microbiota composition than those seen in CE. In dogs, CE is typically subdivided into phenotypes based on the animal's response to therapy. Typically, these animals exhibit more pronounced changes (for example, increased permeability and increased production of proinflammatory cytokines) due to the chronic nature of this disease, which is not observed in antibiotic-induced dysbiosis in healthy animals [39]. Furthermore, the sample size can be considered a limitation of this study. While the strong correlations between P. hiranonis, baiCD, and secondary fUBA were confirmed through statistical methods, it is important to note that multiple time points from the same animals were used to assess the correlation between our variables. Moreover, the in vitro bile acid conversion ability of P. hiranonis, previously described in [12,14,23], was not assessed in this study due to the difficulty of isolating P. hiranonis from frozen samples after long periods of storage.
Bile acid conversion is a field that requires further development in veterinary medicine. Bile acid conversion in vivo is an intricate and complex metabolic process that can involve several different pathways, such as the 3α-hydroxysteroid dehydrogenase, 7α-hydroxysteroid dehydrogenase, 12α-hydroxysteroid dehydrogenase, and 7α-dehydroxylation pathways [40]. This study allowed us to examine the conversion of bile acids performed by the 7α-dehydroxylation pathway and the correlation between bile acid conversion and P. hiranonis abundance. Moreover, the potential impacts of different forms of bile acids on canine health need further investigation. Since the conversion to secondary bile acids exhibited a strong correlation with the abundance of P. hiranonis, P. hiranonis seems to be an indicator of bile acid conversion via the 7α-dehydroxylation pathway in vivo. However, more studies are needed to elucidate the predictive value of P. hiranonis and baiCD abundance for in vivo bile acid conversion in dogs and other species.
Although the bai operon has been described in P. hiranonis [23], a more comprehensive understanding of its functional role and its relationship with healthy and diseased states in animals is needed. The secondary fUBA production pathway seems to be conserved in dogs, cats, and humans [30,41]; however, understanding its effects on gut homeostasis in different species is necessary. Differences in the conversion of primary into secondary fUBA and other BA-associated compounds have been identified in animals, and a better comprehension of their physiological functions and effects on intestinal health requires further research. While this study has provided valuable insights into the role of P. hiranonis in the conversion of primary to secondary fUBA and suggests that P. hiranonis is the main source of BA conversion via this metabolic pathway in dogs, future research could help develop a better understanding of the roles of the microbiota and BA metabolism in disease.
Conclusions
In summary, our study confirms the strong correlation between P. hiranonis and baiCD abundances, suggesting their ability to predict the conversion of primary to secondary fUBAs in dogs through the 7α-dehydroxylation pathway. Given the strong correlation between P. hiranonis and baiCD abundances, the absence of detectable C. scindens in our samples, and the absence of samples presenting secondary fUBA within the reference interval but lacking P. hiranonis, it is unlikely that other bacterial species play a significant role in BA conversion through the 7α-dehydroxylation pathway in dogs. Finally, considering the correlation between decreased secondary fUBA, intestinal dysbiosis, and chronic enteropathy in dogs, further studies are needed to evaluate its potential as a target for microbiome- and metabolite-modifying therapeutics.
Figure 1. Multiple time points from 24 dogs demonstrating the correspondence between the relative concentration of secondary fUBA (A) and the abundances of P. hiranonis (B) and baiCD (C) in healthy dogs in the absence of, during, and after antibiotic administration. P. hiranonis is the main species responsible for the conversion of primary into secondary fecal unconjugated bile acids in dogs. As shown in these figures, the decrease in P. hiranonis abundance is followed by a reduction in baiCD abundance and in the relative concentration of secondary fUBA. The red lines on the graph represent the median value for each of the evaluated groups, and the shaded areas indicate the reference intervals for P. hiranonis abundance in fecal samples from dogs.
Figure 2. Scatter plots of multiple time points from 24 dogs demonstrating the correlation between the abundances of P. hiranonis and baiCD and the relative concentration of secondary fUBA. Spearman's rank correlation analysis revealed strong positive correlations between the abundances of P. hiranonis and baiCD (ρ = 0.8230, 95% CI (0.7570, 0.8724), p < 0.0001) (A), the abundance of P. hiranonis and the relative concentration of secondary fUBA (ρ = 0.6658, 95% CI (0.5555, 0.7532), p < 0.0001) (B), and baiCD and secondary fUBA (ρ = 0.7377, 95% CI (0.6461, 0.8084), p < 0.0001) (C) in healthy dogs. Dotted lines represent reference intervals for healthy dogs. Animals displaying high levels of conversion of primary to secondary fUBA while lacking P. hiranonis were not observed (red-colored area in (B)), except for a single sample that had a P. hiranonis abundance of 4.8 log DNA, just below the lower reference interval of 5.1 log DNA. The blue-colored area displays animals with a high abundance of P. hiranonis but lacking fUBA conversion.
Table 1. Descriptive statistics of P. hiranonis and baiCD gene abundances expressed in log DNA (median [range]) assessed at multiple time points in dogs in the absence of, during, and after antibiotic administration.
"year": 2024,
"sha1": "5bf7d638b46ec865e39ebcacf3ef29b20f899620",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/14/2/216/pdf?version=1704791547",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2e62c19e5fde87f187f39e87c65e0d09d6ee2862",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Classifying Near-Miss Traffic Incidents through Video, Sensor, and Object Features
SUMMARY

Front video and sensor data captured by vehicle-mounted event recorders are used not only as traffic accident evidence but also for safe-driving education as near-miss traffic incident data. However, most event recorder (ER) data shows only regular driving events. To utilize near-miss data for safe-driving education, we need to be able to easily and rapidly locate the appropriate data from large amounts of ER data through labels attached to the scenes/events of interest. This paper proposes a method that can automatically identify near-misses with objects such as pedestrians and bicycles by processing the ER data. The proposed method extracts two deep feature representations that consider the car's status and the environment surrounding the car. The first feature representation is generated by considering the temporal transitions of the car's status. The second one extracts the positional relationship between the car and surrounding objects by processing object detection results. Experiments on actual ER data demonstrate that the proposed method can accurately identify and tag near-miss events.
Introduction
Recently, the event recorder has become an almost obligatory car accessory. Modern recorders can capture front video, several sensor streams, and driving operations. The event recorder permanently stores all data for dozens of seconds on either side of a trigger, namely longitudinal/lateral acceleration/deceleration exceeding a certain level. In this paper, we call such data event recorder (ER) data. ER data is being effectively used as traffic accident/violation evidence. In addition, ER data that demonstrates near-miss traffic incidents ("near-miss"), such as near collisions between the car and other obstacles, is being considered for use in reducing traffic accidents. Actual examples of near-miss scenes captured by ERs are shown in Fig. 1. The ER data of near-misses is best utilized in proactive education that targets safer driving. An example of safe-driving education is to have drivers watch actual ER footage of near-miss traffic incidents [2]. In addition, near-miss incidents in ER data are attracting the attention of fleet management companies that need to control scores of commercial motor vehicles such as vans and trucks. For example, car leasing and commercial trucking companies can evaluate each driver's skills by processing
the front video captured by Internet-connected cameras [3]. A car insurance company is detecting dangerous areas in town and creating hazard maps based on traffic accidents or near-misses found in ER data [4]. As just described, various services/applications are using the near-miss events present in ER data; they represent new opportunities for eliminating or minimizing the risks associated with vehicle operation. However, most ER data doesn't include near-miss incidents ("no near-miss"). One report [5] claimed that about 70% of ER data contains no near-miss incident. This is because the acceleration limits used to trigger the ER can be exceeded by rough roads and abrupt driving inputs. Moreover, actual safe-driving education organizers expect the ER data to be tagged and sorted according to the type of incident (e.g. pedestrian and bicycle) because they want to extract the best possible videos as safe-driving education material for each incident type. Unfortunately, manually identifying and labelling all near-miss incidents in the large amount of ER data available is too time consuming, expensive, and error prone. Therefore, automating the process is essential to reducing the cost of safe-driving education and strengthening the effective use of ER data. The objective of this paper is to automatically detect the presence of near-miss incidents and then accurately identify the near-miss type.
To achieve this objective, the straightforward approach is to build a multi-class classification model. ER data is multi-modal data consisting of video and sensor readings, and it is considered necessary to use all the data in combination for identifying near-miss incidents. The state of one's own vehicle and its surroundings is mainly determined from sensor readings and video; both are key information for determining whether an ER data segment contains a near-miss or not. Thanks to advances in deep neural networks (DNNs), we can now handle such data with convolutional neural networks (CNNs) [6] as well as recurrent neural networks (RNNs) [7]. Passing the image frame data through a CNN will yield feature vectors, and the feature vectors of image frames and sensor streams can be integrated using a fully connected neural network; the resulting time-series data can be modelled by an RNN. Although this approach can detect near-miss incidents (i.e., determine the presence or absence of a near-miss event), it is not accurate in terms of classifying incidents according to their type. There are two reasons for this failure.
Issue 1:
The near-miss detection task doesn't require detailed information about the obstacle captured by the front video, because it is sufficient that just some kind of obstacle is detected. This involves using a CNN to extract basic visual features. However, the task of classifying near-miss incidents requires an understanding of the kind of object and its positional relation to the car. Simple CNNs can't extract visual features with sufficient detail.

Issue 2: The task of identifying near-miss incidents can be treated as a two-level hierarchical classification task. First, each ER segment is classified into near-miss or no near-miss. Second, the near-miss object in each ER segment is identified. However, general multi-class classification frameworks don't provide such a hierarchical architecture, and instead attempt to solve the two classification tasks simultaneously (i.e. treat the task as a one-level classification task). This makes the task more complex, which degrades classification accuracy.
To resolve these two issues, this paper proposes a classification method that combines a supervised DNN that processes object detection results with multi-task learning. The proposed method has three main components. The first component, the Temporal Encoding Layer, generates a feature vector by encoding frame images, sensor streams, and object detection results as time-series data. The second component, the Grid Embedding Layer, creates a feature vector by embedding object detection results into a grid space, thereby capturing the positions of each object relative to the car. The third component, the Multi-task Layer, splits the main task into two sub-tasks to classify the near-miss type. We conduct experiments on an actual ER dataset to evaluate the effectiveness of the proposal. Our results show that the proposed method handles ER data well and offers improved performance.
This paper is an extended version of our previous paper [1]. The enhancements are as follows. First, we conducted new experiments to clarify classification performance versus the number of information sources in the Temporal Encoding Layer and at different hyper-parameters of the Grid Embedding Layer and Multi-task Layer (Sect. 5.2). Second, we added a case study with actual ER data using several frame images and sensor streams (Sect. 5.3). Finally, we showed its effectiveness in terms of understanding the estimation results output by the proposed method (Sect. 5.4).
Related Works
Several studies have focused on near-miss traffic incident detection (i.e., determining the presence or absence of a near-miss event) from dashboard camera (dashcam) data. Suzuki et al. [8] estimate the risk level of each frame image in front video by using a CNN, a highly effective DNN architecture. Their model demonstrated improved accuracy in near-miss detection by introducing a pedestrian-detection task as a sub-function. Ke et al. [9] detect near-miss scenes using pedestrian detection from the front video captured by a vehicle-mounted camera; the distance between the car and the pedestrian is used to calculate the risk level of each frame. Kim et al. [10] analyze front video of car crashes captured during car simulator trials on a variety of roads to clarify the traffic characteristics of dangerous scenes. While these models detect near-miss scenes using front video, they do not consider the classification of near-miss incidents.
Dashcam data has been used for various tasks other than near-miss detection. By extracting driver operations from dashcam data, Yokoyama et al. [11] use feature engineering to detect drivers with dangerous driving styles. Front video is a significant part of autonomous driving technology. To permit autonomous control of vehicle movement, Jain et al. [12] predict driving movements such as going straight, left/right turns, lane changes, and stops based on front video information from in-vehicle cameras; their prediction model analyzes the features of the driver's face. To avoid traffic accidents, Chan et al. [13] proposed a method that anticipates accidents among vehicles. Our work differs from theirs as regards both the goals and the model proposed.
Our approach is motivated by the success achieved by using DNNs to analyze video data. The DNN components CNN and RNN are widely used for human activity recognition. Baccouche et al. [14] proposed a standard approach to human activity recognition based on DNNs. Their method uses a CNN to extract a set of human movement features from each frame image and an RNN to model their temporal transitions. Sharma et al. [15] introduced a DNN-based visual attention mechanism for extracting characteristic regions in each frame image; they used it to encode the feature vectors extracted by the CNN. Simonyan et al. [16] proposed a spatiotemporal approach that uses both optical flow and normal images with the intention of capturing the movements of objects present in videos. Our experiments, shown in Sect. 5, evaluate the effectiveness of human activity recognition schemes for identifying near-miss incidents.
Data Format
Each ER segment consists of a sequence of frame images combined with the data streams output by several sensors. The sequence length is taken to be the number of frames in the ER sequence, T. The sensor data at each time-step is a vector consisting of several dimensions such as longitudinal/lateral acceleration and speed. We normalized the sensor data in each dimension to a z-score because the dimensions have different value scales.
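As a concrete illustration, the per-dimension z-score normalization can be written as the following NumPy sketch (the array shape is our assumption; the paper does not specify an implementation):

import numpy as np

def zscore_per_dimension(sensors: np.ndarray) -> np.ndarray:
    """Normalize a (T, D) sensor array per dimension, e.g., speed and
    longitudinal/lateral acceleration, to zero mean and unit variance."""
    mean = sensors.mean(axis=0)
    std = sensors.std(axis=0) + 1e-8  # guard against zero variance
    return (sensors - mean) / std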
Object Detection
To correctly identify the near-miss type, our approach uses the object detection results for the images {I_t}_{t=1}^{T}. For this we employ YOLO [17], which is one of the most effective DNN-based object detection algorithms. The object detection result for image I_t consists of N_t objects. Each detected object n is described by the triple {o_{t,n}, l_{t,n}, p_{t,n}}: the one-hot vector o_{t,n} = {o_{t,n,v}}_{v=1}^{V} gives the object type, where V is the number of object types; the bounding-box vector l_{t,n} = {x^{left}_{t,n}, y^{top}_{t,n}, x^{right}_{t,n}, y^{bot}_{t,n}} specifies the object's coordinates (left, top, right, and bottom) in the image; and p_{t,n} = {p_{t,n,v}}_{v=1}^{V} is the detection-probability vector.
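In code, the detection triple maps naturally onto a small container type; the following dataclass is a hypothetical representation (the field names are ours, not from the paper):

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """One detected object n in frame t."""
    onehot: List[float]  # o_{t,n}: one-hot object type, length V
    bbox: Tuple[float, float, float, float]  # l_{t,n}: (x_left, y_top, x_right, y_bot)
    probs: List[float]   # p_{t,n}: detection probabilities, length V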
Annotation Label and Its Re-Organization
The application of supervised machine learning requires a correct label for the near-miss target, y_m ∈ R^C, a one-hot vector where C is the number of label types. We derive two additional kinds of correct labels by reorganizing the near-miss target label y_m. The first additional label, y_{s1}, identifies near-miss (y_{s1} = 1) or no near-miss (y_{s1} = 0). The second one, the one-hot vector y_{s2} ∈ R^{C−1}, identifies the near-miss incident type for each ER sequence other than those identified as no near-miss.
Proposed Method
In this section, we describe the proposed method; it uses a DNN to classify the targets of near-miss incidents. The proposed method is composed of three main components (Fig. 2).
Temporal Encoding Layer (TEL)
The objective of this layer is to generate a feature vector that captures the temporal transitions present in the time-series data.

Image encoder: To obtain holistic features such as the surrounding environment from front video, we encode each video image into a feature vector using a CNN. Here, to extract visual features from each image, we prepare two GoogLeNets [18], pretrained on ImageNet [19] and Places365 [20], respectively. These GoogLeNets encode each image I_t into two feature vectors. Next, these feature vectors are encoded by a fully connected neural network (FC) into a feature vector with dimension U. The feature vector extracted by this process for frame number t is denoted as h^{img}_t.
Sensor encoder: To obtain features that describe the car's status, we use an FC to encode the sensor data into a feature vector with U dimensions. The feature vector extracted by this process for frame number t is denoted as h^{sen}_t.

Object encoder: To extract detailed features such as obstacles and traffic signs present in the front video, we use the object detection results after translating them into a simple vector representation. Here, we focus on the appearance degree of each object and generate a vector e_t over the V object types, calculated as e_t = Σ_{n=1}^{N_t} o_{t,n} ⊙ p_{t,n}, where ⊙ denotes the element-wise product. If several identical objects are detected in an image, the appearance degree of that object is enhanced because the detection probabilities p_{t,n} are summed. Next, the generated vector e_t is encoded into a feature vector with U dimensions by an FC; this yields, for frame number t, h^{obj}_t.

Temporal encoding: We concatenate the three feature vectors ([h^{img}_t; h^{sen}_t; h^{obj}_t]) and encode the result into a feature vector with U dimensions by an FC. Next, we use an LSTM unit to model the feature vectors at each time-step [21] and derive a new feature vector that summarizes the feature vectors of all time-steps by fusing them with the soft attention mechanism [22]. Denoting the sequence of feature vectors obtained by the LSTM unit as {h^τ_t}_{t=1}^{T}, the soft attention mechanism calculates a new feature vector h^{te} as

a_t = w^T tanh(W_a h^τ_t + b_a),  α_t = exp(a_t) / Σ_{t'=1}^{T} exp(a_{t'}),  h^{te} = Σ_{t=1}^{T} α_t h^τ_t,

where W_a, b_a, and w are model parameters.
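The following PyTorch sketch shows the overall TEL data flow under our reading of the description above (the layer sizes and the exact attention parameterization are assumptions; the object input is the appearance-degree vector e_t):

import torch
import torch.nn as nn

class TemporalEncodingLayer(nn.Module):
    """Fuse per-frame image/sensor/object features, model them with an
    LSTM, and pool over time with soft attention."""
    def __init__(self, img_dim: int, sen_dim: int, obj_dim: int, u: int = 256):
        super().__init__()
        self.fc_img = nn.Linear(img_dim, u)
        self.fc_sen = nn.Linear(sen_dim, u)
        self.fc_obj = nn.Linear(obj_dim, u)
        self.fc_fuse = nn.Linear(3 * u, u)
        self.lstm = nn.LSTM(u, u, batch_first=True)
        self.att = nn.Linear(u, 1)  # attention score a_t for each time-step

    def forward(self, img, sen, obj):
        # img: (B, T, img_dim), sen: (B, T, sen_dim), obj: (B, T, obj_dim)
        h = torch.cat([torch.relu(self.fc_img(img)),
                       torch.relu(self.fc_sen(sen)),
                       torch.relu(self.fc_obj(obj))], dim=-1)
        h = torch.relu(self.fc_fuse(h))
        h_tau, _ = self.lstm(h)                        # (B, T, u)
        alpha = torch.softmax(self.att(h_tau), dim=1)  # weights over time
        return (alpha * h_tau).sum(dim=1)              # h_te: (B, u)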
Grid Embedding Layer (GEL)
Grid embedding: The objective of this layer is to derive a feature vector that can be used to identify near-miss targets; it does so by considering the bounding-box information of each object in each frame image. In this paper, we propose a grid embedding method for utilizing bounding-box information; we focus on the position of each object in the image and consider the positional relationship between the car and each object. This method prepares a grid space G ∈ R^{G_h × G_w × V} by setting appropriate vertical and horizontal grid dimensions (G_h and G_w); it then embeds the objects into the grid space G. The embedded grid feature matrix G is generated by Algorithm 1. An example of the grid embedding flow is shown in Fig. 3.
As the embedding score for each cell, we employ the 2D area ratio r because we prioritize the distance between the car and each object. We think that the area ratio can represent the distance between the car and each object in the image because the area ratio of an object is inversely proportional to its distance from the car, i.e., objects close to the car have larger area ratios than distant objects.
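Algorithm 1's body is not reproduced in this text, so the following NumPy sketch is only our reading of the description: each detected object adds its 2D image-area ratio r to the cells its bounding box overlaps, in the channel of its object type (the overlap rule is an assumption):

import numpy as np

def grid_embed(detections, img_w=640, img_h=400, gh=8, gw=10, v=69):
    """Embed detections into a (gh, gw, v) grid; each object contributes
    its area ratio r to every cell overlapped by its bounding box."""
    grid = np.zeros((gh, gw, v), dtype=np.float32)
    cell_w, cell_h = img_w / gw, img_h / gh
    for obj_type, (x0, y0, x1, y1) in detections:      # obj_type in [0, v)
        r = ((x1 - x0) * (y1 - y0)) / (img_w * img_h)  # 2D area ratio
        j0, j1 = int(x0 // cell_w), min(int(x1 // cell_w), gw - 1)
        i0, i1 = int(y0 // cell_h), min(int(y1 // cell_h), gh - 1)
        grid[i0:i1 + 1, j0:j1 + 1, obj_type] += r
    return grid

# Example: one pedestrian (type 0) near the image center.
g = grid_embed([(0, (300.0, 180.0, 360.0, 300.0))])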
Encoding grid features: We obtain the grid features g_{i,j} by the above process. Not all cells are important for the task of identifying near-miss incidents because the captured image strongly depends on the mounting position of the ER. For example, as shown in Fig. 1, the car's bonnet occupies significantly different parts of the image if the ER's direction and position are changed. Such cells don't contribute to achieving our goal because objects will not appear there. Therefore, it is not appropriate to use the grid features directly without weighting each cell's importance.
In this paper, we employ the soft attention mechanism to calculate a new feature vector h^{gr} as

a^g_{i,j} = w_g^T tanh(W_g g_{i,j} + b_g),  α^g_{i,j} = exp(a^g_{i,j}) / Σ_{i',j'} exp(a^g_{i',j'}),  h^{gr} = Σ_{i,j} α^g_{i,j} g_{i,j},

where W_g, b_g, and w_g are the DNN model parameters. These formulas mean that the attention weight α^g_{i,j} is dynamically estimated from grid feature g_{i,j} as its grid importance, and the feature vector h^{gr} is calculated based on the attention weights and grid features.
Multi-Task Layer (MTL)
The objective of this layer is to identify the near-miss target using the two feature vectors obtained in Sects. 4.1 and 4.2, respectively. First, we concatenate the feature vectors as h^{tg} = [h^{te}; h^{gr}]. Here, we utilize a multi-task learning framework by setting two simple sub-tasks as parts of the main task. The first sub-task determines the presence or absence of a near-miss event for each ER sequence. We encode h^{tg} to a scalar value ŷ_{s1}, the output of this sub-task, using an FC and the sigmoid function, and calculate the cross-entropy error L_{s1} between the correct label y_{s1} and the result ŷ_{s1} as

L_{s1} = −(1/D) Σ_{d=1}^{D} [y_{s1} log ŷ_{s1} + (1 − y_{s1}) log(1 − ŷ_{s1})],
where D and d are the number of training data and the index used in scanning the training data, respectively; the index d links y_{s1} and ŷ_{s1}, but is omitted hereafter for brevity. The second sub-task identifies the near-miss incident type for each ER sequence other than those identified as no near-miss. We encode h^{tg} into the vector ŷ_{s2}, the result of this sub-task, by an FC and the softmax function, and then calculate the cross-entropy error L_{s2} between the correct label y_{s2} and the result ŷ_{s2} as

L_{s2} = −(1/D) Σ_{d=1}^{D} Σ_{k=1}^{C−1} y_{s2,k} log ŷ_{s2,k}.

We concatenate the results in the form h = [h^{tg}; ŷ_{s1}; ŷ_{s2}], which lets the main task consider the results of these simple sub-tasks. We encode h into the vector ŷ_m, which represents the result of the main task, by an FC and the softmax function, and calculate the cross-entropy error L_m between the correct label y_m and the result ŷ_m as

L_m = −(1/D) Σ_{d=1}^{D} Σ_{c=1}^{C} y_{m,c} log ŷ_{m,c}.

We optimize the objective function L = L_m + β · (L_{s1} + L_{s2}), which combines the errors of these three tasks; β denotes the hyper-parameter used for weighting the sub-task errors. The label output by the main task is given by extracting the index with the maximum score from the result ŷ_m.
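A minimal PyTorch sketch of the multi-task heads and the combined objective, under our reading of the above (the head sizes and loss implementations are assumptions):

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskLayer(nn.Module):
    """Two sub-task heads plus a main head over h = [h_tg; y_s1; y_s2]."""
    def __init__(self, dim: int, n_classes: int = 6, beta: float = 0.1):
        super().__init__()
        self.head_s1 = nn.Linear(dim, 1)              # near-miss or not
        self.head_s2 = nn.Linear(dim, n_classes - 1)  # near-miss type
        self.head_m = nn.Linear(dim + 1 + (n_classes - 1), n_classes)
        self.beta = beta

    def forward(self, h_tg, y_s1, y_s2, y_m):
        p_s1 = torch.sigmoid(self.head_s1(h_tg))          # (B, 1)
        p_s2 = torch.softmax(self.head_s2(h_tg), dim=-1)  # (B, C-1)
        h = torch.cat([h_tg, p_s1, p_s2], dim=-1)
        logits_m = self.head_m(h)                         # (B, C)
        l_s1 = F.binary_cross_entropy(p_s1.squeeze(-1), y_s1)
        l_s2 = F.nll_loss(torch.log(p_s2 + 1e-8), y_s2)
        l_m = F.cross_entropy(logits_m, y_m)
        return l_m + self.beta * (l_s1 + l_s2)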
The general aim of multi-task learning is to leverage useful information contained in multiple related tasks to improve generalization performance. Learning multiple tasks jointly can lead to significant performance improvements compared with learning them individually, as shown in several related works [23], [24]. For example, [23] jointly learns representations of words, entities, and meaning representations via multi-task learning, and [24] shows the effectiveness of this approach on various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. Following the success of these multi-task learning approaches, the innovation of our multi-task learning is to learn a classifier specific to each sub-task; we extract effective features and obtain new feature vectors for performing each sub-task. All features, including frame images, sensor streams, and object detection results, can be useful for determining whether each video contains a near-miss incident or not. However, once we know that a video contains a near-miss, the object detection results constitute the most helpful information for determining the near-miss targets because they allow us to understand the kinds of objects around the car. While single-task learning must learn such features implicitly, our multi-task learning can learn them explicitly as isolated features.
Dataset and Parameter Setting
The experimental evaluation uses the Near-Miss Incident Database provided by the Smart Mobility Research Center of Tokyo University of Agriculture and Technology in Japan. The dataset is a collection of data captured by ERs mounted in Japanese taxis. Each ER data sequence is 15 seconds long: 10 seconds before the trigger and 5 seconds after the trigger. Each sequence was manually assigned one of five risk levels {high, middle, low, bit, no near-miss} and one of six near-miss incident types {car, bicycle, motorcycle, pedestrian, self, other} by experts. The experiment focused on five near-miss incident types {car, bicycle, motorcycle, pedestrian, self}, where self refers to a dangerous or illegal movement involving only the car. 700 sequences were randomly extracted for each near-miss incident type, and 700 sequences tagged no near-miss were also randomly extracted. Therefore, the experiment examined 4,200 sequences with 6 labels (C = 6). We randomly split the dataset into 2,940 (70%) sequences as training data and 1,260 (30%) as test data.
Each sequence was recorded at 30 frames per second and so consisted of 450 frames. In this paper, we sampled T = 30 frames at intervals of 15 frames. Each image had a resolution of W = 640 and H = 400 in RGB format. The original images were processed by YOLO for object detection, which yielded V = 69 object types. For visual feature extraction, linearly transformed images (224 × 224 pixel resolution) were processed by the two GoogLeNets. For the sensor data, we extracted three sensor streams: speed and longitudinal/lateral acceleration.

For the DNN in the proposed method, we set the number of hidden units in each FC to U = 256, and the output vector after each FC is non-linearly transformed by the ReLU function [25] with Dropout p = 0.7 [26]. We optimized the DNN with Adam [27] based on the gradient of the objective function L as calculated by the back-propagation method. Here, we set the mini-batch size to 50 and the number of back-propagation iterations to 100. With regard to GoogLeNet, we used the Caffe [28] models pretrained on ImageNet and Places365, and updated the parameters in the output layer by fine-tuning.

Table 1. Classification performance versus number of information sources in the Temporal Encoding Layer and SVM. "V", "S", and "O" mean "Video", "Sensor", and "Objects", respectively.
Results
We conducted four experiments. Experiments 1 and 2 address Issue 1, while Experiment 3 addresses Issue 2. In addition, Experiment 4 is intended to verify the effectiveness of combining the three proposed components. To examine the effectiveness of the proposed method in identifying near-miss incidents, we use four evaluation metrics: accuracy, precision, recall, and F1-score [29].
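The four metrics can be computed with scikit-learn; the snippet below is a minimal sketch (the macro-averaging choice and the label encoding are our assumptions):

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# y_true / y_pred: integer labels for the six classes (C = 6); toy values.
y_true = [0, 1, 2, 3, 4, 5, 0, 1]
y_pred = [0, 1, 2, 3, 4, 0, 0, 1]

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")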
Experiment 1: Table 1 shows the classification performance versus the number of information sources (front video, sensor streams, and objects) as processed by TEL. To focus on the improvement in classification performance from the addition of information sources, we did not apply GEL or MTL. To classify the near-miss target without using MTL, we employ an FC as the output layer. The FC decodes the feature h^{te} into a vector whose dimension equals the number of labels (C = 6); we then extract the label with the maximum value in the vector as the prediction label. Also, to confirm the effectiveness of the feature encoding and temporal-transition modeling by the CNN and RNN, we make comparisons against a Support Vector Machine (SVM) with different information sources (features). The SVM is implemented with LIBSVM. The best SVM hyper-parameters, such as the kernel type, cost parameter, and RBF kernel's γ, were selected by grid search. To use the ER sequence as SVM input, we transformed each information source into a vector space and concatenated the vectors over all frames. In the case of a single information source, the highest evaluation values for all metrics were achieved with the use of detected objects. In the case of two information sources, the best performance was achieved by using sensor streams and detected objects. This confirms the effectiveness of using detected objects as an information source.
On the other hand, although the method using front video and sensor streams is the straightforward approach when using a DNN, as explained in Sect. 1, its evaluation values are lower than those of the proposed method using detected objects only. Apart from this case, we can confirm that performance generally improves with the addition of information sources. For each combination of features (each row in Table 1), the proposal always yields better evaluation scores than SVM. This indicates the effectiveness of TEL using CNN and RNN, which is one of the components of the proposed method.
Experiment 2: Table 2 (a) shows the classification performance for three GEL grid sizes (G_h, G_w). To focus on the impact of grid size on classification performance, we used the three information sources in TEL and did not use MTL; as in Experiment 1, an FC output layer decodes the feature h_tg into a vector of length C = 6, and the label with the maximum value is taken as the prediction. We selected grid sizes that maintain the aspect ratio of the original image, (400 : 640) = (5 : 8); the row (−, −) in the table shows the evaluation values without GEL. Regardless of grid size, using GEL yielded higher evaluation values, so it does improve the classification performance. Moreover, we can confirm that the performance increases with the grid resolution, which suggests that GEL can extract detailed features from the object detection results. The following sections use the grid size (G_h, G_w) = (8, 10). However, we confirmed that the classification performance did not increase when the grid resolution was raised from (8, 10) to (16, 20). We consider that there are two reasons for this. The first is that high resolution makes training difficult by increasing the number of attention parameters: the number of attention parameters α^g_{i,j} is quadrupled when the resolution is increased from (8, 10) to (16, 20), so high-resolution grids require more complex training than low-resolution ones. The second is that high-resolution grids make it harder to utilize the co-occurrence relations of objects within each cell feature g_{i,j}. In the near-miss example shown in Fig. 9, GEL can identify the near-miss target motorcycle if person and motorcycle occur in the same cell feature g_{i,j}; high-resolution grids weaken such co-occurrence relations. Our view is therefore that grid resolution poses a trade-off: a high-resolution grid (e.g., (16, 20)) can capture the detailed position of each object, but makes it difficult to handle the positional relations among objects. For these reasons, GEL could not enhance the F1-score even with the high-resolution (16, 20) grid. To resolve this problem, we will consider handling the spatial relationships among objects by capturing the distance between cells in future work.
Table 3 Classification performance of each method. "V", "S", and "O" mean "Video", "Sensor", and "Objects", respectively.
Experiment 3: Table 2 (b) shows the classification performance for three β values {0.1, 1.0, 10.0} in MTL. To focus on the ability of MTL to improve the classification performance and assess the effect of varying β, we used the three information sources in TEL and did not use GEL. Note that the entries on the row marked "−" indicate the evaluation values attained without MTL.
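A minimal sketch of the β-weighted multi-task objective used in Experiment 3; the sub-task here is a hypothetical two-class task, and all shapes are placeholders:

```python
import torch
import torch.nn as nn

ce = nn.CrossEntropyLoss()

def multitask_loss(main_logits, y_main, sub_logits, y_sub, beta=0.1):
    """Total objective: main-task loss plus beta times the sub-task loss."""
    return ce(main_logits, y_main) + beta * ce(sub_logits, y_sub)

# dummy batch: 6-class main task and a hypothetical 2-class sub-task
main_logits = torch.randn(50, 6, requires_grad=True)
sub_logits = torch.randn(50, 2, requires_grad=True)
y_main = torch.randint(0, 6, (50,))
y_sub = torch.randint(0, 2, (50,))
multitask_loss(main_logits, y_main, sub_logits, y_sub).backward()
```

A large β makes the optimizer prioritize the sub-task, which is consistent with the degraded main-task accuracy reported below for β = 10.0.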
The results show that, regardless of the β value, using MTL yields higher performance; MTL therefore does improve the classification performance. We can also confirm that the β value of 10.0 yields a lower F1-score than the other values, which suggests that the estimation accuracy of the main task degrades if the sub-task error is weighted too heavily. The following sections use β = 0.1 because it achieved the highest F1-score.
Experiment 4: Table 3 shows the classification performance of the proposed method and five baseline methods. The baselines are as follows.
DNN: the straightforward DNN approach (i.e., TEL without objects).
SVM: SVM using the three information sources, which performed best in Experiment 1.
IDT [30]: proposed for recognizing human activity in video and one of the SOTA methods for extracting video features; IDT identifies several visual key points and uses their trajectories to characterize each video. We set the trajectory length to 15 and the sampling stride to 5. Each video is then converted into a K-dimensional feature vector by K-means clustering over all videos (we set K to 200). We use the IDT-based features to train an SVM classifier, whose best hyper-parameters are selected in the same manner as in Experiment 1.
ST-CNN [16]:
It was proposed for human activity recognition in video and is another SOTA method. The method combines two types of DNNs: a spatial convolution network that captures the scenes and objects depicted in the video, and a temporal convolution network that captures the motion between frames. ST-CNN averages the scores of these two feature vectors, and the label is given by the index with the maximum averaged score. We set the number of stacked images to 10.
DSA [13]: proposed for anticipating traffic accidents among vehicles from front video. The method extracts two visual features: an object feature that captures object movement between consecutive frames, and a holistic feature for each frame image (i.e., h^img_t). We extract DSA-based features from the front video and train a DNN composed of a temporal encoder (LSTM) and a classifier layer. We set the number of candidate objects to 10.
In addition to these methods, we prepared three variants of the proposed method. The first two, Proposed(V) and Proposed(V,O), take as input video only and video plus objects, respectively, to allow comparison under fair conditions with ST-CNN/IDT and DSA. The final one, Proposed, is the full model using video, sensor, and objects.
For all evaluation metrics, the proposed method achieved the highest values among the compared methods.
The results indicate the effectiveness of our approach on the near-miss incident identification task for ER data.
Note that we conducted a χ² test based on a cross tabulation (the joint frequency distribution of test cases) with two categorical variables, one for the proposed method and one for each baseline, where each variable can be correct or incorrect. The results confirmed that the proposed method is significantly better than the baselines (p-value < 0.01). Figure 4 shows the precision, recall, and F1-score in each class for Proposed, SVM, and ST-CNN. Of particular interest, for the four labels car, bicycle, motorcycle, and pedestrian, the precision and recall scores were higher than those of the two other methods. We can also confirm that the proposed method achieved the highest F1-score for all labels. Figure 5 uses confusion matrices to show the detailed classification results of Proposed, SVM, and ST-CNN; the true and predicted labels are plotted on the horizontal and vertical axes, respectively, and the number in each cell is the number of test cases with that label pair. The proposed method correctly identified more cases than the other two methods, except for the no near-miss and self labels, which confirms its superior performance.
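A minimal sketch of this significance test; the counts in the cross tabulation below are placeholders:

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 cross tabulation over the same test cases:
# rows = proposed {correct, incorrect}, cols = baseline {correct, incorrect}
table = np.array([[950, 120],
                  [ 60, 130]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")  # significant if p < 0.01
```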
When we focus on the results under fair conditions between the proposed method and the baselines, we confirmed that Proposed(V,O) yielded higher evaluation scores than DSA. The main difference between the proposed method and DSA is that the proposed method can consider the positional relationship between the car and each object by using GEL. This result suggests that capturing each object's position is important for the task of identifying near-miss targets. Regarding the results attained when inputting video only, Proposed(V) had lower evaluation scores than ST-CNN. Although ST-CNN can process pixel-level movements among sequential images as a feature through optical flow, the proposed method processes holistic features of each image independently by CNN. Due to this difference, the proposed method cannot obtain features effective for identifying near-miss targets from video alone. However, when we use the proposed method in practical situations, we can apply object detection to the video images and handle the detected objects with TEL and GEL. In this case, the proposed method attained higher scores than ST-CNN (i.e., Proposed(V,O) vs. ST-CNN in Table 3). Therefore, we consider that the proposed method is more effective than ST-CNN in the task examined.
Fig. 4 Precision, recall, and F1-score values in each class for the Proposed, SVM, and ST-CNN methods. On the x-axis, the labels "N", "C", "B", "M", "P", and "S" mean "No near-miss", "Car", "Bicycle", "Motorcycle", "Pedestrian", and "Self", respectively.
Fig. 5 Confusion matrices for the classification results of the Proposed, SVM, and ST-CNN methods. "N", "C", "B", "M", "P", and "S" are as in Fig. 4.
Qualitative Analysis
The proposed method uses soft attention for temporal and grid-space processing in TEL and GEL, respectively. By calculating the mean values of the soft attention scores α^τ_t and α^g_{i,j} over the correctly labelled test data, we can compare the time and space attributes emphasized by the proposed method.
The mean attention scores α^τ_t calculated for each correct label are shown in Fig. 6, where the vertical and horizontal axes are the attention score α^τ_t averaged over the test data and the frame number t, respectively. The trigger frame number is t = 20. The scores of the near-miss targets car, bicycle, motorcycle, and pedestrian peaked at around frame number t = 25. On the other hand, the self label attained its highest attention score toward the last frame, t = 30, while no near-miss attained its peak score at frame number t = 21. As these results demonstrate, the self and no near-miss labels have different characteristics from the other labels, while the four other labels show a similar tendency in terms of α^τ_t. The mean attention scores α^g_{i,j} calculated for each correct label are shown in Fig. 7, where the color intensity represents the mean attention score in each cell. Cells on the left side of all panels have higher scores than those on the right; we think this is because vehicles and bicycles drive on the left side of the road in Japan. Cells in the lower-center region have lower values, as this region is often occupied by the car's bonnet. The pedestrian label has high attention scores in the vertical column of center cells, suggesting that pedestrians frequently appeared in this region. We consider that GEL contributes to the improvement of estimation performance by considering grid importance when processing ER data.
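A minimal sketch of how such per-label mean attention maps can be computed; array shapes and label indices are placeholders:

```python
import numpy as np

def mean_attention_per_label(att, y_true, y_pred, label):
    """Average attention maps over correctly classified test sequences of
    a given label. att: (num_sequences, ...) soft-attention scores, e.g.
    (N, T) for temporal or (N, G_h, G_w) for grid attention."""
    mask = (y_true == label) & (y_pred == label)
    return att[mask].mean(axis=0)

# e.g. grid attention averaged over correct 'motorcycle' predictions
att_g = np.random.rand(1260, 8, 10)     # placeholder attention scores
y_true = np.random.randint(0, 6, 1260)  # placeholder labels
y_pred = y_true.copy()                  # placeholder predictions
heatmap = mean_attention_per_label(att_g, y_true, y_pred, label=3)
```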
Our proposed method supports safe-driving education well. One of its greatest advantages lies in risk prediction training [2], which involves drivers watching ER data containing near-miss traffic incidents and predicting the causes of the near-miss incidents. Figure 8 shows an example of a visualization tool that supports risk prediction training by encouraging drivers to focus on the precursors of near-miss events. In the frame image of Fig. 8, objects detected by the tool are shown by bounding boxes, and the attention scores α^g_{i,j} are visualized by the red tint in each cell. In this example, a near-miss event occurred because the car on the left turned right too sharply. The proposed method can estimate and highlight dangerous areas/objects for drivers, as shown in this example. We believe that such information will greatly enhance the effectiveness of safe-driving education by indicating more intuitively which traffic targets should be focused on while driving.
We confirm the proposal's performance on actual ER data using several frame images and sensor streams. The example given in Fig. 9 shows a near-miss incident involving a motorcycle. The proposed method correctly determined the label, while the baseline method output the wrong label, pedestrian. In this example, the car stopped temporarily (t ≤ 15) and restarted after the motorcycle crossed the intersection (16 ≤ t ≤ 19); the car then braked suddenly because of the motorcycle (t = 20). The motorcycle, which is the near-miss target, appeared in the front video at frame number t = 25, a few frames after the trigger time t = 20. Identifying motorcycle as the near-miss target is difficult for two reasons. The first is that object-based methods such as DSA classify pedestrian with high probability, because YOLO detects person and motorcycle simultaneously. The second is that video-processing methods such as ST-CNN classify car or bicycle, because their movements are similar to that of a motorcycle. However, the proposed method can handle the multi-modal information of image features, sensor data, and detected objects, and so can correctly classify the example as motorcycle. We also think that the task of identifying near-miss incidents requires an analysis of not only the trigger-time frame but all frames; this is also suggested by the analysis of the soft-attention scores shown in Fig. 6. The proposed method can correctly determine labels because it considers the object detection results through TEL and GEL.
Conclusion
This paper proposed a classification method that utilizes, in a coherent manner, the data provided by front video, sensor streams, and object detection results to accurately label near-miss events in the data captured by ERs (dashcams). The proposed method has three components: the Temporal Encoding Layer, which performs feature encoding for multi-modal, time-series data; the Grid Embedding Layer, which embeds detected objects into a grid space set relative to the vehicle; and the Multi-task Layer, which performs multi-task learning utilizing sub-tasks developed from the main task. An experiment using actual ER data confirmed the performance improvements attained by the proposed components.
"year": 2022,
"sha1": "1f87dcd2666e3f94a48f5da6e933d89943b077e3",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/transinf/E105.D/2/E105.D_2021EDP7017/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "36a490b26897276fa4e155a679666bac35a361d6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Simulation of McKean-Vlasov SDEs with super-linear growth
We present two fully probabilistic numerical schemes, one explicit and one implicit, for the simulation of McKean-Vlasov Stochastic Differential Equations (MV-SDEs) with drifts of super-linear growth and random initial condition. We provide a pathwise propagation of chaos result and show strong convergence for both schemes on the consequent particle system. The explicit scheme attains the standard $1/2$ rate in stepsize. From a technical point of view, we successfully use stopping times to prove the convergence of the implicit method, although we avoid them altogether for the explicit one. The combination of particle interactions and random initial condition makes the proofs technically more involved. Numerical tests recover the theoretical convergence rates and illustrate a computational complexity advantage of the explicit over the implicit scheme. Comparative analysis is carried out on a stylized non-Lipschitz MV-SDE and the neuron network model proposed in [J. Baladron et al., Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons, The Journal of Mathematical Neuroscience, 2 (2012)]. We provide numerical tests illustrating a \emph{particle corruption} effect, where one single particle diverging can `corrupt' the whole system; moreover, the more particles in the system, the more likely this divergence is to occur.
1. Introduction. The aim of this paper is to develop numerical schemes for simulating McKean-Vlasov Stochastic Differential Equations (MV-SDEs) with drifts of super-linear growth and Lipschitz diffusion coefficients (of linear growth). MV-SDEs differ from standard SDEs by the presence of the law of the solution process in the coefficients.
The dynamics take the form
$$dX_t = b(t, X_t, \mu^X_t)\,dt + \sigma(t, X_t, \mu^X_t)\,dW_t,$$
where $\mu^X_t$ denotes the law of the process $X$ at time $t$. Similar to standard SDEs, MV-SDEs have been shown to have a unique strong solution in the setting of super-linear growth in the spatial parameter, see [9]. Of course, many mean-field models exhibit non-globally-Lipschitz growth; for example, mean-field models for neuronal activity (e.g. stochastic mean-field FitzHugh-Nagumo models or networks of Hodgkin-Huxley neurons) [1], [2], [3] appearing in biology or physics [12], [11]. We refer to the review in [1] for further motivation of the problem.
In general, closed-form solutions for such equations are rare; hence, to fully utilize MV-SDEs as a modeling tool, one needs a reliable way to simulate them. It is well known for standard SDEs that the explicit Euler scheme runs into difficulties in the super-linear growth setting, see [16], even though the SDE is known to have a unique strong solution. The original solution to this problem was to consider an implicit (or backward) Euler scheme, developed in [15]. Although implicit schemes allow one to tackle more general SDEs, they are slower, especially in higher dimensions, because one is required to solve a fixed point equation at every time step, which can be computationally expensive. To solve this problem an explicit scheme was then developed in [17], the so-called tamed Euler scheme. Since then several authors have built on this result and developed algorithms to deal with coefficients that grow super-linearly, see [8], [27], [13] for example. There has been some work on improved Monte Carlo methods for MV-SDEs with super-linear drift, see e.g. [10]. An extra complication MV-SDEs present over standard SDEs is the requirement to approximate $\mu$ at each time step. Although other techniques exist (see [14]), the most common is a so-called interacting particle system, where $\mu^{X,N}_t(dx) := \frac{1}{N} \sum_{j=1}^N \delta_{X^{j,N}_t}(dx)$, $\delta_{X^{j,N}_t}$ is the Dirac measure at the point $X^{j,N}_t$, and the $W^i$, $i = 1, \dots, N$, are independent Brownian motions. Under Lipschitz-type conditions this particle system is known to converge pathwise to the true solution of the MV-SDE. However, this convergence (with corresponding rate) in the super-linear growth setting has thus far not been considered in full generality.
Closer to our work, we highlight the following. [5] develops an explicit Euler scheme for a specific MV-SDE-type equation; convergence is given, but under Lipschitz conditions and a constant diffusion coefficient. [21] studies an implicit Euler scheme to approximate a specific equation and requires a constant diffusion coefficient, symmetry, and uniform convexity of the interaction potential.
Our contribution. Firstly, we show that the above particle scheme converges in the super-linear growth case without coercivity/dissipativity (propagation of chaos). This result is crucial in showing convergence of the numerical scheme to the particle system rather than to the original MV-SDE, with corresponding rate.
The second contribution is the development and strong convergence of the explicit scheme to the MV-SDE, inspired by the explicit scheme originally developed in [17], [27]. We also obtain the classical 1/2 rate of convergence in the stepsize. Combining this with the propagation of chaos result gives an overall convergence rate for the explicit scheme.
The final contribution is to show strong convergence of an implicit scheme. This turns out to be a challenging problem, since results involving implicit schemes rely on stopping time arguments. These cause several issues when generalizing results to the MV-SDE setting, and we have had to make stronger assumptions on the coefficients in this setting in order for the arguments to continue to hold. On the other hand, we allow for random initial conditions and time-dependent coefficients, which to the best of our knowledge have not been fully treated in the standard SDE setting. We discuss these issues in Remarks 3.4 and 5.10. We only focus on strong convergence of this scheme and not the rate, mainly because the explicit scheme is in general superior (as our numerical testing shows) and such a proof would lead to lengthy statements below without substantially enhancing the scope of our work.
From a technical point of view, we highlight the successful use of stopping time arguments in combination with McKean-Vlasov equations and associated particle systems to show the convergence of the implicit scheme. The paper is structured in the following way. In Section 2 we introduce the notation and our tamed particle scheme. In Section 3 we state our main results, namely, propagation of chaos and convergence results for the two schemes. Following that, in Section 4 we provide several numerical examples and highlight the particle corruption phenomenon. This analysis implies one cannot hope to build a reliable scheme based on a standard Euler scheme. We further show that the increased computational complexity associated with a MV-SDE makes the implicit scheme a less viable option than the explicit (tamed) scheme. Finally, the proofs are given in Section 5 and the Appendix.
2. Preliminaries. Throughout the paper we work on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$ satisfying the usual conditions, where $\mathcal{F}_t$ is the augmented filtration of a standard multidimensional Brownian motion $W$. We work with $\mathbb{R}^d$, the $d$-dimensional Euclidean space, and for $a = (a_1, \dots, a_d) \in \mathbb{R}^d$ we denote by $|a|$ its Euclidean norm. We consider some finite terminal time $T < \infty$ and use the following notation for spaces, which are standard in the McKean-Vlasov literature (see [6]). We define $\mathbb{S}^p$, for $p \ge 1$, as the space of $\mathbb{R}^d$-valued, $\mathcal{F}_\cdot$-adapted processes $Z$ that satisfy $\mathbb{E}[\sup_{0 \le t \le T} |Z_t|^p]^{1/p} < \infty$. Given the measurable space $(\mathbb{R}^d, \mathcal{B}(\mathbb{R}^d))$, we denote by $\mathcal{P}(\mathbb{R}^d)$ the set of probability measures on this space, and write $\mu \in \mathcal{P}_2(\mathbb{R}^d)$ if $\mu \in \mathcal{P}(\mathbb{R}^d)$ and for some $x \in \mathbb{R}^d$, $\int_{\mathbb{R}^d} |x - y|^2 \mu(dy) < \infty$. We then have the following (Wasserstein) metric on the space $\mathcal{P}_2(\mathbb{R}^d)$: for $\mu, \nu \in \mathcal{P}_2(\mathbb{R}^d)$ (see [9]),
$$W^{(2)}(\mu, \nu) = \inf\Big\{ \Big( \int_{\mathbb{R}^d \times \mathbb{R}^d} |x - y|^2 \, \pi(dx, dy) \Big)^{1/2} : \pi \in \mathcal{P}(\mathbb{R}^d \times \mathbb{R}^d) \text{ with marginals } \mu \text{ and } \nu \Big\}.$$
2.1. McKean-Vlasov stochastic differential equations. Let $W$ be an $l$-dimensional Brownian motion and take the progressively measurable maps $b : [0,T] \times \mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}^d$ and $\sigma : [0,T] \times \mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}^{d \times l}$. MV-SDEs are typically written in the form
$$dX_t = b(t, X_t, \mu^X_t)\,dt + \sigma(t, X_t, \mu^X_t)\,dW_t,$$
where $\mu^X_t$ denotes the law of the process $X$ at time $t$, i.e. $\mu^X_t = \mathbb{P} \circ X_t^{-1}$. We make the following hypothesis on the coefficients throughout.
Hypothesis 2.1. Assume that $\sigma$ is Lipschitz in the sense that there exists $L > 0$ such that for all $t \in [0,T]$, all $x, x' \in \mathbb{R}^d$ and all $\mu, \mu' \in \mathcal{P}_2(\mathbb{R}^d)$ we have
$$|\sigma(t,x,\mu) - \sigma(t,x',\mu')| \le L\big(|x - x'| + W^{(2)}(\mu, \mu')\big),$$
and let $b$ satisfy:
1. One-sided Lipschitz in $x$ and Lipschitz in law: there exist $L_b, L > 0$ such that for all $t \in [0,T]$, all $x, x' \in \mathbb{R}^d$ and all $\mu, \mu' \in \mathcal{P}_2(\mathbb{R}^d)$,
$$\langle x - x', b(t,x,\mu) - b(t,x',\mu) \rangle \le L_b |x - x'|^2 \quad \text{and} \quad |b(t,x,\mu) - b(t,x,\mu')| \le L\, W^{(2)}(\mu, \mu').$$
2. Locally Lipschitz with polynomial growth in $x$: there exists $q \in \mathbb{N}$ with $q > 1$ such that for all $t \in [0,T]$, all $\mu \in \mathcal{P}_2(\mathbb{R}^d)$ and all $x, x' \in \mathbb{R}^d$,
$$|b(t,x,\mu) - b(t,x',\mu)| \le L\big(1 + |x|^q + |x'|^q\big)|x - x'|.$$
If the law $\mu^X$ is known beforehand, then the MV-SDE reduces to a "standard" SDE with added time-dependency. Typically this is not the case, and the MV-SDE is usually approximated by a particle system.
The interacting particle system approximation. We approximate (2.1) (driven by the Brownian motion $W$) using an $N$-dimensional system of interacting particles. Let $i = 1, \dots, N$ and consider $N$ particles $X^{i,N}$ satisfying the SDE
$$dX^{i,N}_t = b(t, X^{i,N}_t, \mu^{X,N}_t)\,dt + \sigma(t, X^{i,N}_t, \mu^{X,N}_t)\,dW^i_t,$$
with $X^{i,N}_0 = X^i_0$ (the initial condition is random, but independent across particles), where $\mu^{X,N}_t(dx) := \frac{1}{N} \sum_{j=1}^N \delta_{X^{j,N}_t}(dx)$, $\delta_{X^{j,N}_t}$ is the Dirac measure at the point $X^{j,N}_t$, and the $W^i$, $i = 1, \dots, N$, are independent Brownian motions (also independent of the BM $W$ appearing in (2.1); with a slight abuse of notation to avoid re-defining the probability space's filtration).
Propagation of chaos. In order to show that the particle approximation is of use, one shows a pathwise propagation of chaos result. Although different types exist, we are interested in the strong error and hence require a pathwise convergence result, where we consider the system of non-interacting particles
$$dX^i_t = b(t, X^i_t, \mu^{X^i}_t)\,dt + \sigma(t, X^i_t, \mu^{X^i}_t)\,dW^i_t,$$
which are of course just MV-SDEs; since the $X^i$ are independent, $\mu^{X^i}_t = \mu^X_t$ for all $i$. Under global Lipschitz conditions, one can then prove a convergence result of this type (see [6, Theorem 1.10] for example). All SDEs appearing below have initial condition $X^i_0$ and we work on the interval $[0, T]$.
Standard Euler scheme particle system. In general one cannot simulate (2.2) directly and therefore turns to a numerical scheme such as Euler. We partition the time interval $[0, T]$ into $M$ steps of size $h := T/M$, define $t_k := kh$, and recursively define the particle system for $k \in \{0, \dots, M-1\}$ as
$$X^{i,N,M}_{t_{k+1}} = X^{i,N,M}_{t_k} + b\big(t_k, X^{i,N,M}_{t_k}, \mu^{X,N,M}_{t_k}\big)h + \sigma\big(t_k, X^{i,N,M}_{t_k}, \mu^{X,N,M}_{t_k}\big)\Delta W^i_{t_k},$$
where $\Delta W^i_{t_k} := W^i_{t_{k+1}} - W^i_{t_k}$ and $\mu^{X,N,M}_{t_k}$ is the empirical measure of the particles at time $t_k$. Under Lipschitz regularity it is well known that this scheme converges, see [4] or [19] (where a weak rate of convergence is shown under an additional regularity assumption).
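A minimal sketch of this particle scheme in Python for a one-dimensional system. Here b and sigma are placeholder callables acting on the whole particle cloud, so mean-field terms can be computed from the empirical measure; the optional taming factor b/(1 + h|b|), applied when tamed=True, is one common choice anticipating the tamed scheme discussed next, and the paper's exact taming factor may differ:

```python
import numpy as np

def euler_particles(b, sigma, x0, T, M, N, tamed=False, seed=0):
    """Euler scheme for a 1-d interacting particle system. b(t, x) and
    sigma(t, x) act on the whole cloud x of shape (N,), so mean-field
    terms can use e.g. np.mean(x). With tamed=True the drift increment
    is divided by (1 + h|b|) to keep increments bounded."""
    rng = np.random.default_rng(seed)
    h = T / M
    x = np.full(N, float(x0))
    for k in range(M):
        t = k * h
        drift = b(t, x)
        if tamed:
            drift = drift / (1.0 + h * np.abs(drift))
        x = x + drift * h + sigma(t, x) * rng.normal(0.0, np.sqrt(h), N)
    return x

# example (hypothetical) coefficients with a mean-field interaction term:
b = lambda t, x: -x**3 + 0.5 * (np.mean(x) - x)
sigma = lambda t, x: 0.5 * np.ones_like(x)
x_T = euler_particles(b, sigma, x0=1.0, T=1.0, M=200, N=1000, tamed=True)
```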
Euler particle system for the super-linear case: Explicit and Implicit. However, as discussed in works such as [16], [17], [27], one does not have convergence of the Euler scheme when we move away from the global Lipschitz setting. The goal of this paper is therefore to construct suitable numerical schemes that converge. Inspired by the above works, we consider a so-called tamed Euler scheme, in which the drift increment is tamed (normalized) to prevent blow-up, yielding the explicit scheme for $\bar X^{i,N,M}$. Of course, explicit schemes are not the only method one can deploy to solve this problem; we also consider an implicit scheme for $\tilde X^{i,N,M}$.
3. Main Results. We state our main results and hypotheses here; the proofs are postponed to Section 5. Recall that we want to associate a particle system to the MV-SDE and show its convergence, so-called propagation of chaos. We have the following result, which holds under weaker assumptions than those in Theorem 3.3.
Therefore, to show convergence between our numerical scheme and the MV-SDE, we only need to show that the "true" particle scheme and the numerical version of the particle scheme converge to one another.
Explicit scheme. We first introduce the continuous-time version of the explicit scheme. Denote by $\kappa(t) := \sup\{s \in \{0, h, 2h, \dots, Mh\} : s \le t\}$ the last gridpoint before $t$. This then leads to our main explicit scheme convergence result.
Theorem 3.3 (Strong Convergence of Explicit). Let Hypotheses 2.1 and 2.2 hold, and further let $X_0 \in L^m(\mathbb{R}^d)$ for $m \ge 4(1+q)$ (note $q > 1$). Let $X^i$ be the solution to (2.3), and $X^{i,N,M}$ be (3.1). Then we obtain a strong convergence result of rate $1/2$ in the stepsize.
Remark 3.4 (Stopping times). Using stopping times of the form $\inf\{t \ge 0 : |X^{i,N,M}_t| \ge R\}$ to control the particles is suboptimal, and several problems appear by introducing them. Namely, one can only consider stopping times that stop one particle, since otherwise the convergence speed would decrease with a higher number of particles. However, applying a stopping time to a single particle does not allow us to fully bound the coefficients, and moreover destroys the property that all particles are identically distributed.
The stopping time arguments used for the implicit scheme below require stronger assumptions in order for the theory to hold.
Implicit scheme. We have shown convergence of the explicit scheme for non-Lipschitz coefficients. This is not the only method, however: there is another popular method known as the implicit, or backward, Euler scheme. That being said, the implicit scheme has some well-documented disadvantages, namely that it is expensive compared to its explicit counterpart; we discuss this issue further in Section 4. One can consult [22], for example, on the implicit scheme (and extensions) for standard SDEs.
Standard implicit scheme convergence results rely on the so-called monotone growth condition; we therefore proceed with the following hypothesis.
(H2). σ is only a function of time and space (does not have a measure dependence).
Although the main convergence theorem requires both H1 and H2, we only use H2 at the end of the proof of convergence. We present our auxiliary results requiring only H1, as we believe them to be of general independent interest. Remark 3.6 (Monotone Growth). The combination of Hypotheses 2.1, 2.2 and H1 implies the monotone growth condition: namely, there exist constants $\alpha$ and $\beta$ such that for all $t \in [0,T]$, $x \in \mathbb{R}^d$ and $\mu \in \mathcal{P}_2(\mathbb{R}^d)$,
$$\langle x, b(t,x,\mu) \rangle + \frac{l}{2} |\sigma(t,x,\mu)|^2 \le \alpha + \beta |x|^2,$$
with $l$ being the dimension of the BM. We now state the strong convergence of the implicit scheme (2.5) to (2.2). Proof. The proof of this result follows by combining Propositions 3.1 and 3.7 and noting that the assertion in Proposition 3.7 is independent of $N$.
Numerical testing and Examples.
We illustrate our results immediately with numerical examples. We highlight the issues with using the standard Euler scheme in this setting and compare the computational time and complexity of the explicit and implicit schemes. We juxtapose our findings with those in [1].
Particle Corruption.
It is well known that the Euler scheme fails (diverges) when one moves outside the realm of linearly growing coefficients, see [16]. We claim that this divergence is worse in the setting of MV-SDEs and the associated particle system, due to an effect we refer to as particle corruption.
The basic idea is that one particle becomes influential on all other particles, so we are no longer in the "weakly interacting" setting. This is of course not a problem for standard SDE simulation. We show two aspects of particle corruption in a simple example: firstly, that it exists, i.e. one particle can cause the whole system to crash; secondly, and perhaps more profoundly, that the more particles one has, the more likely this is. This is a devastating issue when simulating a MV-SDE, since accurately approximating the measure depends on having a large number of interacting particles.
To show this example we take a classical non-globally-Lipschitz SDE, the stochastic Ginzburg-Landau equation (see [28]), and add a simple mean-field term to it. This MV-SDE clearly satisfies the hypotheses needed to have a unique strong solution in $\mathbb{S}^p$ for all $p > 1$; hence in theory one could calculate $\varphi(t) := \mathbb{E}[X_t]$ and obtain a standard SDE with one-sided Lipschitz drift. The analysis carried out in [16] then implies that the Euler scheme diverges here.
Showing particle corruption exists. For our example we simulate N = 5000 particles with a time step h = 0.05, T = 2 and $X_0 = 1$; we also take $\sigma = 3/2$ and $c = 1/2$. We reran this example until we observed a blow-up and plotted the particle paths in Figure 1, which shows the first part of the divergence: all particles are reasonably well behaved until one starts to oscillate rapidly. We have stopped plotting before the time boundary, since this particle diverges shortly after this point. We refer to this particle as the corrupt particle, and it is fairly straightforward to see that it will diverge. However, due to the interaction, this single particle influences all the remaining particles, and the whole system diverges shortly after.
Figure 1: Realizations of the particles in the system. The particle shown by the dashed line (the corrupt particle) starts to oscillate and takes larger values than its surrounding particles.
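A minimal sketch of this experiment with the plain (untamed) Euler scheme. The drift below, a cubic term plus a mean-field coupling with coefficient c, is an assumed stand-in, since the paper's exact equation is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, h = 5000, 2.0, 0.05
sigma, c = 1.5, 0.5
x = np.ones(N)  # X_0 = 1 for every particle
for _ in range(int(T / h)):
    # assumed drift: Ginzburg-Landau-type cubic term + mean-field coupling
    drift = x - x**3 + c * (np.mean(x) - x)
    x = x + drift * h + sigma * x * rng.normal(0.0, np.sqrt(h), N)
print(np.isfinite(x).all(), np.abs(x).max())  # blow-ups appear over reruns
```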
Remark 4.1 (Why is particle corruption so pronounced?). The reason this effect is so dramatic is a simple consequence of the mean-field interaction. Typically, one
observes divergence of the Euler scheme via a handful of Monte Carlo simulations that return extremely large (or infinite) values. When one then looks to calculate the expected value of the SDEs at the terminal time for example, these few events completely dominate the other results. This is summed up in a statement of [16], where an exponentially small probability event has a double exponential impact. The difference in the MV-SDE (weakly interacting particle) case is that the expectation appears inside the simulation, hence a divergence of a single particle influences multiple particles simultaneously during the simulation and not just at the final time.
Convergence of Euler and propagation of chaos is impossible. The above shows that one particle diverging can cause the whole system to diverge. One may argue that using more particles would reduce the dependency between them and hence influence the system less. In fact, as we shall see, the opposite is true: the more particles, the more likely a divergence is. To test this we use the same example as above, but with N = [1000, 5000, 10000, 20000] particles; we rerun each case 1000 times and record the total number of times we observe a divergence over the ensemble.
Table 1: Number of divergences recorded at each particle level out of 1000 simulations.
The results in Table 1 show conclusively that the more particles, the more likely a divergence is to occur. This is a real problem in this setting, since in order to minimize the propagation of chaos error one should take N as large as possible, but doing so makes the Euler scheme approximation likelier to diverge.
Remark 4.2 (Euler cannot work). We have shown that naively applying the standard Euler scheme in the MV-SDE setting with non globally Lipschitz coefficient has issues. However, for standard SDEs there are some simple fixes one can apply and still obtain convergence e.g. removing paths that leave some ball as considered in [23].
Methods like this cannot work here: either we take the ball "small", and our approximation to the law is then poor; or we take a large ball, but then, as particles head towards the boundary, they can "drag" other particles with them, which again makes the system unstable.
The dependence on the measure (the other particles) implies that cruder approximation techniques cannot yield the strong convergence results we obtain with the more sophisticated techniques presented in this paper. In [1] the authors have a non-globally-Lipschitz MV-SDE and simulate it using the standard Euler scheme. Since no divergence was observed in their simulations, they conjectured that the Euler scheme works in their setting; however, they used a "small" diffusion coefficient (σ ∈ [0, 0.5]) and a small particle number (in the order of hundreds), which makes divergence unlikely to be observed (but not impossible) and yields poorer approximation results. Again, our methods provide certainty in terms of convergence (and convergence rate).
Timing of Implicit vs Explicit: Size of cloud and spatial dimension.
It is well documented that implicit schemes are slower than explicit ones, mainly because one must solve a fixed point equation at each step. This operation is not "cheap" and moreover scales as d² in the dimension, see [17]. Of course, this analysis was carried out for standard SDEs; what we wish to consider is how the particle system affects the timing of both methods.
We consider the same example as before (but take T = 1), with dimensions from 1 to 200 and particle numbers from 100 to 20000. The time taken by both methods is plotted in Figure 2. Firstly, we observe that the explicit scheme is two to three orders of magnitude faster than the implicit scheme. At the highest dimension and particle number this difference is very apparent, with the tamed scheme taking approximately 1 minute and the implicit 10 hours. Another point to note is the scaling of each method: both scale similarly with particle number, but the tamed scheme scales linearly with dimension, which is superior to the d² scaling of the implicit scheme.
Even for the case d = 1, N = 20000, the tamed scheme takes approximately 7 seconds while the implicit scheme takes approximately 23 minutes. For many practical applications N = 20000 is not enough for an acceptable level of accuracy; with this in mind, and given the dimension scaling, the implicit scheme is a very expensive method in this setting.
Explicit Vs Implicit Convergence: the Neuron Network Model.
We compare the convergence of the explicit and the implicit scheme. To this end we use the system in [1], where the authors develop a non-globally-Lipschitz MV-SDE to model neuron activity; in our notation, their system has a drift b of FitzHugh-Nagumo type. T = 2 is chosen as the final time, and the parameters are fixed. As the true solution is unknown, to compare the convergence rates we use as a proxy the output of the explicit scheme with 2^23 steps. Since the explicit scheme has convergence rate √h, we know that 2^16 steps and below yield errors one order of magnitude larger. The simulation for 1000 particles and the average root mean square error of each particle are given in Figure 3.
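A minimal sketch of the coupling needed for such a strong-error test: the coarse scheme must reuse the same Brownian path as the fine-grid proxy, so each coarse increment is a sum of fine ones (the step counts below are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
T, M_fine, M_coarse = 1.0, 2**10, 2**4
dw_fine = rng.normal(0.0, np.sqrt(T / M_fine), M_fine)
# each coarse increment aggregates M_fine // M_coarse fine increments
dw_coarse = dw_fine.reshape(M_coarse, -1).sum(axis=1)

# run the scheme at both resolutions with these increments, then compare:
rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))  # per-particle RMSE at T
```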
One can observe that although the implicit scheme initially has a better rate of convergence, it levels off to the expected 1/2 rate, making the explicit scheme the more computationally efficient. Of course, our "true" solution was calculated from the explicit scheme; hence we additionally carried out a similar test with a "true" solution from the implicit scheme, and the results were almost identical. Remark 4.3 (Small Diffusion Setting). Above, we have taken σ_ext = 0.5; this differs from the example in [1], where σ_ext = 0. As it turns out, in the case σ_ext = 0 the implicit scheme has a convergence rate close to 1 (up to an error of around 10^{-4}), while the explicit scheme maintains the standard 1/2 rate. It is our belief that this is because, when σ_ext = 0, the diffusion coefficient makes little difference, hence both schemes revert close to their deterministic convergence rates: the explicit scheme still has rate of order 1/2, while the implicit is of order 1. It may therefore be that in the setting of small diffusion terms the implicit scheme can yield superior results; of course, this is a special case and is not true in general. Obtaining the Density. In some applications, as well as the value of the MV-SDE at the terminal time, one may also be interested in the density (law). In [1, Section 4] the authors compare density estimation using both the Fokker-Planck equation and the histogram from the particle system. The approach using PDEs becomes computationally expensive if one considers multiple populations of MV-SDEs, and hence the authors take a simple case (see [1, Section 4.3]). There are of course other drawbacks, such as dimension scaling, which often make stochastic techniques more favorable in this setting. Moreover, using the PDE one will only obtain the density; if one is further interested in calculating a "payoff", i.e. E[G(X_T)] for some function G,
then an additional integral approximation or Metropolis-Hastings-style sampling scheme is required to calculate this expectation. While [1] apply a basic histogram approach when using MV-SDEs, this does not yield particularly nice results: namely, the resultant density is not a smooth surface. There are, however, many statistical techniques one can use to improve this, see [18, Chapter 18.4] for further results and discussion. Taking the example in [1] (with σ_ext = 0) and applying MATLAB's ksdensity function, we obtain Figure 4.
One can observe the similarity between our result using SDEs and the one obtained in [1, pg 31] using the (expensive) PDE approach.
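For comparison, a minimal Python analogue of the ksdensity step; the terminal particle values below are placeholders:

```python
import numpy as np
from scipy.stats import gaussian_kde

# smooth density estimate from terminal particle values
x_T = np.random.default_rng(0).normal(size=20000)  # placeholder particles
kde = gaussian_kde(x_T)
grid = np.linspace(x_T.min(), x_T.max(), 400)
density = kde(grid)

# the particles also give payoffs directly, e.g. E[G(X_T)], G(x) = max(x, 0)
payoff = np.mean(np.maximum(x_T, 0.0))
```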
Conclusions and future work. We have shown how one can apply techniques from SDEs in the MV-SDE setting, and highlighted some of the pitfalls and challenges that arise. The numerical testing carried out shows that the explicit scheme yields superior results (over the implicit scheme) in general.
Although we have been able to obtain convergence for the implicit scheme, it is under stronger assumptions than for the explicit scheme (the implicit scheme works very well in Section 4.3). The reason for these assumptions is that the implicit scheme is more challenging to bound than the explicit one. The standard approach around this problem is to use stopping time arguments; however, as described in Remark 3.4, stopping times are harder to handle in the MV-SDE framework. Caution is needed to account for the extra technicalities that arise.
It is our belief that Hypothesis 3.5, although sufficient, is not necessary to guarantee that the implicit scheme converges. As research is carried out into stopping times and MV-SDEs, future theoretical developments in this direction may allow this hypothesis to be weakened. We also leave open a proof for the convergence rate of the implicit scheme. Showing such a convergence rate in our framework is clearly possible but adds little in scope, given the gains of the explicit over the implicit scheme. We leave the question open until such a time as a more efficient implicit scheme can be designed.
Another interesting area which we have not discussed is sign preservation and the impact it has on the law. For example, a MV-SDE may be known to be positive; however, if the numerical scheme takes the solution into the negative region, how does the law dependence influence the remaining particles? One can consider the special case of $L_b < 0$ in Hypothesis 2.1: even though the MV-SDE could have a nonnegative solution, the numerical scheme may not preserve this feature.
Proof of Main Results.
We shall use C to denote a constant that can change from line to line but depends only on known quantities: T, d, the one-sided Lipschitz coefficients, etc.
Propagation of Chaos. Let us show the propagation of chaos result.
Proof of Proposition 3.1. Let us fix $1 \le i \le N$; we then approach the proof in the usual way for dealing with one-sided Lipschitz coefficients, namely we apply Itô's formula to the difference $|X^i_t - X^{i,N}_t|^2$ (note the $X^i_0$ cancel), where $\sigma_a$ is the $a$th column of the matrix $\sigma$, hence $\sigma_a$ is a $d$-dimensional vector. Consider the first integral in (5.1). Applying the one-sided Lipschitz property in space and the $W^{(2)}$-Lipschitz property in measure, along with Cauchy-Schwarz, we obtain a corresponding bound. As is done in [6], we introduce the empirical measure constructed from the true solution, i.e. $\mu^N_s := \frac{1}{N} \sum_{j=1}^N \delta_{X^j_s}$. Since $W^{(2)}$ is a metric (see [29, Chapter 6]), the triangle inequality applies, and since $\mu^N_s, \bar\mu^N_s$ are empirical measures, a standard result for the Wasserstein metric gives $W^{(2)}(\mu^N_s, \bar\mu^N_s)^2 \le \frac{1}{N} \sum_{j=1}^N |X^j_s - X^{j,N}_s|^2$. We leave the other $W^{(2)}$ term for the moment and consider the diffusion coefficient in the time integral, since $\sigma$ is globally Lipschitz in space and $W^{(2)}$-Lipschitz in measure for each $a$ (by definition $\sigma_a = \sigma e_a$, with $e_a$ the basis vector, so global Lipschitzness follows from our norm).
One can note this bound is independent of $a$. The final term to bound is the stochastic integral, for which we take the supremum and expectation in (5.1) and apply the Burkholder-Davis-Gundy inequality to remove the stochastic integral. Using Young's inequality $ab \le a^2/2 + b^2/2$, we can bound this term, take the $\frac{1}{2} \sup_{t \in [0,T]} |X^i_t - X^{i,N}_t|^2$ to the other side, note that the supremum over the integrals is attained at $t = T$, and use the bound for the difference in $\sigma$. To deal with the summation term, observe that all $j$ are identically distributed. Therefore, applying Young's inequality to $|X^i_s - X^{i,N}_s|\, W^{(2)}(\mu_s, \bar\mu^N_s)$ and taking the supremum over $i$, the final step follows from Grönwall's inequality. At this point, one could conclude a pathwise propagation of chaos result, see [6, Lemma 1.9]; however, here we are interested in the rate of convergence, which is well understood for $W^{(2)}$. We use the improved version [7, Theorem 5.8] of the classical convergence result [26, Chapter 10.2]: provided $X^i_\cdot \in L^p(\mathbb{R}^d)$ for some $p > 4$, a rate in $N$ holds for any $s$. Using this result with Theorem 2.3 and our hypotheses then completes the proof.
Observe that, putting this together and using Hypotheses 2.1 and 2.2, we obtain an estimate to which Grönwall's lemma applies, with a constant C independent of N and M.
Combined with (5.5), this gives the claimed bound, by the estimate (5.5) and the Burkholder-Davis-Gundy inequality. Since the same estimate applies to the remaining term, we get the desired result here as well. Finally, the stated bound holds for any $t \in [0,T]$ and $1 \le i \le N$ by the former result.
where we use the convention $\sup\{\emptyset\} = -\infty$. Also, we assume that $\bar p \ge 2$, since otherwise there is nothing to prove. Note that we can already apply Lemma 5.2 for $p \le 2$.
We use an inductive argument and start with $p = 2$. In every step we set $q = 2p \wedge \bar p$.
By Itô's formula, taking the supremum over $i$ on both sides and applying Grönwall's lemma, the claimed bound holds for some positive constant C which is independent of N and M.
Since (5.6) is proven for $q$, we can set $p = q$ and use this result in the next step of the iteration. Since the new $q$ is at most twice $p$, Lemma 5.2 can again be applied for $q/2$. This iteration is repeated until $q = \bar p$. Putting this together, we obtain the claim.
5.3. Proof of Implicit Convergence. The main goal here is to prove Proposition 3.7. We loosely follow [22]; however, due to the extra dependencies on time and measure, and further allowing for random initial conditions, we require more refined arguments. We take N as some fixed positive integer. Before considering the implicit scheme, let us show a result on the particle system (2.2).
Proposition 5.4. Let Hypotheses 2.1, 2.2 and H1 (in Hypothesis 3.5) hold, and further let $X_0 \in L^2(\mathbb{R}^d)$. Then second-moment bounds, uniform in $i$, hold for the particle system. Proof. Firstly, let us consider the stopped process $X^{i,N}_{T \wedge \tau^i_m}$. Applying Itô's formula to the square of this process and taking expectations, we use the growth and stopping condition to remove the martingale term, and then the monotone growth condition, the uniform boundedness of $b$ in the measure component, and Grönwall's inequality to obtain the result. A corresponding lower bound also holds. The result then follows by noting that $\mathbb{E}[|X^i_0|^2] = \mathbb{E}[|X_0|^2]$, hence the bounds are independent of $i$, and so we obtain the result for the supremum over $i$.
Let us now return to the implicit scheme. At each time step $t_i$ and for each particle $i$ one needs to solve a fixed point equation; this leads us to consider the function $F$ defined in (5.7). For the implicit scheme to have a solution, the function $F$ must have a unique inverse.
The following lemma is crucial in proving convergence of the implicit scheme.
Lemma 5.5. Let Hypotheses 2.1, 2.2 and H1 (in Hypothesis 3.5) hold and fix $h^* < 1/\max(L_b, 2\beta)$. Further, let $0 < h \le h^*$ and take any $t \in [0,T]$ and $\mu \in \mathcal{P}_2(\mathbb{R}^d)$ fixed; then for all $y \in \mathbb{R}^d$ there exists a unique $x$ such that $F(t,x,\mu) = y$. Hence the fixed point problem in (2.5) is well defined.
Moreover, for all $t \in [0,T]$ and $\mu \in \mathcal{P}_2(\mathbb{R}^d)$ a bound of $x$ in terms of $F(t,x,\mu)$ holds, and for any $i \ge 1$ a recursive bound holds, where $\Delta W^i_{t_k,a}$ is the $a$th entry of the vector $\Delta W^i_{t_k}$. Proof. Let us first prove that there exists a unique solution to (5.7), in the sense that for all $t \in [0,T]$ and $\mu \in \mathcal{P}_2(\mathbb{R}^d)$ fixed, there exists a unique $x \in \mathbb{R}^d$ such that $F(t,x,\mu) = y$ for a given $y \in \mathbb{R}^d$, provided $0 < h < h^*$. This is a classical problem considered in [30, p. 557] (or see [20, p. 2596]), which requires F to be continuous, monotone and coercive (in x). Clearly, since b is continuous, F is continuous. The monotonicity expression for F is clearly > 0 provided $h < 1/L_b$, and coercivity follows similarly from the monotone growth condition on b. Hence $F(t,x,\mu) = y$ has a unique solution for F defined in (5.7), and therefore the numerical scheme (2.5) is well defined.
To show that x is bounded by $F(\cdot,x,\cdot)$, again fix some $t \in [0,T]$ and $\mu \in \mathcal{P}_2(\mathbb{R}^d)$; since $h < 1/(2\beta)$, we obtain the claimed bound. This result is also useful since it holds for all $t \in [0,T]$ and $\mu \in \mathcal{P}_2(\mathbb{R}^d)$. For the recursive bound, a corresponding identity is useful to note. Proposition 5.7. Let Hypotheses 2.1, 2.2 and H1 (in Hypothesis 3.5) hold and fix $h^* < 1/\max(L_b, 2\beta)$. Further assume that $X_0 \in L^4(\mathbb{R}^d)$. Then the second moments of the scheme are bounded, uniformly in $i$ and $0 < h \le h^*$. Proof. Firstly, let us take a nonnegative integer K such that $Kh \le T$. Now consider (5.8); one can note that this bound still holds when the F terms are multiplied by $\mathbb{1}_{\{\lambda^i_m > 0\}}$ (since both sides are nonnegative and the indicator is bounded above by one). Summing both sides from $k = 1$ to $K \wedge \lambda^i_m$, and noting that the F terms cancel, we obtain a discrete estimate. The idea is to apply the discrete version of Grönwall's inequality to this (see for example [24, p. 436] or [22, Lemma 3.4]), which requires our bound to be in terms of F. Using arguments similar to those above, where we have used the independence of $\sigma(\cdot)\mathbb{1}_{\{\lambda^i_m > 0\}}$ and $\Delta W$ along with the growth bounds on $\sigma$, and combining this with our previous bounds and appealing again to Lemma 5.5 (to bound $\tilde X$ by F), we obtain a suitable inequality. Applying a discrete version of Grönwall's inequality and noting $\sum_{k=1}^K 1 \le T/h$ yields the bound. Recalling (5.9), we can apply the same arguments as before to obtain a bound that is now independent of m, and so we can use Fatou's lemma to take the limit (for $K \ge 1$). This is not difficult to obtain by again using that we can bound $\tilde X$, and then applying the same bound on F as above.
In order to complete the proof, we also need to show that this bound holds for all $i$ and $0 < h \le h^*$. One can see immediately that all bounds decrease as h decreases; hence the supremum is attained at $h = h^*$, which is also finite since $h^* < 1/(2\beta)$. The supremum over $i$ follows from the fact that all bounds are independent of $i$. Now that we have established a bound on the second moment, we look to show convergence of this scheme to the true particle system solution. As always with discrete schemes, it is beneficial to introduce their continuous counterpart. As it turns out, doing this naively for implicit schemes leads to measurability problems; hence one introduces the so-called forward-backward scheme (5.11). The first result we wish to present is that the discrete and continuous versions stay close to one another, up to the stopping time (5.10).
Lemma 5.8. Let Hypotheses 2.1, 2.2 and H1 (in Hypothesis 3.5) hold and fix $h^* < 1/\max(L_b, 2\beta)$. Further assume $X_0 \in L^{4(q+1)}(\mathbb{R}^d)$. Then for $1 \le p \le 4$ the discrete and continuous schemes stay close in $p$-th moment for $0 < h \le h^*$. Moreover, we also have a relation between $\tilde X$ and F for all $1 \le k \le M$. Proof. To show the first part we start by noting a useful relation between (2.5) and (5.11), namely for $1 \le k \le M$. One can then bound the difference, where we have used the growth bounds on the coefficient b, in particular Hypothesis H1. To proceed we note a further estimate, from which the result holds. The next step is of course to take the expectation inside the integral. Let us start by noting that the difference term can be bounded, where we have used Lemma 5.8 for the final inequality. For the other terms, one can appeal to the growth assumptions on b. The term involving $\sigma$ is more complex; however, we can bound it similarly. Hence, the final inequality follows from Grönwall.
In order to obtain an upper bound on the probability of the stopping time occurring, we look for a lower bound on (5.11). In the case that $\tilde X$ hits the boundary first, the lower bound is obvious, namely $|\tilde X^{i,N,M}_{\eta^i_m}| = m$. For the second case it is less obvious: recalling (5.12) and Lemma 5.5, we obtain a lower bound, where again we take $k \ge 1$; this is not a problem since we are assuming for the moment $|X^i_0| < m$. Observing that this lower bound holds independently of which process triggers the stopping condition, we can say w.l.o.g. that the bound holds. Leaving the second term for the moment, one observes that for any $\varepsilon > 0$ and $m$ sufficiently large (call this point $m^*$), the first term is small, since $X^i_0$ is uniformly integrable. It is also useful to note that $\mathbb{P}(\{|X^i_0| < m\} \cap \{0 < \eta^i_m < T\}) = \mathbb{P}(\{0 < \eta^i_m < T\})$. It is clear from our previous analysis that, for $m$ large enough and using (5.13), the probability can be bounded. Now the goal is to bound this by $2\varepsilon/3$; we have already taken $m$ sufficiently large to obtain the last inequality. Now, for any given $m$, define $h^*_{01}(m)$ by $C^2 h^*_{01}(m) + C(m) h^*_{01}(m)^2 \le 1$; it is clear that for $0 < h < h^*_{01}(m)$ the same bound holds. Then, for the same $\varepsilon$ as before, choose $m$ large enough such that the corresponding term is controlled, and redefine $m^*$ as the maximum of this $m$ and the previous $m^*$. Now, for any $m \ge m^*$, define $h^*_{02}(m)$ such that the remaining term is controlled; again, for $0 < h < h^*_{02}(m)$ the above inequality holds. Hence, for any $m \ge m^*$ and any $0 < h < \min(h^*_{01}(m), h^*_{02}(m))$, we have
$$\mathbb{P}(\eta^i_m < T) \le \mathbb{P}(\eta^i_m = 0) + \mathbb{P}(0 < \eta^i_m < T) \le \varepsilon.$$
We now look towards showing our strong convergence result, firstly by showing convergence between (5.11) and (2.2), and then between (2.5) and (2.2). From this point onwards we require H2 (in Hypothesis 3.5).
Remark 5.10 (On the diffusion coefficient σ being independent of the measure). The reason we cannot allow σ to have measure dependence is that our stopping time arguments would not work. Namely, in order for two diffusion coefficients to be similar, we require all N particles to be close to one another, not just the ith particle. As it turns out, though, this is not a problem for the drift term, so we make no change to the measure dependence there.
Recalling the stopping time in Proposition 5.4, we now define $\theta^i_m := \tau^i_m \wedge \eta^i_m$ and have the following convergence result.
Lemma 5.11. Let Hypotheses 2.1 and 2.2 and the full Hypothesis 3.5 hold, fix $h^* < 1/\max(L_b, 2\beta)$, and assume $X_0 \in L^{4(q+1)}(\mathbb{R}^d)$. Then a convergence result up to the stopping time $\theta^i_m$ holds. Proof. Ultimately we need to take suprema and expected values; hence we wish to bound the relevant difference.
"year": 2018,
"sha1": "0867399422500a7bfbe2205df2396a33cae6488a",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1808.05530",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8baca247f40b3c187792c49ca5b6240c44847061",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Urban tolerance is phylogenetically constrained and mediated by pre-adaptations in African bats
Abstract With increasing urbanization, particularly in developing countries, it is important to understand how local biota will respond to such landscape changes. Bats comprise one of the most diverse groups of mammals in urban areas, and many species are threatened by habitat destruction and land use change. Yet, in Africa, the response of bats to urban areas is relatively understudied. Therefore, we collated data on urban presence, phylogenetic relationship, and ecological traits of 54 insectivorous bats in Africa from available literature to test if their response to urbanization was phylogenetically and/or ecologically driven. Ancestral state reconstruction of urban tolerance, defined by functional group and presence observed in urban areas, suggests that ancestral African bat species could adapt to urban landscapes, and significant phylogenetic signal for urban tolerance indicates that this ability is evolutionarily conserved and mediated by pre‐adaptations. Specifically, traits of high wing loading and aspect ratio, and flexible roosting strategies, enable occupancy of urban areas. Therefore, our results identify the traits that predict which bat species will likely occur in urban areas, and which vulnerable bat clades conservation efforts should focus on to reduce loss of both functional and phylogenetic diversity in Africa. We, additionally, highlight several gaps in research that should be investigated in future studies to provide better monitoring of the impact urbanization will have on African bats.
species such as domestic cats (Marzluff & Ewing, 2008). On the other hand, some species thrive in urban landscapes. For instance, scavengers benefit from the build-up of garbage (O'Connor, 1993), opportunistic insectivorous bats hunt large and predictable swarms of insects near urban waterbodies and street lights (Naidoo et al., 2011; Schoeman, 2016; Stone et al., 2015), and some large mammals find shelter and refuge from natural predators in urban structures (Bateman & Fleming, 2012; Marzluff & Ewing, 2008).
Thus, urbanization can be beneficial for certain taxa and maintain, or even increase, biodiversity (Lee et al., 2021; McKinney, 2008).
Understanding the way individual species respond to urban areas is important for sustainable urban development and conservation of resident species, particularly in biodiversity-rich regions.
Bats (order Chiroptera) are often overlooked as part of urban wildlife (Voigt & Kingston, 2016). However, bats make up a fifth of all mammal species and are often the most diverse mammal group in urban areas (Jung & Threlfall, 2016). Insectivorous bats are ecologically important worldwide, particularly in the control of disease and insect pests (Kunz et al., 2011). However, many species are threatened (IUCN, 2021; Racey, 2009), with urbanization causing habitat destruction and fragmentation that is detrimental to bat populations (Mickleburgh et al., 2002). While some bat species largely avoid urban areas, other species are abundant and take advantage of the novel foraging and roosting sites in urban areas (Avila-Flores & Fenton, 2005; Jung & Threlfall, 2016; Schoeman, 2016). Based on their response to urban areas, insectivorous bats can be classified into three groups: urban exploiters, urban adapters, or urban avoiders (Jung & Kalko, 2011; McKinney, 2002). Urban exploiters are species that are almost dependent on urban resources and can become abundant in urban areas, urban adapters are common in suburban areas and readily use urban resources but are not reliant on them, and urban avoiders hardly occur in urban areas, unable to use urban resources (McKinney, 2002).
Insectivorous bats are divided into three functional groups:
open-air, narrow-edge, and narrow-space bats (Denzinger & Schnitzler, 2013). Species are adapted to each of these environments via specialized wing morphology and echolocation for locomotion as well as optimal prey detection and capture (Aldridge & Rautenbach, 1987;Schnitzler & Kalko, 2001). Open-air foragers have wings with high aspect ratios, high wing loading, and pointed wing tips that enable fast flight over long distances, and echolocation with low frequencies and long duration that are optimal to detect prey in open spaces without background clutter (Denzinger & Schnitzler, 2013). Narrow-edge space foragers have intermediate aspect ratios and wing loading with rounded tips which favor flexible foraging at the edge of vegetation and open spaces. Their echolocation characteristics enable narrow-edge space bats to detect prey in the vicinity of clutter, but where there is enough space that prey and background signals do not overlap (Denzinger & Schnitzler, 2013).
The wings of narrow space foragers have low aspect ratios and wing loading with very rounded tips enabling agile flight in narrow spaces, with echolocation well adapted to detect echoes of insects against the cluttered background's interference (Denzinger & Schnitzler, 2013). Roosting ecology likewise varies among species (Bergeson et al., 2015; Jung & Threlfall, 2016; Schoeman, 2016). Thus, in combination, the functional traits and roosting ecology of insectivorous bats may determine their tolerance of urbanization (Jung & Threlfall, 2018).
Bat families generally have distinct functional traits, and hence, likelihood of presence in urban habitats (Denzinger & Schnitzler, 2013;Jung & Threlfall, 2016). For instance, Molossidae, the free-tailed bats, are open-air foragers with flexible roost preferences in crevices, tombs, and houses, whereas Rhinolophidae, the horseshoe bats, are narrow space bats that are obligate cave roosters (Denzinger & Schnitzler, 2013). Consequently, Molossidae are frequently found foraging and roosting in urban areas (Avila-Flores & Fenton, 2005), whereas Rhinolophidae are often conspicuously absent (Jung & Threlfall, 2018;Schoeman, 2016;Schoeman & Waddington, 2011). Although many family groups generally fit into one of these functional groups (see Denzinger & Schnitzler, 2013), some families such as Vespertilionidae are more variable, with different species belonging to various functional groups (Jung & Threlfall, 2016;Monadjem et al., 2020). Therefore, responses to urbanization may be underpinned by phylogenetic history (measured by phylogenetic signal) where closely related species are significantly more likely to respond similarly to urban areas than distantly related species (Blomberg & Garland, 2002). Jung and Threlfall (2018) suggested that phylogenetic relationships may play some role in urban tolerance of bats but indicated the need for further studies to confirm this. Phylogenetic conservatism may hinder evolutionary adaptations that enhance the ability of species to utilize resources in urban habitats (Ackerly, 2009). Pre-adaptations, traits evolved to previous conditions that serve as an advantage in the novel environment (Blomberg & Garland, 2002), often mediate the successful invasion into novel environments (Bock, 1959), and subsequent rapid adaptive evolution allows persistence in the new environment (Jenkins & Keller, 2011;Sultan et al., 2012;Whitney & Gabler, 2008).
How these processes contribute to urban success of bats has rarely been studied, yet is key to predicting extinction risks and formulating effective conservation measures.
In this light, we asked what role evolutionary history played in current patterns of urban tolerance of African insectivorous bats.
Africa is a fast-developing continent where 20% of its 320 bat species are listed as threatened (ACR., 2018;IUCN, 2021;United Nations, 2014), yet it is markedly understudied compared to other continents (Collins et al., 2021;Magle et al., 2012). We tested the phylogenetic signal of urban tolerance (in terms of urban avoider, adapter, or exploiter status) in African bat species, and reconstructed the ancestral state of urban tolerance. If successful urban exploiters were pre-adapted for urban areas, we predicted significant phylogenetic signal in urban tolerance, with the reconstructed ancestral node in the urban exploiter state. We also tested for evidence of co-evolution between urban presence and the functional traits and roosting ecology of bat species. Previous studies found that high wing aspect ratio, low peak echolocation frequency, and high roost specificity were important traits in urban exploiters in other regions (Jung & Threlfall, 2018;Wolf et al., 2022). Thus, we predicted significant correlations between urban presence and echolocation, wing morphology, and roost specificity for African species.
We categorized bats into urban exploiters, adapters, or avoiders based on their functional group and reported presence in urban areas.
Phylogenetic analyses
We used the super-tree of Jones et al. (2005) and pruned it to the 54 bat species for which we had ecological data, using the geiger (Harmon et al., 2008) and ape (Paradis & Schliep, 2019) packages of R statistical software version 4.1.0 (R Core Team, 2021). We used the "fix.poly" function in RRphylo (Castiglione et al., 2021) to resolve the polytomies of this tree for all subsequent phylogenetic analyses.
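The tree-preparation step just described can be sketched in a few lines of R. This is an illustrative reconstruction, not the authors' script; the file names and the `urban_df` data frame are assumptions.

```r
library(ape)      # read and prune trees (Paradis & Schliep, 2019)
library(RRphylo)  # fix.poly to resolve polytomies (Castiglione et al., 2021)

# Assumed inputs: the Jones et al. (2005) supertree and a trait table
# whose row names are the 54 study species.
supertree <- read.nexus("jones2005_supertree.nex")
urban_df  <- read.csv("african_bat_traits.csv", row.names = 1)

# Prune the supertree to the species with ecological data.
tree <- drop.tip(supertree, setdiff(supertree$tip.label, rownames(urban_df)))

# Resolve polytomies so downstream likelihood methods see a binary tree.
tree <- fix.poly(tree, type = "resolve")
```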
To test phylogenetic signals and reconstruct ancestral states, the model of evolution for the trait in question must be known.
Therefore, we first determined the model of evolution of urban tolerance among states of "urban exploiter," "urban avoider," and "urban adapter" in the pruned phylogeny, using the "fitDiscrete" function in the geiger package. We compared the fit of the three models of evolution for urban tolerance using weighted Akaike's information criterion (AIC). Discrete characters can evolve under three models of evolution that govern the rate at which a trait is likely to evolve along the branches of the tree: equal rates (ER; the trait evolves at a uniform rate across the tree regardless of which states it is changing between), all-rates-different (ARD; the trait evolves at different rates across the tree regardless of which states it is changing between), and symmetric models (SYM; the rate of evolution varies across the tree but the rate of change between two states is symmetrical in that the forward and backward rates of evolution between those two particular states are equal). The best fitting model based on weighted AIC comparison was then used as the model of evolution to test phylogenetic signal and reconstruct ancestral states.
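A minimal sketch of this model comparison in R, assuming `tree` from above and an assumed named vector `tol_states` (names = tip labels, values "avoider", "adapter", or "exploiter"):

```r
library(geiger)  # fitDiscrete (Harmon et al., 2008)

# Fit the three models of evolution for the discrete character.
fits <- sapply(c("ER", "SYM", "ARD"), function(m)
  fitDiscrete(tree, tol_states, model = m), simplify = FALSE)

# Akaike weights: relative support for each model of evolution.
aic  <- sapply(fits, function(f) f$opt$aic)
daic <- aic - min(aic)
aicw <- exp(-0.5 * daic) / sum(exp(-0.5 * daic))
round(aicw, 2)  # the text reports roughly SYM = 0.84, ARD = 0.14, ER = 0.02
```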
We measured the degree of phylogenetic signal using the same function "fitDiscrete" in geiger, with the tree transformation of Pagel's lambda (λ). This provides a value for Pagel's λ between 1 and 0, where 1 = strong phylogenetic signal and 0 = no phylogenetic signal. Included in the output is the estimated AIC value for the tree. We tested the fit of this lambda value against a lambda value of 0 for urban tolerance evolution on the tree by creating a tree of lambda = 0 and comparing the weighted AIC values calculated in each. We reconstructed the ancestral state of urban tolerance with stochastic character mapping of the joint posterior probabilities of the internal nodes of the tree (Bollback, 2006;Huelsenbeck et al., 2003) using the packages phytools (Revell, 2012) and ape. We ran 1000 simulations and plotted the average probabilities for each node in any given state as a pie chart at each node onto the phylogenetic tree.
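The signal test and ancestral reconstruction could look as follows (a sketch under the same assumed objects): `rescale` builds the λ = 0 comparison tree, and `make.simmap` performs the stochastic character mapping with 1000 simulations, as above.

```r
library(geiger)
library(phytools)  # make.simmap and plotting (Revell, 2012)

# Pagel's lambda under the best-fitting (SYM) model.
fit_lambda <- fitDiscrete(tree, tol_states, model = "SYM", transform = "lambda")
fit_lambda$opt$lambda                 # reported as 0.78 in the text

# Compare against a lambda = 0 tree via the AIC values of each fit.
tree0    <- rescale(tree, model = "lambda", 0)
fit_null <- fitDiscrete(tree0, tol_states, model = "SYM")
c(lambda_tree = fit_lambda$opt$aic, lambda0_tree = fit_null$opt$aic)

# Stochastic character mapping: 1000 simulations, node probabilities
# summarised and drawn as pies at the internal nodes.
maps <- make.simmap(tree, tol_states, model = "SYM", nsim = 1000)
smry <- summary(maps)
plot(smry, colors = setNames(c("blue", "red", "green"),
                             c("adapter", "avoider", "exploiter")))
```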
Finally, we tested which traits (echolocation, wing loading, aspect ratio, or roost specificity) significantly predicted the presence/absence of bat species in urban areas. Roost specificity was coded as dummy variables. We fit the phylogenetic generalized linear model (PGLM) developed by Ives and Garland (2010) using the package Phylolm (Ho & Ané, 2014). Parametric bootstrapping provided confidence intervals for the estimates. We used an alpha value of .05 to determine significance of parameters. The model phylogenetic signal (α = .02) was low.
A standard GLM run in the R base package yielded slightly different results, indicating that the phylogeny affects the results even at this low α. We therefore report the results of the PGLM. We also estimated the phylogenetic signal of the dependent variable using the same function (without independent variables), as well as the D estimate (Fritz & Purvis, 2010) using "phylo.d" in the caper package (Orme, 2012). We estimated the phylogenetic signal of the continuous traits with "phylo.sig" in Phytools and the phylogenetic signal of roost specificity (categorical data) with "fitDiscrete" as above.
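A condensed sketch of these trait models follows; the column names in `traits` (`urban`, `peak_freq`, `wing_loading`, `aspect_ratio`, `roost_spec`) are placeholders, not the study's actual variable names.

```r
library(phylolm)  # phyloglm (Ho & Ané, 2014)
library(caper)    # phylo.d (Orme, 2012)
library(phytools) # phylosig

# Phylogenetic logistic regression with parametric-bootstrap CIs.
pglm <- phyloglm(urban ~ peak_freq + wing_loading + aspect_ratio + roost_spec,
                 data = traits, phy = tree,
                 method = "logistic_MPLE", boot = 1000)
summary(pglm)  # the fitted phylogenetic-signal alpha is reported as 0.02

# Non-phylogenetic counterpart, to see how much the tree changes estimates.
glm0 <- glm(urban ~ peak_freq + wing_loading + aspect_ratio + roost_spec,
            data = traits, family = binomial)

# Signal in the binary response (Fritz & Purvis' D) and in a
# continuous predictor (Pagel's lambda).
cd <- comparative.data(tree, data.frame(species = rownames(traits), traits),
                       names.col = species)
phylo.d(cd, binvar = urban)
phylosig(tree, setNames(traits$aspect_ratio, rownames(traits)),
         method = "lambda", test = TRUE)
```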
RESULTS
Urban tolerance in these African bat species evolved under a "symmetrical" evolutionary model. Here, the rate of change between two states of urban tolerance is not constrained to equal the rate of change between any other two states, but the forward and backward rates between the same two states are equal (Figure 1). All transition rates were low, but the highest rate of switching occurred between the urban avoider and urban adapter states, implying that transitions between these two states were most common within the phylogeny (Figure 1).
Pagel's λ for urban tolerance was 0.78, indicating a significant phylogenetic signal in the way urban tolerance is distributed across African bat species; this was further supported by the weighted AIC support of 0.99 for this model over the λ = 0 alternative. The reconstructed ancestral state of urban tolerance was most likely (48%) "urban adapter" (root node state: urban adapter = 0.48, urban avoider = 0.45, and urban exploiter = 0.07; Figure 2). The "urban avoider" state was almost equally likely at the root, whereas "urban exploiter" is the most derived state, evolving once early in Molossidae and, more recently, once in Vespertilionidae and once in Emballonuridae (Figure 2).
FIGURE 1 Evolutionary changes between states of urban tolerance represented by the "symmetrical" model (SYM). Transitions among states (avoider, adapter, or exploiter) are shown for the trait "urban tolerance." Transitions are represented by double-ended arrows (change can occur in either direction between states). Values indicate the rates of change between each pair of states, under a model where the rate between each pair is allowed to differ, but the rate of switching forward or backward within a pair of states is equal. The weighted AIC value for this model = 84% support, compared to the "equal rates" (2%) and "all-rates-different" (14%) models.
DISCUSSION
This study is the first to investigate the evolutionary drivers of urban tolerance in African bats. We found significant phylogenetic signal in urban tolerance among insectivorous African bat species, and the ancestral state aligned with both urban adapters and urban avoiders. Therefore, the ancestral bat of these African species was likely a narrow-edge space forager with traits to successfully utilize edge habitats. Transition rates between states of urban tolerance were low (Bell et al., 2017). These low rates are probably the reason for the significant phylogenetic signal for urban tolerance among extant bat species. These results indicate that bat species are most likely to inhabit environments they are well suited to rather than undergo rapid adaptive evolution (Ackerly, 2009). Therefore, tolerance of urban areas is mediated by pre-adaptations that evolved in non-urban environmental conditions and were present in the common ancestor of these bats (Ackerly, 2009; Blomberg & Garland, 2002).
FIGURE 2 Phylogenetic tree of African bats with known states of urban tolerance at the tips and calculated posterior probabilities for states of internal nodes. The lambda value for urban tolerance on this tree = 0.78. The probabilities of the ancestral node in each state = 0.48 for urban adapter, 0.45 for urban avoider, and 0.07 for urban exploiter. States of urban tolerance are color coded (urban exploiter = green, urban adapter = blue, and urban avoider = red). Superfamily groups are indicated on the right (after ACR, 2018).
Similarly, in birds, urban tolerance is characterized by a suite of pre-adapted traits, such as short flight distances (Møller, 2009) and high-frequency songs (Hu & Cardoso, 2009), and urban exploiters are mostly from particular clades (Sol et al., 2017). These results suggest that the spread of urbanization may be linked to marked loss of phylogenetic diversity in local assemblages (Callaghan et al., 2021; Sol et al., 2017).
On the other hand, we found that the urban exploiter state recently appeared in two clades, Emballonuridae and Vespertilionidae.
Generally, long lifespans and generation times, like those of bats, decrease adaptation rates (Jones et al., 2003). However, it is possible that in established urban populations, rapid evolution can work in tandem with pre-adaptations to promote persistence of these populations (Jenkins & Keller, 2011;Yeh, 2004). Moreover, strong novel selection pressure may act on mechanisms of phenotypic or behavioral plasticity such that populations rapidly shift the way they use resources in the environment, without genotypic or evolutionary change (Charmantier et al., 2008;Garland & Kelly, 2006).
For example, some urban fruit bat populations have adjusted their diets (Egert-Berg et al., 2021), and some urban birds can alter their song frequency (Slabbekoorn & den Boer-Visser, 2006) in response to noise in urban areas. In insectivorous bats, echolocation peak frequency and bandwidth may display plasticity as bats can adjust these to prevent masking from acoustic interference altitudinally, geographically, and in response to some anthropogenic noises (Bunkley et al., 2015;Gillam et al., 2009;Jiang et al., 2015). Thus, the role of adaptive phenotypic plasticity in insectivorous bats should be further investigated as an avenue of adapting to urbanization.
In support of our predictions, wing morphology and roost specificity best predicted the presence of bats in urban areas.
Specifically, bats pre-adapted for urban areas have high wing loading and aspect ratio, and low-to-medium roost specificity. Bats with intermediate-to-high wing loading and aspect ratios are highly mobile, with good dispersal abilities and moderate-to-fast flight speeds (Arita & Fenton, 1997;Denzinger & Schnitzler, 2013;Norberg & Rayner, 1987). These traits are beneficial in urban environments because resources are distributed patchily across the landscape (Jung & Kalko, 2011;Jung & Threlfall, 2018;Piano et al., 2017). Moreover, in urban areas, artificial night lighting is ubiquitous, and provides an important source of concentrated insect prey for narrow-edge space and open-air species (Gaisler et al., 2006;Schoeman, 2016;Tomassini et al., 2014), whereas slow-flying bats with low aspect ratio and wing loadings avoid lit areas and instead rely on vegetated habitats (Hourigan et al., 2006;Jung & Kalko, 2010;Rydell, 1992).
These traits also display strong phylogenetic signals and therefore allow conclusions about species' responses to urban areas based on evolutionary history. Our results support those of a global meta-analysis (Jung & Threlfall, 2018) that found high aspect ratio and flexible roosting strategies promote urban tolerance. Although we found that high wing loading was also a significant driver of urban tolerance, the global analysis included few African species (Jung & Threlfall, 2018). Similarly, Wolf et al. (2022) suggest that flexible roosting strategies were important for urban tolerance, in addition to low echolocation peak frequency and broad bandwidth duration.
Overall, it appears that high mobility and flexible roost habits are the most important predictors of urban tolerance and can be used to determine species-specific responses to urban areas for bats (Jung & Kalko, 2011;Jung & Threlfall, 2018).
Some African bat species with traits that favor wide dispersal, such as the open-air Nyctalus species, Tadarida fulminans, and Taphozous nudiventris, were absent from urban environments.
Although this may be due to the lack of records of these species in African urban areas, roost specificity (e.g., T. fulminans) or dietary requirements may prevent species from occupying urban areas (Jung & Threlfall, 2018; Palacio, 2019). Roosts are crucial resources for bats, and often limiting in natural habitats (Mickleburgh et al., 2002; Zukal et al., 2017). Urban areas provide various roost types including roofs of houses, crevices in the walls of buildings, attics, and the eaves of houses (Monadjem et al., 2020; Voigt & Kingston, 2016). Bat species that select roosts in buildings over natural roosts gain significant reproductive benefits and can proliferate in urban areas. This corroborates previous findings that roosting ecology determines the presence of bat species in urban areas (Duchamp et al., 2004; Jung & Threlfall, 2018; Wolf et al., 2022). Furthermore, roost specificity exhibited little phylogenetic signal, and its importance in determining which species inhabit urban areas may explain why urban presence had relatively low phylogenetic signal. Because relatively little is known about the roosting or dietary ecology of many African bat species (Monadjem et al., 2020), more research on roost and diet requirements is an important step to identify species vulnerable to urbanization.
Although more than 50% of the bat species in this study appeared to be sensitive to urbanization, only 12 were classified as urban exploiters (Taylor et al., 1999). Unfortunately, published studies do not report the relevant levels of urbanization. These data are important to compare with levels of urbanization in Africa. Future studies should utilize data reported in a standardized manner (Wolf et al., 2022), controlling for the level of urbanization, surrounding micro-habitats, and broad-scale land use.
Our results show that resident urban bat species are pre-adapted to successfully occupy urban environments. African bat species that are found in urban landscapes belong to particular phylogenetic groups and exhibit particular ecological traits, including high mobility and flexible roosting strategies. Consequently, urbanization will probably reduce both the functional and phylogenetic diversity of local bat faunas (McKinney, 2006; Morelli et al., 2016; Schoeman, 2016; Sol et al., 2017, 2020). This homogenization of bat diversity may lead to loss of key ecosystem services such as pest and disease control (Kalda et al., 2015; Kunz et al., 2011). Narrow space-adapted species with high roost specificity are the most vulnerable to the effects of urbanization in Africa. Therefore, conservation efforts and urban planning should focus on preserving suitable roost and foraging habitats for these species (McKinney, 2006; Morelli et al., 2016). Because bats of the African continent remain relatively understudied (Voigt & Kingston, 2016), more ecological and evolutionary data, particularly at fine geographic scales, are necessary to ensure that such conservation efforts are successful in urban landscapes across the continent.
ACKNOWLEDGEMENTS
This research was funded by the University of KwaZulu-Natal and the South African National Research Foundation. We thank three anonymous reviewers who helped improve a previous version of the manuscript.
CONFLICT OF INTEREST STATEMENT
The authors declare no conflict of interest.
DATA AVAILABILITY STATEMENT
The full data set is available online at Dryad repository, DOI https:// | 2023-03-11T05:06:50.854Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "0e5c7d7f5e81924e5924ad202b3e72c66c5b911f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "0e5c7d7f5e81924e5924ad202b3e72c66c5b911f",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257252907 | pes2o/s2orc | v3-fos-license | Prognostication of progressive pulmonary fibrosis in connective tissue disease-associated interstitial lung diseases: A cohort study
Background Connective tissue disease-associated interstitial lung disease (CTD-ILD) is a heterogeneous condition that impairs quality of life and is associated with premature death. Progressive pulmonary fibrosis (PPF) has been identified as an important risk factor for poor prognosis. However, different criteria for PPF are used in clinical studies, which may complicate comparison between trials and translation of study findings into clinical practice. Methods This is a retrospective single center study in patients with CTD-ILD. The prognostic relevance of PPF definitions, including the INBUILD, ATS/ERS/JRS/ALAT 2022, and simplified progressive fibrosing (simplified PF) criteria, was examined in this cohort and validated in the other reported Dutch CTD-ILD cohort. Results A total of 230 patients with CTD-ILD were included and the median follow-up period was six (3–9) years. Mortality risk was independently associated with age (adjusted HR 1.07, p < 0.001), smoking history (adjusted HR 1.90, p = 0.045), extent of fibrosis on high-resolution computed tomography (HRCT) at baseline (adjusted HR 1.05, p = 0.018) and baseline DLCO (adjusted HR 0.97, p = 0.013). Patients with regular pulmonary function tests in the first 2 years (adjusted HR 0.42, p = 0.002) had better survival. The prognostic relevance for survival was similar between the three PPF criteria in the two cohorts. Conclusion Higher age, smoking, increased extent of fibrosis and low baseline DLCO were associated with poor prognosis, while regular pulmonary function evaluation was associated with better survival. The INBUILD, ATS/ERS/JRS/ALAT 2022, and simplified PF criteria revealed similar prognostication.
Introduction
Connective tissue diseases (CTD) are characterized by dysregulation of the immune system resulting in inflammation and subsequent tissue damage followed by fibrosis. In CTDs with lung involvement, inflammation and/or fibrosis of pulmonary parenchyma leads to deterioration of lung function, cough and shortness of breath. Interstitial lung disease (ILD) occurs in approximately 15% of CTD patients, depending on the type of CTD, and is associated with high mortality and decreased quality of life (1).
The disease course of CTD-associated ILD (CTD-ILD) is heterogeneous. Therefore, clinical characteristics and risk factors for poor prognosis are crucial in managing patients with CTD-ILD. In previous studies, several biomarkers, fibrotic high-resolution computed tomography (HRCT) at baseline, older age, smoking, steroid use and progressive pulmonary fibrosis have been identified as predictors of poor prognosis in CTD-ILD (2)(3)(4).
Particularly, rapid deterioration of respiratory symptoms and lung function with progressive fibrosis on HRCT is referred to as progressive fibrosing interstitial lung disease or progressive pulmonary fibrosis (PPF) (3,(5)(6)(7). Identification of patients with PPF is crucial for clinical practice, as these patients have a poor prognosis and may benefit from antifibrotic drugs, similar to patients with idiopathic pulmonary fibrosis (IPF) in randomized controlled trials (8,9); however, the criteria defining PPF differ between studies. Furthermore, the American Thoracic Society, European Respiratory Society, Japanese Respiratory Society, and Asociación Latinoamericana de Tórax (ATS/ERS/JRS/ALAT) defined scientific-society-approved criteria in the 2022 guideline (7). The variety of criteria complicates study comparison and clinical implementation. In this study, we aimed to explore the prognostic relevance of the different PPF criteria in patients with CTD-ILD.
Study population
This is a single center retrospective cohort study performed at the ILD Center of Excellence, St. Antonius Hospital, Nieuwegein, Netherlands. Patients diagnosed with CTD-ILD or interstitial pneumonia with autoimmune features between 2005 and 2021 were included when at least a baseline HRCT was available (10-12). Baseline was defined as the time of ILD diagnosis. All patients were discussed in multidisciplinary team meetings. Clinical characteristics, laboratory results and pulmonary function tests (baseline, 6 months, 1 year, and 2 years) were retrieved from the electronic medical records. This study was approved by the Medical Research Ethics Committees United (MEC-U, number R05-08A) and all patients provided written informed consent.
Pulmonary imaging
High-resolution computed tomography results were collected at baseline, 1 and 2 years. Baseline HRCT patterns were classified according to the classification for idiopathic interstitial pneumonia (13,14), as consistent with usual interstitial pneumonia (UIP), probable UIP, indeterminate for UIP, or alternative diagnosis. Probable and consistent with UIP were summarized as UIP. The alternative diagnoses were then classified as non-specific interstitial pneumonia [NSIP, including fibrotic, cellular, or mixed (15)], lymphocytic interstitial pneumonia (LIP), organizing pneumonia (OP), desquamative interstitial pneumonia, nodular lymphocytic hyperplasia, pleuroparenchymal fibro-elastosis and acute interstitial pneumonitis (AIP). The predominant HRCT features were categorized as fibrotic, including features such as reticulation and honeycombing, or inflammatory, including ground-glass opacity and consolidation (3,(16)(17)(18). The changes in fibrosis and inflammation over time were classified as progression, stable, or regression. Extent of fibrosis on HRCT was evaluated at all time points. HRCTs were evaluated by two experienced thoracic radiologists who were blinded to clinical information and pathology diagnosis.
Criteria for progression
The INBUILD criteria included patients with ≥10% relative decline in percentage of predicted forced vital capacity (FVC), ≥5 and <10% relative decline in FVC with progressive fibrosis on HRCT or worsening of respiratory symptoms, or deterioration of both HRCT fibrosis and respiratory symptoms within 2 years despite standard (anti-inflammatory) treatment (8). The ATS/ERS/JRS/ALAT 2022 criteria were met with at least two of the following criteria; worsening of respiratory symptoms, fibrotic progression on HRCT and lung function deterioration [≥5% absolute decline in FVC and/or ≥10% absolute decline in percentage of predicted hemoglobin adjusted diffusing capacity of the lung for carbon monoxide (DLCO)] occurring within 1 year and without alternative explanation (7). The simplified progressive fibrosing (simplified PF) criteria were met with any of the following: ≥10% relative decline in FVC, ≥15% relative decline in DLCO, or progression of fibrosis on HRCT within 2 years [Supplementary Table S1; (3,6)].
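To make the rule concrete, the simplified PF definition can be encoded as below. This is an illustrative sketch only: the INBUILD and ATS/ERS/JRS/ALAT definitions additionally require symptom and treatment information not captured by these numbers, and the input names are placeholders, not the study's variables.

```r
# Simplified PF within 2 years: >=10% relative FVC decline, OR
# >=15% relative DLCO decline, OR progression of fibrosis on HRCT.
simplified_pf <- function(fvc0, fvc2, dlco0, dlco2, hrct_fib_increase,
                          fib_threshold = 0) {
  rel_fvc_decline  <- 100 * (fvc0 - fvc2) / fvc0
  rel_dlco_decline <- 100 * (dlco0 - dlco2) / dlco0
  (rel_fvc_decline >= 10) |
    (rel_dlco_decline >= 15) |
    (hrct_fib_increase > fib_threshold)  # fib_threshold = 5 for the >=5% variant
}

# Example: FVC 80% -> 70% predicted is a 12.5% relative decline, so PPF = TRUE.
simplified_pf(fvc0 = 80, fvc2 = 70, dlco0 = 60, dlco2 = 58,
              hrct_fib_increase = 0)
```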
The prognostic relevance for mortality over time was evaluated for the INBUILD criteria, the ATS/ERS/JRS/ALAT 2022 criteria, and simplified PF criteria. The prognostic relevance of the three PPF criteria was then validated in a previously published Dutch CTD-ILD cohort at University Medical Center Utrecht (UMCU) (3).
Statistical analysis
Categorical variables were presented in frequencies, and the difference between groups was examined in Fisher's exact test. The distribution of the data was assessed in histograms. The continuous variables were presented in medians (interquartile range, IQR), and the difference between groups was determined using the Wilcoxon rank sum test. The hazard ratios (HR) for mortality risks were calculated using Cox regression, and variables with a value of p < 0.1 were included in a multivariable analysis with age, gender, smoking, comorbidities, and underlying CTD. The prognostic relevance for mortality and the PPF criteria was examined in the time-dependent receiver operator characteristic (ROC) model and visualized in area under curve (AUC) over time. Risk factors for PPF were examined in logistic regression. Missing data were omitted from each regression analysis. A value of p < 0.05 was considered statistically significant. All statistical analyses were performed using R 4.0.3.
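Since the analyses were run in R, the survival pipeline can be sketched with the survival and timeROC packages; the data frame `d` and its column names are assumptions for illustration, not the study's actual variables.

```r
library(survival)
library(timeROC)

# Multivariable Cox model for mortality (candidate variables entered
# after univariable screening at p < 0.1, as described above).
cx <- coxph(Surv(time_yrs, died) ~ age + smoking + fibrosis_extent +
              dlco_baseline + regular_pft, data = d)
summary(cx)  # exp(coef) gives the adjusted hazard ratios

# Time-dependent ROC: AUC(t) for PPF status (0/1, assessed at 2 years)
# as a marker of subsequent mortality, evaluated over follow-up.
ts   <- seq(2.5, 10, by = 0.5)
troc <- timeROC(T = d$time_yrs, delta = d$died, marker = d$ppf_inbuild,
                cause = 1, times = ts)
plot(ts, troc$AUC, type = "l", xlab = "Years of follow-up", ylab = "AUC")
```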
Results
Progressive pulmonary fibrosis in the first 2 years was observed in 61 (27%) patients meeting the INBUILD criteria, 53 (23%) meeting the ATS/ERS/JRS/ALAT criteria, 136 (59%) meeting the simplified PF criteria, and 125 (54%) when using the simplified PF criteria with a threshold of ≥5% increase in the extent of fibrosis on HRCT. The prevalence of PPF in each CTD is shown in Supplementary Table S2. Diagnosis of SSc, azathioprine use, PVD, regular follow-up pulmonary function tests, NSIP pattern and ANA positivity were revealed as predictors of PPF by more than two of the criteria in univariable analysis; TNF inhibitor use was associated with reduced PPF risk. After multivariate adjustment, PVD and NSIP pattern remained significant predictors of PPF by more than two of the criteria (Supplementary Table S3). In RA patients, baseline HRCT with fibrotic NSIP pattern was associated with PPF meeting the ATS/ERS/JRS/ALAT criteria (OR 6.04, p = 0.012) and the INBUILD criteria (OR 7.60, p = 0.004). For other CTDs, no risk factors could be identified for PPF by more than two of the criteria.
None of the PPF criteria (assessed in the first 2 years) achieved a significant relation with mortality in Cox regression. The prognostic relevance did not differ between the simplified PF, INBUILD and ATS/ERS/JRS/ALAT criteria; the prognostic value improved for the simplified PF criteria when HRCT progression was defined as a ≥5% increase in fibrosis. The prognostic relevance of the PPF criteria for mortality risk over time in both cohorts is shown in Figure 2; the prognostic value of the PPF criteria increased during the first 3 years and reached a plateau thereafter in both cohorts.
FIGURE 1 Serial change in pulmonary function tests, including percentage of predicted forced vital capacity (FVC) (A) and hemoglobin-adjusted diffusing capacity of the lung for carbon monoxide (DLCO) (B).
Discussion
This study explored the characteristics of patients with early CTD-ILD and their prognostic correlation with PPF. Increased age, smoking, and increased extent of fibrosis were associated with higher mortality risk, while higher baseline DLCO and regular pulmonary function tests were associated with reduced mortality risk. The prognostic relevance with mortality did not differ between simplified PF criteria, INBUILD and ATS/ERS/JRS/ALAT 2022 criteria.
The risk factors associated with mortality in this cohort are in line with identified risk factors in previous studies. Age and smoking are overarching risk factors across diseases (20). Patients with early diagnosis and subsequently low extent of fibrosis on HRCT and better DLCO, have a larger window of opportunity to initiate treatment in order to decrease the risk of progression. In addition, a large proportion of patients in this study had low extent of fibrosis at baseline, in contrast to previous studies, including the INBUILD trial and the validation cohort, in which more patients had high extent of fibrosis (3,8). The correlation between mortality and PPF was also more prominent in patients with extensive lung fibrosis than in those with limited lung fibrosis in another SSc-ILD cohort (6).
In several studies, UIP pattern was observed more often in RA patients and was associated with mortality and DLCO decline (21, 22). In our study, RA patients were older and had UIP patterns more frequently than patients with other CTDs. However, this was not significantly associated with mortality. We did find an association with UIP pattern and mortality in the non-RA group. Similarly, in a recent RA-ILD study, UIP pattern was not associated with mortality or FVC decline at 2 years (23). A possible explanation is that treatment strategies in RA have improved tremendously in the last decades, whereas disease control in other underlying CTD diseases has proven more challenging. Moreover, not only UIP pattern was associated with predominant fibrosis; also, fibrotic NSIP and some other patterns could be linked to predominant fibrosis and were associated with increased risk for PPF. This finding is in line with the results of the validation cohort; predominantly fibrotic HRCT patterns revealed an increased risk for PPF (3,18). Patients with predominantly inflammatory HRCT may respond better to anti-inflammatory treatment than those with predominantly fibrotic HRCT and therefore reduce the risk of PPF.
There may be a different risk profile of PPF in each CTD, while baseline severity, including lung function and HRCT, seems to be an overarching risk. In the European Scleroderma Trials and Research (EUSTAR) database, a large registry of SSc patients in Europe, male gender, higher modified Rodnan skin score and reflux/dysphagia symptoms were associated with FVC decline over 5 years in patients with SSc-ILD (24). In patients with RA-ILD, low baseline FVC/DLCO, UIP pattern, and steroid-use (>10 mg/day) were associated with progressive lung function decline (25). A positive serum anti-MDA5 is associated with rapid progression in IIM patients, but distinct clinical course was observed in subgroups (26, 27).
In recent years, PPF has received increasing attention in trials, especially after the randomized trials of antifibrotic treatment. The natural history of PPF in ILD, including CTD-ILD, appears to be comparable with idiopathic pulmonary fibrosis (IPF) (28). Nevertheless, definitions of PPF vary across studies. The ATS/ERS/JRS/ALAT 2022 criteria were the first consensus of scientific societies but were based on data from IPF (7). As emphasized in the ATS/ERS/JRS/ALAT 2022 guideline, PPF should be utilized in prognostication instead of diagnosis. We examined the prognostic correlation of these PPF criteria in the time-dependent ROC model. The prognostic correlation with mortality was similar between the three PPF criteria and reached a plateau after 3 years in this cohort (predominant CTD: RA) and the validation cohort (predominant CTD: SSc); the AUC in the time-dependent ROC model was higher in the validation cohort than in this cohort. The strength of this study is that we validated the prognostication with two sets of real-world CTD-ILD data. The prognostic relevance was visualized in a time-dependent ROC model. Most patients were diagnosed early, with a low extent of fibrosis at baseline. However, the proportion of missing data was relatively high, which can be regarded as a limitation of this study (Supplementary Figure S1). As the St. Antonius Hospital is an ILD referral center, patients are often evaluated once for an expert opinion, after which follow-up takes place at local hospitals, which could largely explain the missing data at follow-up. In addition, patient-reported respiratory symptoms were not systematically scored in the medical records; therefore, we did not include this parameter in our analysis. In the validation cohort, 23 (15%) patients reported symptom progression from dyspnea on exertion to dyspnea at rest or oxygen requirement in the first 2 years. Because of the missing data at follow-up, the proportion of patients with PPF may be underestimated. Nonetheless, regular pulmonary function testing in the first 2 years was associated with a significantly better prognosis. A second limitation is that the reading of HRCT relies on experienced radiologists, so there may be variation in interobserver agreement, and radiological progression in most of the criteria is descriptive (3, 7-9, 29, 30). Artificial intelligence-aided quantitative HRCT evaluation could improve accurate detection of changes, although these techniques are not universally available yet (31,32). Since CTD-ILD is a heterogeneous manifestation, further research into biomarkers and artificial intelligence-aided HRCT analysis could support tailored clinical decision making.
FIGURE 2 The prognostic relevance of progressive pulmonary fibrosis (PPF) for mortality is shown in this time-dependent receiver operator characteristic (ROC) model. The figure demonstrates the area under the ROC curve (AUC) over the follow-up period in this cohort (A) and the validation cohort (B). The vertical line indicates the timepoint of 24 months when PPF was identified. A higher AUC reflects a better correlation of the criteria with prognosis. The PPF criteria, including the ATS/ERS/JRS/ALAT criteria (ATS/ERS), the INBUILD criteria (INBUILD), and the simplified progressive fibrosing criteria (SPF), did not substantially outcompete each other. The prognostic value in AUC improved for SPF when HRCT progression was defined as a ≥5% increase in fibrosis (SPF with 5% threshold) in the present cohort (A).
In conclusion, we identified risk factors for mortality and examined the prognostication of PPF in CTD-ILD patients. CTD-ILD is a rather heterogeneous disease and the current PPF criteria may not be universally applicable. Disease control of the underlying CTD, multidisciplinary evaluation and systematic assessment of respiratory symptoms, pulmonary function, and HRCT are instrumental to identify high-risk patients and tailor treatment strategies (33). Further research is needed to explore optimal use of PPF criteria in managing patients with CTD-ILD.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by Medical Research Ethics Committees United. The patients/participants provided their written informed consent to participate in this study.
Author contributions
Y-HC, JL, JG, and JS conceptualized this study. MK and AW retrieved the clinical data. HE and LL analyzed the pulmonary images. Y-HC, MV, PW, AJ, JL, JG, and JS interpreted the clinical data. Y-HC, MK, and PW performed the formal analysis. Y-HC wrote the original draft. AW performed the data management. All authors have critically reviewed and agreed on all versions of the article, the article submission, and taking responsibility for all aspects of the work.
Funding
This study was funded in part by a student grant from the government of Taiwan (Y-HC). | 2023-03-01T16:12:33.104Z | 2023-02-27T00:00:00.000 | {
"year": 2023,
"sha1": "7cf76c905122c130c9f63b9cf54cb7dae9a6488c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2023.1106560/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0e8d5b24c4b72951f0ec60d5ea996d58ef268fee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
72214307 | pes2o/s2orc | v3-fos-license | Detection of low birth weights in newborns by Foot length as proxy measure
Introduction: Birth weight is an important and sensitive parameter and a determinant factor regarding mortality and morbidity in neonates. However, weighing facilities may not be available for all home deliveries in remote and rural areas in developing countries, where an alternative parameter like foot length may be considered in place of birth weight. Methods: Foot length, birth weight and various other anthropological parameters were measured and compared in 316 low birth weight newborns out of 500 live newborns. A cutoff foot length was detected for different low birth weight groups and its sensitivity, specificity, and positive and negative predictive values were determined. Results: Cutoff foot lengths of 6.70 cm, 7.45 cm and 8.20 cm were identified for the corresponding birth weight groups of 1-1.499 kg, 1.5-1.999 kg and 2.0-2.500 kg. Sensitivity, specificity and positive predictive value for identifying newborns <1.499 kg were 91.91%, 86.54% and 71.84%; for newborns <1.99 kg they were 91.01%, 99.14% and 99.18%; and for newborns <2.5 kg they were 79.32%, 100% and 100%, respectively. Correlation coefficients between foot length and birth weight showed the highest correlation (r = 0.96). Conclusion: Foot length may be considered an alternate parameter to birth weight to detect low birth weight babies, especially in remote areas (where baby weighing machines are not available) and also in those conditions where the baby is less likely to be disturbed. Calipers for measuring foot length may be used by paramedical workers as an efficient tool in such places.
Introduction
Parameters of growth are the most sensitive indicators of the nutritional status of a population 1 . Birth weight is an important indicator of survival, future growth and overall development of the child. It is associated with socio-economic, clinical, racial, hereditary, personal and geographical factors 2 . Low birth weight is associated with high neonatal morbidity and mortality due to susceptibility to adverse environmental influences, predilection to infections and difficulties in maintaining adequate nutrition. The prevalence of low birth weight babies is 22.5% by National Family Health Survey-3; however, birth weight was reported only in 34.1% of cases of live births, which means that actual numbers might be even higher 3 . It is estimated that about 30% of babies born in India are low birth weight, and over 80% of all neonatal deaths in developing countries are among them. 4 In spite of this importance of birth weight, recording of birth weight has been a problem in developing countries including India. In India, 31% of all deliveries in rural areas, and overall 26% of deliveries in rural and urban India, are conducted by untrained functionaries 5 . According to a 2010-11 report, 23% of newborns were not weighed at birth, as the deliveries are conducted in homes where weighing of the baby is not feasible 5 ; however, this figure could be even higher, as no data are available on how many health centers in India have a baby weighing machine.
In fact, this is due to the non-availability or lack of facilities such as baby weighing machines 6 . An accurate weight record of babies is a sensitive index of their well-being, and the availability of a sturdy and reliable weighing machine fulfills a fundamental need. 4 Therefore, there arises a need for an alternative measurement for estimation of birth weight, which should be easy, simple and reliable in the hands of inexperienced staff and have a good correlation with it 7 . Foot length is one such parameter, which can be measured and implemented easily in such conditions, even in a sick baby.
Material and Methods
The study was a prospective observational study conducted in the Department of Pediatrics at the Surat Municipal Institute of Medical Education and Research (SMIMER), Surat. It was approved by the institutional ethical committee. Of 500 live newborns weighing from 1 to 3.5 kg delivered at SMIMER and municipal city health centers from October 2011 to October 2012, the 316 weighing between 1 and 2.5 kg were selected for the study. Newborns having congenital anomalies, dysmorphic features, vertebral, cranial or limb deformities, or intrauterine infections were excluded from the study. The selected newborns were thoroughly examined by a single investigator (to avoid any interpersonal error) within 48 hours of birth and underwent anthropological measurements. Newborns were weighed nude on an electronic weighing scale to the nearest 10 g. Digital sliding calipers (measuring range 0-150 mm, accuracy +/- 0.02 mm) were used for foot length. Foot length was measured from the posterior-most prominence of the foot to the tip of the longest toe (first/second) of the right and left foot with the calipers twice, and the mean of both feet was taken in the study. A flexible, non-stretchable fiber tape (measuring to the nearest 0.1 cm) was used for measuring head circumference, calf circumference and chest circumference, and an infantometer was used for measuring length. Data were analyzed using SPSS (version 16) software. Correlation between foot length and other parameters was analyzed by correlation and regression. ANOVA (Levene's and robust tests) was applied to find differences in mean foot length between the birth weight groups, and a linear regression equation was used to derive cutoff foot lengths for the various birth weight groups. Sensitivity, specificity, and positive and negative predictive values were calculated for each birth weight group from each cutoff foot length.
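The paper's analysis was run in SPSS; an equivalent sketch in R, using an assumed data frame `nb` with columns `foot_cm`, `bw_kg` and a three-level factor `group`, would be:

```r
library(car)  # leveneTest

cor(nb$foot_cm, nb$bw_kg)                # reported r = 0.96
leveneTest(foot_cm ~ group, data = nb)   # homogeneity of variance
oneway.test(foot_cm ~ group, data = nb)  # Welch's robust ANOVA
fit <- lm(foot_cm ~ bw_kg, data = nb)    # linear regression used for cutoffs
summary(fit)
```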
Results
Out of 316 low birth weight newborns, 172 (54%) were male and 144 (46%) were female. Table 2 shows the mean (along with 95% confidence interval) and standard deviation of each of the birth weight groups. ANOVA was applied to find differences in the means of the three birth weight groups. Levene's test of homogeneity of variance was significant (p < 0.05), suggesting that standard ANOVA results would be invalid. As Levene's test failed to demonstrate homogeneity, the robust test was used for equality of means, and its p value was significant (p < 0.05), confirming a difference in means between the groups.
There was also a positive linear correlation of foot length with all birth weight groups (p < 0.001), and from this a regression equation was obtained for deriving foot length from birth weight.
From this equation, cutoff foot lengths of 6.70 cm, 7.45 cm and 8.20 cm were identified for the corresponding birth weight groups of 1-1.499 kg, 1.5-1.999 kg and 2.0-2.500 kg. Table 3 shows the sensitivity, specificity, and positive and negative predictive values of these foot length cutoffs for the given birth weight groups. Sensitivity, specificity and positive predictive value for identifying newborns <1.499 kg were 91.91%, 86.54% and 71.84%; for newborns <1.99 kg they were 91.01%, 99.14% and 99.18%; and for newborns <2.5 kg they were 79.32%, 100% and 100%, respectively. Table 4 compares our study with Elizabeth et al; with regard to foot length, birth weight showed the highest correlation (r = 0.96) compared to the other parameters, followed by head circumference, chest circumference, calf circumference and length.
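The regression equation itself is not legible in this copy of the text, but the three reported cutoffs are consistent with a line of roughly foot length ≈ 4.45 + 1.5 × birth weight; this is our back-calculation, not a figure stated in the paper. The diagnostic metrics quoted above follow from a 2×2 table at each cutoff, e.g.:

```r
# Back-calculated line reproducing the reported cutoffs (an inference):
4.45 + 1.5 * c(1.5, 2.0, 2.5)  # 6.70, 7.45, 8.20 cm

# Sensitivity/specificity/PPV/NPV from a 2x2 table at a chosen cutoff;
# the counts below are placeholders, not the study's data.
diagnostics <- function(tp, fp, fn, tn) c(
  sensitivity = tp / (tp + fn),
  specificity = tn / (tn + fp),
  ppv         = tp / (tp + fp),
  npv         = tn / (tn + fn))

diagnostics(tp = 90, fp = 35, fn = 8, tn = 183)
```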
Discussion
Early identification of low birth weight is an important pre-requisite of any initiative to reduce mortality. However, identifying low birth weight newborns may be hampered by the lack of availability of weighing machines, fears over their cost and maintenance sustainability, and to some extent the reluctance of health volunteers to carry weighing machines, especially in developing countries 7 like India. Other alternative growth parameters are measurements of head circumference, chest, calf and thigh circumferences, body length, etc. These are simple and good alternatives but may require greater exposure of newborns to environmental variations (winter) during measurement and may involve more disturbance and handling of sick children. Foot length is one such alternative under the above conditions. Foot length can be measured with a simple stiff plastic or metal ruler or, more precisely, with digital sliding calipers, which give a more accurate and direct reading, are easy to handle and carry, and can be utilized by a health worker or volunteer after very simple training.
On comparing cutoff foot lengths with other studies, it was found that for the <1.5 kg newborn group our results were comparable with those of Hirve et al 10 11 ; in our study, sensitivity and specificity were 92% and 87%, respectively. In our study, foot length had the highest correlation with birth weight (r = 0.96), followed by head circumference (r = 0.88), chest circumference (r = 0.82), calf circumference (r = 0.76) and length (r = 0.65). Elizabeth et al 8 likewise found the highest correlation of foot length with birth weight (r = 0.97) and with head circumference (r = 0.88); however, foot length had a higher correlation with chest circumference in their study (r = 0.93), whereas in our study it was lower (r = 0.82).
Conclusion
Foot length is an alternative anthropometric parameter to birth weight and can be useful especially in remote areas with no baby weighing machines, and in conditions where the baby should not be exposed (as in winter) or disturbed (as in sickness). The digital calipers used to measure foot length are less costly and easy to carry and operate in the absence of baby weighing machines.
"year": 2014,
"sha1": "df9649c492eea5dee9703a76f5a066fdde29655b",
"oa_license": "CCBY",
"oa_url": "https://ssjournals.com/index.php/ijbr/article/download/940/936",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e4845a4b3619eb3d1228394172a75db22d949adf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119579615 | pes2o/s2orc | v3-fos-license | On weak Zariski decompositions and termination of flips
We prove that termination of lower dimensional flips for generalized klt pairs implies termination of flips for log canonical generalized pairs with a weak Zariski decomposition. Moreover, we prove that the existence of weak Zariski decompositions for pseudo-effective generalized klt pairs implies the existence of minimal models for such pairs.
Introduction
One of the main goals of the minimal model program is to show that given a $\mathbb{Q}$-factorial klt pair $(X,B)$ such that $K_X+B$ is pseudo-effective (resp. not pseudo-effective), there exists a finite sequence of divisorial contractions and flips $X \dashrightarrow X_1 \dashrightarrow X_2 \dashrightarrow \cdots \dashrightarrow X_n$ such that $(X_n, B_n)$ is a minimal model (resp. there is a Mori fiber space $X_n \to Y$, and in particular $-(K_{X_n}+B_n)$ is ample over $Y$), where $B_n$ is the strict transform of $B$ on $X_n$. We refer the reader to [KM98] for the details of the minimal model program. After [BCHM10], it is known that the above sequence of flips and divisorial contractions always exists, and the only remaining question is whether it terminates after finitely many steps. It is well known that any such sequence can have only finitely many divisorial contractions, and hence the main open question is whether there are no infinite sequences of flips. A flip $X \dashrightarrow X^+$ is a small birational map of $\mathbb{Q}$-factorial varieties, projective over a variety $W$, such that $\rho(X/W) = \rho(X^+/W) = 1$ and both $-(K_X+B)$ and $K_{X^+}+B^+$ are ample over $W$, where $B^+$ is the strict transform of $B$. As a consequence of the negativity lemma, it is easy to see that flips improve certain singularity invariants known as log discrepancies. More precisely, if $X \dashrightarrow X^+$ is a flip, then we have the inequality $a_E(X,B) \leq a_E(X^+,B^+)$, which is strict if and only if the center of $E$ is contained in the flipping locus, i.e. the exceptional locus of the flipping contraction $X \to W$. Shokurov has shown [Sho04] that certain natural conjectures concerning log discrepancies (namely the ascending chain condition for MLDs and the semicontinuity of MLDs) actually imply termination of flips. Unfortunately, these conjectures are very subtle and not well understood in dimension $\geq 3$. In [BCHM10] a different approach is introduced. Instead of trying to prove termination of arbitrary sequences of flips, the authors show termination of specific kinds of minimal model programs known as minimal model programs with scaling. This approach is successful whenever $K_X+B$ is big, or $B$ is big, or $K_X+B$ is not pseudo-effective. In particular, the existence of minimal models for klt pairs of log general type follows, as well as the existence of Mori fiber spaces for klt pairs $(X,B)$ such that $K_X+B$ is not pseudo-effective. This approach does not seem to shed any light on the termination of arbitrary sequences of flips.
In [Bir07], Birkar introduced a new philosophy to prove termination of flips for klt pairs such that $K_X+B$ is pseudo-effective. In this case one expects that $K_X+B \equiv G \geq 0$. Birkar shows that, assuming the ascending chain condition conjecture for log canonical thresholds and the termination of flips for klt pairs of dimension $\leq d-1$, flips terminate for any $d$-dimensional log canonical pair $(X,B)$ such that $K_X+B \equiv G \geq 0$. The ascending chain condition conjecture for lct's was proved by Hacon, McKernan and Xu in [HMX14], and later extended to the context of generalized pairs by Birkar and Zhang in [BZ16]. In [Sho09], Shokurov shows that termination of flips with scaling holds for pseudo-effective klt fourfolds, and in particular these pairs admit a minimal model and hence a Zariski decomposition. In [Mor18], the second author proves termination of pseudo-effective 4-fold flips by combining the results of [Bir07], [Sho09] and [BZ16]. Following this philosophy, in this article we prove that the existence of a weak Zariski decomposition for a generalized log canonical pair can be used to reduce termination of flips for such a pair to lower dimensional terminations. More precisely, we prove the following theorems: Theorem 1. Assume termination of flips for generalized klt pairs of dimension at most $n-1$. Let $(X/Z, B+M)$ be a generalized log canonical pair of dimension $n$ admitting a weak Zariski decomposition. Then any minimal model program for $K_X+B+M$ over $Z$ terminates.
Theorem 2. Assume the existence of weak Zariski decompositions for pseudo-effective generalized log canonical pairs of dimension at most $n$. If $(X/Z, B+M)$ is a pseudo-effective generalized log canonical pair of dimension $n$, then any good minimal model program for $(X/Z, B+M)$ terminates. See 1.12 for the definition of good minimal model program. In particular, any minimal model program with scaling of an ample divisor is good, so we obtain the following corollary.
Corollary 1. Assume the existence of weak Zariski decompositions for pseudo-effective generalized log canonical pairs of dimension at most $n$. If $(X/Z, B+M)$ is a pseudo-effective log canonical pair of dimension $n$, then $(X/Z, B+M)$ has a minimal model.
Finally, we remark that the existence of weak Zariski decompositions for pseudo-effective generalized log canonical pairs is expected to be implied by the existence of weak Zariski decompositions for pseudo-effective log canonical pairs. We hope to address this issue in a separate paper.
Preliminary results
1.1. Weak Zariski decomposition. Definition 1.1. Let $D$ be an $\mathbb{R}$-Cartier divisor on a normal variety $X/Z$. A weak Zariski decomposition for $D$ over $Z$ consists of a normal variety $X'$, a projective birational morphism $f\colon X' \to X$, and a numerical equivalence $f^*D \equiv_Z P' + N'$ such that the following properties hold: (1) $P'$ and $N'$ are $\mathbb{R}$-Cartier divisors, and (2) $P'$ is nef over $Z$ and $N'$ is an effective $\mathbb{R}$-divisor. We will say that a generalized pair $(X/Z, B+M)$ has a weak Zariski decomposition if the $\mathbb{R}$-Cartier divisor $K_X+B+M$ has a weak Zariski decomposition over $Z$. In what follows, we may write WZD instead of weak Zariski decomposition in order to shorten the notation.
Remark 1.2. Consider an $\mathbb{R}$-Cartier divisor $D$ on a projective normal variety $X$. If there exists a projective $D$-non-positive birational contraction $\pi\colon X \dashrightarrow X_1$ such that the divisorial push-forward $\pi_*D$ is a nef $\mathbb{R}$-Cartier divisor, then $D$ has a weak Zariski decomposition. Indeed, we consider a common resolution of singularities with projective birational morphisms $f\colon X' \to X$ and $f_1\colon X' \to X_1$, and we write $f^*D = f_1^*(\pi_*D) + E$, where $f_1^*(\pi_*D)$ is nef and $E$ is an effective $\mathbb{R}$-divisor. In particular, a pair $(X, \Delta)$ admitting a minimal model has a weak Zariski decomposition. Therefore, conjecturally, every pseudo-effective log canonical pair has a WZD.
Remark 1.3. In [Zar62], Zariski proved that any effective divisor $D$ on a smooth projective surface $X$ can be decomposed as $D = P + N$, where $P$ and $N$ are $\mathbb{Q}$-divisors, $P$ is nef, $N$ is effective, the intersection matrix of the components of $N$ is negative definite, and $P \cdot C = 0$ for every irreducible component $C$ of $N$. In [Fuj79], Fujita generalized the above decomposition to the context of pseudo-effective $\mathbb{R}$-divisors.
There have been many attempts to generalize the above decomposition for higher dimensional varieties. For instance, the Fujita-Zariski decomposition [Fuj86] and the CKM-Zariski decomposition (see, e.g., [Pro03]). In [Bir12a], assuming the minimal model program for dlt pairs in dimension d − 1, the author proves that the existence of a WZD for a log canonical pair of dimension d is equivalent to the existence of all of the above decompositions.
Remark 1.4. In [Les14], the author constructs a pseudo-effective divisor on the blow-up of P 3 at nine very general points, which lies in the closed movable cone and has negative intersections with a set of curves whose union is Zariski dense. Hence, this pseudo-effective divisor does not admit a weak Zariski decomposition.
1.2. Generalized pairs. In this subsection, we recall the language of generalized pairs. Definition 1.5. A generalized pair is a triple (X/Z, B + M ), such that the following conditions hold: (1) X is a quasi-projective normal algebraic variety, (2) X → Z is a projective morphism of normal varieties, (3) M is the push-forward of a nef R-divisor on a higher birational model of X over Z, (4) B is an effective R-divisor, and (5) K X + B + M is an R-Cartier divisor. More precisely, there exists a projective birational morphism f : X ′ → X from a normal quasi-projective variety X ′ and a nef R-Cartier R-divisor M ′ such that M = f * M ′ . We can define B ′ via the equation $K_{X'} + B' + M' = f^*(K_X + B + M)$. We will say that B is the boundary part and M is the nef part of the generalized pair. Observe that M ′ defines a nef b-Cartier R-divisor in the sense of [Cor07, Definition 1.7.3]. We will say that this is the nef b-divisor associated to the generalized pair.
Definition 1.6. Given a projective birational morphism g : X ′′ → X which dominates X ′ → X, we can write $K_{X''} + B'' + M'' = g^*(K_X + B + M)$, where M ′′ is the pull-back of M ′ to X ′′ . Given a prime divisor E on X ′′ , we define the log discrepancy of (X/Z, B + M ) at E as $a_E(X/Z, B + M) = 1 - \operatorname{coeff}_E(B'')$. We say that (X/Z, B + M ) is Kawamata log terminal or klt if the log discrepancy of (X/Z, B + M ) at any prime divisor over X is positive, and we say that (X/Z, B + M ) is log canonical or lc if the log discrepancy of (X/Z, B + M ) at any prime divisor over X is non-negative. By Hironaka's resolution of singularities we may assume that X ′′ is smooth and B ′′ has simple normal crossing support. In this case, (X/Z, B + M ) is klt (resp. lc) iff coeff(B ′′ ) < 1 (resp. coeff(B ′′ ) ≤ 1).
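As a sanity check of Definition 1.6, here is the standard computation for the blow-up of a smooth point (with B = 0 and M = 0, so the generalized notions reduce to the classical ones):

```latex
% g : X'' \to X the blow-up of a smooth point on an n-dimensional X,
% with exceptional divisor E. Then
K_{X''} = g^* K_X + (n-1)E, \quad\text{so}\quad
\operatorname{coeff}_E(B'') = -(n-1),
\quad\text{and}\quad a_E(X/Z, 0) = 1 + (n-1) = n > 0,
% consistent with smooth varieties being klt.
```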
Definition 1.7. Let (X, B + M ) be a generalized pair and (X ′′ , B ′′ + M ′′ ) any log resolution as above. A prime divisor E of X ′′ such that coeff E (B ′′ ) ≥ 1 is called a generalized non-klt place of the generalized pair (X, B + M ). Moreover, if coeff E (B ′′ ) = 1 (resp. coeff E (B ′′ ) > 1) then we may call it a generalized log canonical place (resp. generalized non-lc place) of the generalized pair on X ′′ . The image of a generalized non-klt place (resp. generalized log canonical place) on X is called a generalized non-klt center (resp. generalized log canonical center) of the generalized pair. A generalized non-klt center of a generalized pair (X, B + M ) is said to be minimal if it is minimal with respect to inclusion.
Definition 1.8. Let (X/Z, B + M ) be a generalized log canonical pair. A weak contraction φ : X → W for the generalized pair is a projective birational contraction over Z, such that −(K X + B + M ) is nef over W . A quasi-flip of φ is a projective birational map π : X ⇢ X + with a projective birational contraction φ + : X + → W over Z, such that the following conditions hold: (1) (X + /Z, B + + M + ) is a generalized log canonical pair, (2) K X + + B + + M + is nef over W , (3) the equality $\phi_*(K_X + B + M) = \phi^+_*(K_{X^+} + B^+ + M^+)$ of Weil R-divisors on W holds, and (4) the nef parts M and M + are the trace of a common nef b-Cartier b-divisor.
As usual, the morphism φ (resp. φ + ) is called the flipping contraction (resp. flipped contraction). We may call (X/Z, B + M ) (resp. (X + /Z, B + + M + )) the flipping generalized pair (resp. flipped generalized pair) when the flip is clear from the context. Definition 1.9. A quasi-flip π is said to be ample if −(K X + B + M ) and K X + + B + + M + are ample over W , and at most one of the morphisms φ and φ + is the identity. Observe that if φ + is the identity, then φ is a divisorial contraction, and vice versa. In these cases, the quasi-flip will be called a weak divisorial contraction and a weak divisorial extraction, respectively. The quasi-flip π is said to be small if both φ and φ + are small morphisms. A flip is an ample small quasi-flip of relative Picard rank one. A divisorial contraction (resp. divisorial extraction) is a weak divisorial contraction (resp. weak divisorial extraction) of relative Picard rank one.
Definition 1.10. A sequence of quasi-flips for a generalized log canonical pair (X, B + M ) is said to be with a common b-nef divisor if all the nef parts M i in the sequence of quasi-flips are the trace of a common b-nef b-Cartier R-divisor. A sequence of quasi-flips for a generalized log canonical pair (X, B + M ) is said to be under a set satisfying the DCC if the coefficients of all the boundary parts B i in the sequence of quasi-flips belong to a fixed set satisfying the DCC. Moreover, we say that the sequence is with a fixed boundary divisor if the boundary divisor on the flipped pair is the divisorial push-forward of the boundary divisor on the flipping pair.
Definition 1.11. A minimal model program for K X + B + M over Z is a sequence of flips and divisorial contractions for K X + B + M over Z. A weak minimal model program for K X + B + M over Z is a sequence of ample quasi-flips for K X + B + M over Z.
Definition 1.12. We say that a weak minimal model program is good if every irreducible component of each of its flipped loci is contained in the diminished base locus of the corresponding flipped generalized pair. Remark 1.13. An R-Cartier divisor D is nef if and only if its diminished base locus $\mathrm{Bs}_-(D)$ is empty. By the negativity lemma, every irreducible component of the flipped locus is an irreducible component of the diminished base locus of the flipped generalized pair. Therefore, conjecturally every minimal model program is good. Moreover, it is known that a minimal model program with scaling of an ample divisor is good.
Proposition 1.14. Given a quasi-flip π : X X + for generalized log canonical pairs (X/Z, B + M ) and (X + /Z, B + + M + ) over Z, with flipping contraction φ : X → W , and a prime divisor E over X, we have that a E (X/Z, B + M ) ≤ a E (X + /Z, B + + M + ) and the inequality is strict if and only if the center of E on X is contained in the flipping locus.
Definition 1.15. Let (X/Z, B + M ) be a generalized log canonical pair, let N be an effective R-divisor on X, and let P be the push-forward of a nef R-Cartier divisor P ′ on X ′ , such that N + P is R-Cartier. The log canonical threshold of N + P with respect to (X/Z, B + M ) is $$\sup\{\lambda \in \mathbb{R}_{\ge 0} \mid (X/Z, (B + \lambda N) + (M + \lambda P)) \text{ is generalized log canonical}\},$$ where the above generalized pair has boundary part B + λN and nef part M + λP. If (X, B + M ) is generalized log canonical, then the above real number is non-negative. Observe that the above threshold may be infinite, for instance if P ′ = f * P and N = 0. Theorem 1.16 ([BZ16]). The set of such log canonical thresholds, for boundary and nef parts with coefficients in fixed sets satisfying the DCC, satisfies the ascending chain condition. Here, we assume that N + P is R-Cartier so that the definition of log canonical threshold makes sense. The proof relies on [HMX14], where this result is proved in the case where the nef parts are trivial. In [BZ16], the authors prove the statement by induction on the number of non-trivial coefficients of M ′ and N ′ .
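To fix ideas, consider the classical special case M = P = 0, where the threshold above reduces to the usual log canonical threshold (a standard example, not taken from this paper):

```latex
% X = \mathbb{A}^2, B = 0, N = \{y^2 = x^3\} the cuspidal cubic.
% The weighted blow-up f with weights (2,3) has exceptional divisor E with
K_{X'} = f^* K_X + (2+3-1)E, \qquad f^* N = \widetilde{N} + 6E,
% hence the log discrepancy of E with respect to (X, \lambda N) is
a_E(X, \lambda N) = (2+3) - 6\lambda \;\ge\; 0 \iff \lambda \le \tfrac{5}{6},
% and indeed \operatorname{lct}(X; N) = 5/6.
```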
Remark 1.17. If M = 0, then we will drop the word "generalized" from the definition. In this case, we are in the usual setting of log pairs as in [KM98,HK10].
1.3. Log canonical threshold with respect to weak Zariski decompositions. In this subsection, we introduce an invariant for generalized log canonical pairs admitting a weak Zariski decomposition.
Definition 1.18. Let (X/Z, B + M ) be a Q-factorial generalized log canonical pair with a weak Zariski decomposition given by the projective birational morphism f : X ′ → X over Z and the numerical equivalence $f^*(K_X + B + M) \equiv_Z P' + N'$. Setting P = f * P ′ and N = f * N ′ , we define lct WZD(f,N +P ) (X/Z, B + M ) to be the log canonical threshold of N + P with respect to (X/Z, B + M ), as in Definition 1.15. We call this invariant the log canonical threshold of the generalized pair with respect to the weak Zariski decomposition or just the lct with respect to the WZD. When the weak Zariski decomposition is clear from the context, we will just write lct WZD instead of lct WZD(f,N +P ) .
Remark 1.19. The generalized log canonical threshold with respect to the weak Zariski decomposition depends on the chosen WZD and not only on the given generalized pair. For instance, every effective divisor linearly equivalent to K X + B + M gives a different weak Zariski decomposition, and different choices of effective divisors give different log canonical thresholds. The above invariant is uniquely determined by the generalized pair if we choose a Nakayama decomposition (see, e.g., [Nak04]). However, the existence of a WZD is a weaker assumption (see, e.g., [Bir12a]).
Lemma 1.20. Let (X/Z, B + M ) be a Q-factorial generalized log canonical pair with a weak Zariski decomposition. The lct with respect to the WZD is finite unless K X + B + M is nef over Z.
Proof. Without loss of generality we may assume that we have a projective birational morphism f : X ′ → X such that both nef b-Cartier divisors P ′ and M ′ descend to X ′ . If N ′ is a non-trivial effective divisor, then the above log canonical threshold is finite, so we may assume it is trivial. Since X is Q-factorial, by the negativity lemma we can write f * P = P ′ + E where E is an effective divisor. If E is non-trivial, the above log canonical threshold is again finite. Otherwise, we have f * P = P ′ which implies that P is a nef divisor over Z, and so we conclude that K X + B + M ≡ Z P is nef over Z as well.
Lemma 1.21. The lct with respect to the WZD does not change if we replace X ′ by a higher birational model.
Proof. The generalized log canonical threshold only depends on the nef b-Cartier divisor P ′ and the effective divisor N = f * N ′ , both of which are invariant under taking higher birational models of X ′ . Lemma 1.22. Let (X/Z, B + M ) be a Q-factorial generalized log canonical pair with a weak Zariski decomposition f : X ′ → X such that f * (K X + B + M ) ≡ Z P ′ + N ′ where P ′ is nef over Z and N ′ ≥ 0. If π : X ⇢ X 1 is a quasi-flip that extracts no divisors, then (X 1 /Z, B 1 + M 1 ) is a Q-factorial generalized log canonical pair with a compatible weak Zariski decomposition, where B 1 = π * B and M 1 = π * M .
Proof. We may assume that f 1 : X ′ → X 1 is a morphism. The induced decomposition on X 1 then follows from the negativity lemma, as in the proof of Lemma 1.23 below. Lemma 1.23. Let (X/Z, B + M ) be a Q-factorial generalized log canonical pair with a weak Zariski decomposition, and let π i : X i ⇢ X i+1 be a sequence of small ample quasi-flips for K X + B + M over Z. Then, the lct of the generalized pairs (X i /Z, B i + M i ) with respect to the WZD induced by Lemma 1.22 forms a non-decreasing sequence of positive real numbers.
Proof. Since π i is a small ample quasi-flip over Z, we know that K X i + B i + M i is not nef over Z. Hence, by Lemma 1.20, we conclude that the lct with respect to any WZD of K Xi + B i + M i over Z is finite. It suffices to prove the statement for a single small ample quasi-flip π : X ⇢ X + over Z, of the Q-factorial generalized log canonical pair (X/Z, B + M ). We will denote by (X + /Z, B + + M + ) the flipped generalized log canonical pair. Consider two projective birational morphisms f : X ′ → X and f + : X ′ → X + over Z, such that both nef b-Cartier divisors P ′ and M ′ descend to X ′ . We will denote by f * (K X + B + M ) ≡ Z P ′ + N ′ the induced weak Zariski decomposition for K X + B + M on X ′ . By the negativity lemma we have $f^*(K_X + B + M) \ge (f^+)^*(K_{X^+} + B^+ + M^+)$, so that $(f^+)^*(K_{X^+} + B^+ + M^+) \equiv_Z P' + N'_+$ for some effective divisor $N'_+ \le N'$. Hence, we have an induced Zariski decomposition for K X + + B + + M + /Z and we will denote P + = f + * P ′ and N + = f + * N ′ + . Without loss of generality we may assume that X ′ is a log resolution of both generalized pairs. By Lemma 1.21, this assumption does not change the lct with respect to the WZD. Therefore, for every λ > 0 we have that $(f^+)^*(K_{X^+} + B^+ + M^+ + \lambda(P^+ + N^+)) \le f^*(K_X + B + M + \lambda(P + N))$, concluding the inequality between log canonical thresholds. Definition 1.25. We say that the pair (X/Z, B) is divisorially log terminal or dlt if the coefficients of B are less than or equal to one, and there is a log resolution g : X ′′ → X over Z, such that a E (X/Z, B) > 0 for all g-exceptional prime divisors E on X ′′ . We say that (X/Z, B + M ) is generalized divisorially log terminal or generalized dlt if (X/Z, B) is dlt and if every generalized non-klt center of (X/Z, B + M ) is a non-klt center of (X/Z, B).
is a divisor with simple normal crossing support, and for every prime divisor E with center in Y 0 we have $a_E(X/Z, B + M) > 0$. Definition 1.27. Let (X/Z, B + M ) be a generalized log canonical pair. Let h : Y → X be a projective birational morphism of normal varieties over Z. We may assume that the given projective birational morphism f : X ′ → X factors through h. Then, we define B Y and M Y to be the push-forwards of B ′ and M ′ on Y , respectively. Thus, we can write $K_Y + B_Y + M_Y = h^*(K_X + B + M)$. Proof. Assume that the morphism f : X ′ → X over Z gives a log resolution of the generalized pair. We may assume that f is obtained by blowing up loci of codimension at least two, so that there exists an f -exceptional divisor C ≥ 0 such that −C is f -ample.
We define ∆ = ⌊B⌋ and T = B − ∆, and as usual we decompose the relevant divisors as follows: E + is supported on the sum of the divisors with generalized log discrepancy zero, E 0 is supported on the sum of the f -exceptional divisors with generalized log discrepancy in (0, 1], and E − is supported on the sum of the f -exceptional divisors with generalized log discrepancy > 1. We may assume that the support of E 0 contains the f -exceptional divisors with generalized log discrepancy equal to 1.
We consider a sufficiently ample divisor H on X. For every ǫ 1 , ǫ 2 , ǫ 3 ∈ R >0 , we can choose ǫ 1 sufficiently small such that both −ǫ 2 (−C + f * H) + M ′ and ǫ 2 (ǫ 1 E + − C + f * H) + M are ample, so they are Q-linearly equivalent to effective divisors H 1 (ǫ 2 ) and H 2 (ǫ 2 ) with coefficients in (0, 1) such that B ′ + H 1 (ǫ 2 ) + H 2 (ǫ 2 ) has simple normal crossing support. If ǫ 3 is small enough, the resulting pair is klt, so by [BCHM10] we can run a minimal model program π : X ′ ⇢ Y of the above pair with respect to X, which terminates with a minimal model h : Y → X. The above minimal model program is also a minimal model program for the pair (X ′ , ∆ ′ + E + + (1 + ǫ 3 )E 0 + H 1 (ǫ 2 )), so the minimal model is dlt.
Observe that the strict transform on Y of the Q-divisor above is h-anti-nef and its push-forward on X is trivial. By the negativity lemma we conclude that the push-forward on Y of the above divisor must be effective. Then, if we take 0 < ǫ 2 ≪ ǫ 3 ≪ 1, the irreducible divisors on the support of E 0 and E − are contracted in the minimal model program π : X ′ ⇢ Y . Thus, the generalized pair (Y /Z, B Y + M Y ) is the desired Q-factorial generalized dlt modification. The following lemma is proved in a more general setting in [BZ16, Section 4]. Lemma 1.29. Let (Y /Z, B Y + M Y ) be a Q-factorial generalized dlt pair. Let A be a general effective ample divisor on Y over Z; then we can run a minimal model program for the generalized pair with scaling of A over Z.
1.5. Generalized dlt adjunction. In this subsection, we recall the construction and properties of generalized divisorial adjunction in [BZ16] and introduce a generalized dlt adjunction formula.
Definition 1.30. Let (X/Z, B + M ) be a generalized log canonical pair, assume that S is the normalization of a component of ⌊B⌋ and S ′ its birational transform on X ′ . Replacing the morphism f : X ′ → X with a higher birational model, we may assume that f is a log resolution for the generalized log canonical pair (X, B + M ). Then, we can write $K_{S'} + B_{S'} + M_{S'} = (K_{X'} + B' + M')|_{S'}$, and define B S and M S as the push-forwards of B S ′ and M S ′ on S, so that $K_S + B_S + M_S = (K_X + B + M)|_S$. Proof. This is proved in [BZ16, Proposition 4.9].
Lemma 1.33. Let Λ be a set of nonnegative real numbers satisfying the DCC and d ∈ Z ≥1 . Then there is a set of nonnegative real numbers Θ satisfying the DCC, which only depends on d and Λ, such that if (X/Z, B + M ) is a generalized dlt pair, then the coefficients of B V belong to Θ and we can write the nef part as a sum $\sum_i \mu_i M_{V,i}$, where the M V,i are Cartier divisors and µ i ∈ Λ. Proof. We proceed by induction on the codimension of the log canonical center. If the log canonical center has codimension one, then this is Lemma 1.32. If the log canonical center V has higher codimension, by Lemma 1.26 we know that V is contained in some divisor S which appears with coefficient one in B. Therefore, by Lemma 1.32 we can do a divisorial generalized adjunction to S. By [HK10, Theorem 3.24], the generalized pair (S/Z, B S + M S ) is dlt and V is a non-klt center of such generalized pair. Hence, by the induction hypothesis on the codimension, we can write an adjunction formula where Ω is the set of Lemma 1.32.
Lemma 1.34. Let $Y_0 \dashrightarrow Y_1 \dashrightarrow Y_2 \dashrightarrow \cdots$ be a minimal model program which is an isomorphism at the generic point of a log canonical center V of (Y /Z, B Y + M Y ). Then, the induced sequence of birational maps (see §1.5) is a sequence of ample quasi-flips or identities for the generalized dlt pair (V /Z, B V + M V ). Proof. This is proved in [Mor18, Proposition 4.3] for the divisorial generalized adjunction. The general case follows by induction on the codimension of the log canonical center. Proof. Suppose that π : X ⇢ X + is a small ample quasi-flip so that we have generalized klt pairs (X, B + M ) and (X + , B + + M + ) and projective morphisms φ : X → W and φ + : X + → W over Z such that −(K X + B + M ) and K X + + B + + M + are ample over W and B + = π * B. We now run a (K X + B + M )-minimal model program with scaling over W , which terminates by [BZ16, Lemma 4.4]. The output of this minimal model program is a good minimal model (X ′ , B ′ + M ′ ) for K X + B + M over W ; it has a projective birational morphism π ′ : X ′ → X + . Since flips do not change the relative Picard rank over W , we conclude that both varieties X ′ and X have the same Picard rank over W , which means that π ′ is a small morphism between Q-factorial varieties, so it must be an isomorphism.
The following lemma is a version of Fujino's special termination for dlt pairs in the context of generalized pairs (see, e.g., [Fuj07]). For the second claim, since the pair (V /Z, B V + M V ) is generalized dlt, from the inclusion it follows that the induced minimal model program for (V /Z, B V + M V ) is good.
Proof of Theorem 1. Assume termination of flips for generalized klt pairs of dimension at most n − 1. Let (X/Z, B + M ) be a generalized log canonical pair of dimension n admitting a weak Zariski decomposition. We proceed by contradiction. Let $X = X_0 \dashrightarrow X_1 \dashrightarrow X_2 \dashrightarrow \cdots$ be an infinite minimal model program for (X/Z, B + M ).
Step 1. We reduce to the Q-factorial dlt case.
Consider the ample quasi-flip π 1 : X ⇢ X 1 with flipping contraction φ : X → W . By Lemma 1.28, we have a Q-factorial dlt modification (Y /Z, B Y + M Y ) of (X/Z, B + M ). By Lemma 1.29, we can run a minimal model program for the Q-factorial generalized dlt pair (Y /Z, B Y +M Y ) with scaling of a general ample divisor over W . By Lemma 2.5 and the induction hypothesis, we may assume that the sequence of flips is eventually disjoint from the generalized non-klt locus of the generalized pair. However, in this case we obtain a minimal model program with scaling for a quasi-projective generalized klt pair which is big over the base. This terminates by [BZ16, Lemma 4.4]. Thus, the above minimal model program terminates with a minimal model (Y 1 /Z, B Y1 + M Y1 ) over W which is a generalized dlt modification of (X 1 /Z, B 1 + M 1 ), and (X 1 /Z, B 1 + M 1 ) is its generalized log canonical model over W .
Proceeding analogously with the other steps of the minimal model program, we obtain an infinite minimal model program for Q-factorial generalized dlt pairs (Y i /Z, B Yi + M Yi ). We denote by P Yi and N Yi the push-forward of the nef part and effective part of the weak Zariski decomposition induced by Lemma 1.22 on each generalized pair (Y i /Z, B Yi + M Yi ). Moreover, we denote by λ i the log canonical threshold of the Q-factorial generalized dlt pair (Y i /Z, B Yi + M Yi ) with respect to P Yi + N Yi .
Recall from Lemma 1.23 that the λ i form a non-decreasing sequence of non-negative real numbers.
Step 2. We may assume that the non-decreasing sequence λ i is eventually constant and equal to a positive real number λ > 0, and that the sets of non-klt centers of the Q-factorial generalized log canonical pairs (Y i /Z, B Yi + M Yi + λ(P Yi + N Yi )) are birational for all i ≫ 0.
By the ACC for generalized log canonical thresholds [BZ16, Theorem 1.5] we conclude that after finitely many steps the sequence λ i must stabilize to a nonnegative real number λ. Moreover, by the monotonicity property of generalized log discrepancies (Proposition 1.14), we conclude that after finitely many steps of the minimal model program, the generic point of any of the (finitely many) generalized non-klt centers is not contained in the flipping locus. By applying Step 1 again, we may assume that the generalized pairs (Y i /Z, B Yi + M Yi + λ(P Yi + N Yi )) are indeed Q-factorial generalized dlt.
Suppose that we are given λ ′ ≥ λ and divisors 0 ≤ B ′ Yi ≤ B Yi and 0 ≤ N ′ Yi ≤ N Yi as above satisfying (1) and (2). Notice that this is the case for B ′ Yi = B Yi and U i = Y i . Suppose that (3) is not satisfied; then the flipping loci are eventually disjoint from the non-klt locus, i.e., from the support of (B ′ Yi + λ ′ N ′ Yi ) =1 . Notice that as the flipping loci are disjoint from (B ′ Yi + λN ′ Yi ) =1 , they are contained in U i . Passing to appropriate dlt models as in Step 1, we may assume that we have a sequence of steps of the minimal model program for dlt pairs (Y i , B ′′ Yi + M Yi + λ ′′ (N ′′ Yi + P Yi )).
Similar to the proof of Step 2 of Theorem 1.
Step 3. We may assume that there exist λ ′ ≥ λ and divisors 0 ≤ B ′ Yi ≤ B Yi and 0 ≤ N ′ Yi ≤ N Yi such that the following conditions hold: there exists an open subset U i ⊂ Y i containing all the flipping loci, and (3) there is a stratum of (B ′ Yi + λ ′ N ′ Yi ) =1 that is not contained in any flipping locus but intersects infinitely many flipping loci.
Similar to the proof of Step 3 of Theorem 1.
Step 4. We prove that a minimal model program as in Step 3 terminates.
Observe that we have a good minimal model program for a Q-factorial generalized dlt pair and a log canonical center which is intersected non-trivially by infinitely many flips. By Lemma 2.5, we obtain an infinite good minimal model program for a generalized klt pair of dimension at most n − 1, leading to a contradiction.
"year": 2018,
"sha1": "4a432b47c2f49c0a6e36644df190498671aae128",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1805.01600",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4a432b47c2f49c0a6e36644df190498671aae128",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Intestinal Parasite Infection and Its Association with Undernutrition among Early Adolescents in Hawassa University Technology Village, Southern Ethiopia
Background. Different studies have presented conflicting findings on the association between intestinal parasite infections (IPIs) and undernutrition among early adolescents in Ethiopia. This study was aimed at assessing intestinal parasite infection and its association with undernutrition among early adolescents in four selected districts of the Sidama region. Method. An institution-based cross-sectional study was conducted in October 2020 among 792 early adolescents. Multistage sampling was applied to select 16 primary schools. Simple random sampling was applied to select study participants. Trained data collectors administered questionnaires. Stool samples were collected and analyzed. Anthropometric measurements were taken and indices were calculated using AnthroPlus software. Data were entered into and analyzed by SPSS version 25 software. The association between IPI and undernutrition was measured using multivariable analysis. The outputs are presented using adjusted odds ratios (AORs) with 95% confidence intervals (CIs). Result. The prevalence of IPI, thinness, and stunting was 32% (95% CI: 28.7%, 35.3%), 17.5% (95% CI: 14.8%, 20.2%), and 21.5% (95% CI: 18.6%, 24.4%), respectively. Higher odds of IPIs were observed among adolescents who were stunted (AOR = 3.61; 95% CI: 2.44–5.33), those who were thin (AOR = 3.07; 95% CI: 2.02–4.66), those who did not wash their hands after toilet visits (AOR = 1.89; 95% CI: 1.35–2.66), those who ate raw meat (AOR = 1.50; 95% CI: 1.03–2.14), and those whose family did not own a toilet (AOR = 1.71; 95% CI: 1.18–2.46). Conclusion. The prevalence of IPI, thinness, and stunting was high and of public health significance in the study area. IPIs were associated with stunting, thinness, lack of toilets, not washing hands after a toilet visit, and eating raw meat. Strengthening nutrition interventions, deworming programs, and health education on personal and environmental hygiene and sanitation is recommended.
Introduction
Our world is home to 1.2 billion adolescents, of whom 90% live in low- and middle-income countries [1]. An adolescent is defined as a person aged 10–19 years, a period of gradual transition from childhood to adulthood. Early adolescence is the first stage and occurs from ages 10 to 14 years [2]. Adolescence is a critical period of physical growth and development, during which 15–20% of adult height, up to 60% of skeletal mass, and 50% of adult body weight are attained [3]. It is the second window of opportunity in terms of nutritional status [4]. It is a time of increased nutritional needs, when lifelong health and nutrition behaviors are formed [5].
Parasitic infections caused by intestinal helminths and protozoans are among the most prevalent infections worldwide, carrying a high burden of morbidity and mortality [6]. Globally, 2 billion people, mostly children, were infected by soil-transmitted helminths (STHs) and schistosomiasis, of whom more than 300 million suffer from associated severe morbidity [7]. About 550 million primary school adolescents live in areas where these parasites are extensively transmitted. STHs are widely distributed all over the world, with the greatest numbers occurring in Sub-Saharan Africa [7].
People living in the least developed countries are most vulnerable to intestinal parasitic infections mainly due to poverty and malnutrition [4]. In the case of Sub-Saharan Africa, almost half of the primary school early adolescents were infected with one or more intestinal parasitic worms [8]. Different studies conducted in Ethiopia showed that intestinal parasitic infections were higher among school-age children and early adolescents due to their habits of playing or the handling of infested soil, eating with soiled hands, unhygienic toilet practices, drinking, and eating of contaminated water and foods [9][10][11].
The Federal Ministry of Health of Ethiopia started the health extension program in 2004 and trained and assigned extension workers to almost every village in rural as well as urban areas to create awareness of prevention methods for intestinal parasites and other infections [12]. Despite these efforts, due to climatic conditions, poverty, poor personal hygiene, poor environmental sanitation, and lack of safe and clean water, IPIs have remained a major health burden of the country, especially among primary school early adolescents [13]. There is also variation in the prevalence rate of IPI between rural and urban areas. The poorest people, living in rural areas and in urban slums, are the most affected by IPIs [14].
Intestinal helminth and protozoan infections impact host nutrition through a number of mechanisms that may have additive or multiplicative effects, especially in school children and early adolescents. They cause and/or aggravate undernutrition through worm-induced gastrointestinal tract pathophysiology and reduced food intake, chronic blood loss, and intestinal inflammation, which disturb the absorption of nutrients from the gut [15].
In early adolescents, undernutrition caused by helminth infections and low food intake has been a major public health problem but largely ignored as a target of public health and nutrition programs [16]. In addition to IPIs, in Ethiopia, the inequity between male and female adolescents (due to cultural influence, females are nutritionally vulnerable, consuming fewer nutrients than their fair share relative to males) increases the risk of poor nutrition and health in girls [17]. The actual number of stunted and underweight adolescents has risen in Ethiopia due to rapid population growth [18].
Childhood undernutrition may reduce early adolescent immune function, impairing the body's resistance to infectious diseases and increasing the risk of school absenteeism and drop-out rates [19]. Its long-term consequences are associated with impaired cognitive development and poor school achievement, growth retardation, reduced economic productivity, and poor reproductive health outcomes in females. It also increases the risk for nutritionrelated chronic diseases in adulthood age [20].
Several studies conducted in Ethiopia assessed the prevalence of IPIs in adolescents but did not examine its association with undernutrition [11,[21][22][23][24][25][26]. There is also limited evidence about the magnitude of undernutrition among early adolescents in the study area. Therefore, this study was carried out to identify the recent burden of intestinal parasitic infection and its association with undernutrition among primary school early adolescents in selected districts of Hawassa University Technology Village in Sidama National Regional State.
The Study Area.
This study was conducted in 16 primary schools found in four randomly selected districts out of the seven districts located in Hawassa University Technology Village, Sidama National Regional State. Boricha, Loka Abaya, Daara, and Bona were the selected districts, located at 311 km, 342 km, 260 km, and 385 km south of Addis Ababa, respectively, the capital of Ethiopia. The selected districts have 192 primary schools and 233,228 primary school children. The physical health service coverage of the village was 100%. The village has two general and two primary level hospitals. Agriculture is the main source of income in the area; more than 85% of the village's inhabitants depend on farming for their livelihood. Major crops grown in the village include enset (false banana), cereals, and cash crops (khat and coffee); livestock are also raised.
Source Population.
The source population of the study was all early adolescents (10–14 years) enrolled in primary schools of the selected districts.
Study Population.
The study population was all selected early adolescents registered for the 2020 academic year in the sixteen primary schools of the selected districts.
Study Design.
A facility-based analytical cross-sectional study was carried out in the Hawassa University Technology Village in October 2020.
Inclusion Criteria.
All healthy early adolescents aged 10–14 years from the selected primary schools who had parental consent to participate in the research were included.
Exclusion Criteria.
All students who had a known history of deworming tablet consumption or IP infection in the previous six months, were seriously ill, or were physically disabled were excluded from the study.
Sample Size Calculation.
The sample sizes for each specific objective were calculated using the single population proportion formula, taking assumptions of a 5% margin of error and a 95% confidence level. From previous studies, prevalences of 20.7% for stunting [9], 27.5% for thinness [9], and 62.4% for IPI [26] were used. Using a design effect of two and a 10% nonresponse rate, sample sizes of 556, 674, and 792 were calculated for the prevalence of stunting, thinness, and IP infection, respectively. The sample size of 792 obtained from IP infection was used because it was the largest sample size estimated and would be sufficient for the study.
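For readers who want to check the arithmetic, the sketch below reproduces these figures under the stated assumptions (the prevalence values and adjustments are exactly those given above; small differences can arise from the rounding convention applied at intermediate steps):

```python
import math

def sample_size(p, z=1.96, d=0.05, design_effect=2, nonresponse=0.10):
    """Single population proportion formula n = z^2 * p * (1 - p) / d^2,
    inflated by the design effect and the non-response allowance."""
    n = (z ** 2) * p * (1 - p) / (d ** 2)
    n *= design_effect        # design effect of two (multistage sampling)
    n *= (1 + nonresponse)    # add 10% for non-response
    return math.ceil(n)

for outcome, p in [("stunting", 0.207), ("thinness", 0.275), ("IPI", 0.624)]:
    print(outcome, sample_size(p))
# -> roughly 556, 674 and 792, matching the reported sample sizes
#    up to intermediate rounding.
```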
Sampling Procedure.
Multistage sampling was applied to select 16 primary schools located in Hawassa University Technology Village, Sidama National Regional State. At the first stage, four districts were randomly selected out of seven. In the second stage, four primary schools were randomly selected from each selected district. Simple random sampling was applied to select the 792 study participants. The calculated sample size was proportionally allocated to each school according to the number of students.
Data Collection Process and Quality Assurance.
A structured, interviewer-administered questionnaire was used. The questionnaire was adapted from different related works in the literature [14,27,28]. It was first prepared in English, translated into Sidaamu Afoo (the local language), and back-translated to English by language experts for data collection. Before the data collection, the questionnaire was pretested on five percent of the sample size to check the clarity and local understanding of the points included in the data collection tool. Eight degree-holding nurses and four laboratory technologists for data collection and four public health experts for supervision were recruited and trained for 3 days before the data collection. The data were collected by three teams; each team had three data collectors and a supervisor. The principal investigators were also involved in the field supervision to ensure the overall quality of the data.
Weight.
Early adolescents were weighed using Seca digital scales, which were validated against standard weights before taking the actual measurements. The scales were placed on a hard, flat surface. Early adolescents were weighed wearing only lightweight clothing, excluding shoes, belts, socks, watches, jackets, and heavy items from the pockets. The measurement was taken twice, and the average of the two was used for analysis. Based on the 2007 World Health Organization standard reference values, a body-mass-index-for-age Z-score below −2 was defined as thinness [28].
Height.
The measurement of height was done on a vertical wall with an attached measuring tape and a horizontal headboard that could be brought into contact with the uppermost point on the head. The moveable headboard was brought onto the topmost point of the head with sufficient pressure to compress the hair. Height was measured barefoot or in thin socks and recorded to the nearest 0.1 cm. The measurement was taken twice, and the average of the two was used for analysis. Based on the 2007 WHO standard reference values, a height-for-age Z-score below −2 was defined as stunting [28].
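The WHO 2007 reference computes these Z-scores with the LMS method; AnthroPlus looks up the age- and sex-specific L, M and S parameters internally. The sketch below only illustrates the classification step, and the L/M/S values in it are hypothetical placeholders, not the actual reference values:

```python
def lms_zscore(x, L, M, S):
    """LMS Z-score, z = ((x / M)**L - 1) / (L * S), the transform used by
    the WHO 2007 growth reference (formula for L != 0)."""
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical L, M, S values for a 12-year-old (illustration only).
haz = lms_zscore(x=138.0, L=1.0, M=149.1, S=0.044)   # height-for-age
baz = lms_zscore(x=14.2, L=-1.4, M=17.5, S=0.12)     # BMI-for-age

stunted = haz < -2   # height-for-age Z-score below -2
thin = baz < -2      # BMI-for-age Z-score below -2
print(f"HAZ={haz:.2f} (stunted={stunted}), BAZ={baz:.2f} (thin={thin})")
```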
Stool Examination.
After being informed how to bring the stool specimen, each study participant was given a clean stool specimen container with an applicator stick. At the time of collection, the date of sampling, school name, the name of the participant, age, and sex were recorded for each subject in a recording format. About 2 g of stool specimen was collected from each student and mixed with 10% formalin for preservation. The preserved fecal specimens were transported to the laboratory of Leku hospital. All specimens were processed using the formol-ether fecal concentration technique as indicated in the WHO standard operating procedures for the parasitological examination of feces [29]. The direct microscopy method was applied to identify intestinal parasites.
A senior laboratory technician of the hospital reprocessed 10% of randomly selected fecal samples, and the results were compared with the results made by the original laboratory technician.
Data Management and Analysis.
Data were cleaned, coded, entered into, and analyzed by SPSS version 25 software.
The AnthroPlus software was used to calculate anthropometric indices. Binary logistic regression was employed to determine the odds ratios in the bivariable analysis. Candidate variables with P ≤ 0.2 were selected for the multivariable logistic regression analysis. The outputs are presented using adjusted odds ratios with 95% confidence intervals. Statistical significance was declared at P ≤ 0.05.
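To make the analysis pipeline concrete, here is a sketch of how adjusted odds ratios with 95% CIs are obtained from a multivariable logistic regression with statsmodels; the data frame and variable names below are invented for illustration and do not reproduce the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis frame: ipi = 1 means IPI positive; the covariates
# stand in for candidate variables retained from the bivariable screen.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ipi":       rng.binomial(1, 0.32, 750),
    "stunted":   rng.binomial(1, 0.215, 750),
    "thin":      rng.binomial(1, 0.175, 750),
    "no_toilet": rng.binomial(1, 0.368, 750),
})

X = sm.add_constant(df[["stunted", "thin", "no_toilet"]])
fit = sm.Logit(df["ipi"], X).fit(disp=0)

# Exponentiated coefficients are adjusted odds ratios; exponentiated
# confidence limits give the 95% CIs reported as AOR (95% CI).
aor = np.exp(fit.params)
ci = np.exp(fit.conf_int())
print(pd.concat([aor.rename("AOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```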
Ethical Consideration.
Ethical clearance was obtained from the Institutional Review Board (IRB) of the College of Medicine and Health Science, Hawassa University (Ref. No: IRB/216/13; Date 17/9/2020). A formal letter was written from the School of Nutrition, Food Science and Technology to the selected districts' education offices. Concerned officials were informed about the purpose of the study, and a permission letter was obtained. Data were collected after taking informed consent from the parents/caretakers. Study participants were informed of their laboratory analysis and physical examination results. Personal hygiene/sanitation and nutrition counseling was provided. Participants positive for IPIs or undernourished were counseled and linked to local health centers and hospitals for treatment.
Sociodemographic Characteristics of the Respondents/Family. A total of seven hundred fifty early adolescents in second-cycle primary school were enrolled in the study, with an overall response rate of 94.7%. The mean (± standard deviation) age of the early adolescents was 12.4 (± 1.60) years. Four hundred seventy-eight (63.7%) of the respondents were older than or equal to 12 years, whereas three hundred ninety-five (52.7%) were females. Concerning the ethnicity of respondents, 691 (92.1%) were Sidama. Almost all, 710 (94.7%), of the adolescents' mothers/caregivers were married. Slightly more than two-thirds, 533 (71%), of the study participants' families were Protestant religion followers. Pertaining to the place of residence, nine in ten (90.3%) were rural dwellers. About two-thirds of the respondents (65.5%) had five or more family members (Table 1).
Socioeconomic Characteristics of the Respondents/Family. Pertaining to the educational status of the study participants' parents, nearly half of the mothers, 362 (48.3%), and 230 (30.7%) of the fathers had not attended formal education. Concerning occupation, two-thirds (67.9%) of the fathers were farmers, whereas six in ten (60.4%) of the mothers were housewives. The vast majority of respondents' families, 653 (87.1%), earned a monthly income less than or equal to 2000 Ethiopian Birr (Table 2).
Water, Sanitation, and Hygiene (WASH) Practices of Respondents' Families.
This study revealed that 276 (36.8%) of the study participants' families lacked toilet facilities. About one-fourth, 201 (26.8%), of the respondents' families had a history of sharing a toilet with other families or neighbors. Slightly more than one-third, 265 (35.3%), of the respondents' families disposed of waste in the open field. A pipe outside the compound was the source of water for 618 (82.4%) of the families of the study participants. Ninety-seven (12.9%) reported that they walked on foot for more than thirty minutes to fetch water. About one-quarter, 197 (26.3%), used some method of water purification (Table 3).
Personal Hygiene Practices of Study Participants.
Of the total respondents, three hundred thirty-six (44.8%) reported that they did not wash their hands after a toilet visit. Dirty fingernails were observed in nearly half, 360 (48%), of the respondents. Concerning handwashing with soap, 405 (54%) of the early adolescents used soap for washing their hands. Of the 750 respondents, 185 (24.7%) did not wear shoes. One-third, 255 (34%), had a history of raw meat/vegetable consumption at least once a week (Table 4).
Factors Associated with IPIs.
The odds of IPIs were 3.6 times higher among stunted adolescents compared with those not stunted (AOR = 3.61; 95% CI: 2.44–5.33). Thin adolescents were three times more likely to have intestinal parasite infections than their counterparts (AOR = 3.07; 95% CI: 2.02–4.66). Compared with those who had no history of eating raw meat at least once per week, adolescents who ate raw meat once per week had 1.5 times higher odds of being infected by intestinal parasites (AOR = 1.50; 95% CI: 1.03–2.14).
Early adolescents from families not owning toilet facilities were 1.7 times more likely to be infected by intestinal parasites compared to their counterparts (AOR = 1.71; 95% CI: 1.18–2.46). Similarly, the odds of IPIs were 1.89 (AOR = 1.89; 95% CI: 1.35–2.66) times higher among early adolescents who did not wash their hands after a toilet visit compared to early adolescents who did (Table 5).
Discussion
Intestinal parasite infections continue to be major threats to health in the least developed countries. Our study investigated IPIs and their association with undernutrition among early adolescents in selected districts of Hawassa University Technology Village, Sidama Region.
The key findings of this study were as follows: 32%, 21.5%, and 17.5% of early adolescents were infected with intestinal parasites, stunted, and thin, respectively. Stunting, thinness, raw meat consumption history, handwashing practices after toilet visits, and toilet ownership were the variables that showed a statistically significant association with IPIs.
Assessing the nutritional status of adolescents is essential to improve their health. The prevalence of stunting was 21.5%. This finding was in line with studies conducted in different parts of Ethiopia [9,23,36]. In contrast to our study, studies conducted at Adwa [37], Angolela [30], and Wollo [38] reported a lower prevalence of stunting, whereas a higher prevalence of stunting was reported from Ethiopia [39,40] and Bangladesh (46.6%) [41]. These differences in the prevalence of stunting point to prolonged food shortage and recurrent infections (IPIs and others) in the target population. Recurrent and prolonged IPIs in adolescents might aggravate stunting by disturbing gastrointestinal tract pathophysiology, food intake, and absorption of nutrients from the gut, which in turn affects physiological growth. The differences may also be explained by differences in socioeconomic status, the time of the study, and the methods used. The prevalence of thinness (17.5%) in the present study was comparable with findings reported from Angolela, Ethiopia [30]. Compared with studies conducted in Ethiopia [27,37,39], the Philippines [42], and Bangladesh [41], the prevalence of thinness in the present study was low. In contrast to our findings, a lower prevalence of thinness was reported from Ethiopia [23,36] and Argentina [34]. These reported differences in thinness could be associated with variation in socioeconomic level, the families' access to food, the time of the study, and the methods used.
Increased prevalence of IPIs was identified in adolescents who were stunted.
This finding was in line with studies conducted in Ethiopia [23,30] and the Philippines [42]. Similarly, thinness of the early adolescent was significantly associated with intestinal parasitic infection. Consistent findings were reported by studies conducted in Angolela, Ethiopia [30], and Bahir Dar, Ethiopia [27]. The habit of consuming raw meat among early adolescents increased the odds of intestinal parasite infection. Adolescents who had a habit of at least one-time consumption of raw meat per week had 1.5 times increased odds of being infected by intestinal parasites. This finding was comparable with studies done in Northwest Ethiopia [25] and Gondar [43]. This could have happened because most of the tapeworms that affect humans come from eating undercooked animal products/meat as well as raw or undercooked fish that is contaminated.
Our study showed that study participants whose families owned a toilet and who washed their hands after toilet visits had a lower chance of being infected by intestinal parasites compared to their counterparts. This finding is consistent with studies conducted in Ethiopia [23,25,26], India [32], and Brazil [31]. This is explained by poor personal hygiene and poor environmental sanitation exposing adolescents to IP infections. Specifically, lack of access to toilet facilities for the safe disposal of human waste can result in intestinal parasite infections and diseases.
Limitation of the Study
As a limitation, due to a shortage of resources, we applied the formol-ether fecal concentration technique with direct microscopy, which has lower sensitivity for detecting protozoan species compared to polymerase chain reaction (PCR)-based techniques. We recommend that researchers conduct further studies using microscopic examination with PCR assays to increase the accuracy of establishing the presence or absence of infection.
Conclusion
Our study showed that early adolescents in selected districts of Hawassa Technology Village were infected with intestinal parasites, indicating that IPIs continue to be major public health problems in disadvantaged communities. A. lumbricoides was the most predominant intestinal parasite identified. Similarly, the prevalence of thinness and stunting was high and of public health significance in the study area. Being stunted, being thin, eating raw meat, not owning a toilet, and not washing hands after toilet visits were the most important identified risk factors for intestinal parasite infection.
Integrated intervention approaches involving decisionmakers, health professionals, teachers, and communities are crucial. Strengthening nutrition counseling and interventions, school and community deworming programs, health education about personal and environmental hygiene, and sanitation are recommended. Coordination of these efforts is likely to yield appreciable and sustainable gains in improving the health and welfare of early adolescents and securing a prosperous future.
Abbreviations
AOR: Adjusted odds ratio
BMI: Body mass index
CI: Confidence interval
COR: Crude odds ratio
EDHS: Ethiopia demographic and health survey
HAZ: Height-for-age Z-score
IPI: Intestinal parasite infection
IRB: Institutional review board
SD: Standard deviation
STH: Soil-transmitted helminths
WASH: Water, sanitation, and hygiene
WHO: World Health Organization.
Data Availability
The data used in this study are available upon request from the corresponding author.
Conflicts of Interest
All the authors have declared that they have no conflicts of interest.
Authors' Contributions
AB and SG conceptualized the study; AB and AP performed data curation; AB and SG performed formal analysis; SG performed investigation; AB and AP developed the methodology; AB and AP provided the software; SG supervised the study; AB and SG performed validation; AB wrote the original draft; AB, AP, and SG reviewed and edited the article.
"year": 2021,
"sha1": "23f6f6e24ce1cf733b4eac94a2edfc9e97c88c5e",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/aph/2021/3937948.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d571c2d7fdf3ddce696f48762d725345f27cc375",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
How General-Purpose Is a Language Model? Usefulness and Safety with Human Prompters in the Wild
The new generation of language models is reported to solve some extraordinary tasks the models were never trained for specifically, in few-shot or zero-shot settings. However, these reports usually cherry-pick the tasks, use the best prompts, and unwrap or extract the solutions leniently even if they are followed by nonsensical text. In sum, they are specialised results for one domain, a particular way of using the models and interpreting the results. In this paper, we present a novel theoretical evaluation framework and a distinctive experimental study assessing language models as general-purpose systems when used directly by human prompters, in the wild. For a useful and safe interaction in these increasingly more common conditions, we need to understand when the model fails because of a lack of capability or a misunderstanding of the user's intents. Our results indicate that language models such as GPT-3 have limited understanding of the human command; far from becoming general-purpose systems in the wild.
Introduction
In recent years, remarkable progress in language models such as BERT (Devlin et al. 2018), T5 (Raffel et al. 2019), GPT-3 (Brown et al. 2020) and PanGu-α (Zeng et al. 2021) has consolidated a new way of interacting with them through 'prompts': small pieces of text the user supplies for the model to continue. No fine-tuning is required; the model can be used out-of-the-box on new tasks, provided an appropriate prompt (Xu et al. 2020; Izacard and Grave 2020; Hendrycks et al. 2020). A particularly interesting setting is called few-shot inference, where the prompt includes illustrative examples (Brown et al. 2020; Reynolds and McDonell 2021; Scao and Rush 2021; Schick and Schütze 2020; Bragg et al. 2021). But even with zero-shot prompts, amazing applications are reported. For instance, Figure 1 (left) shows a prompt and a useful continuation given by a language model. In Figure 1 (right), however, the model makes a plausible continuation, but it does not understand the 'command'.
Figure 1: Two prompts (in blue) and continuations (in green) generated by GPT-3. Left: "A song for a two-year-old child about a bird in a cage and a little mouse goes like this: 'I'm a bird in a cage, and I'd like to fly away. And the more I say I want to, the more they say I can't'." Right: "You need to write a song for a two-year-old child about a cat skateboarding and a dog playing with a ball. This is your chance to be creative!" The example on the right shows that getting a language model to do what you want requires more than raw capabilities: 'understanding' the command is important in making these systems useful and reliable.

A careful design of prompts for a particular task can extract the full potential from these models with some control of the unintended behaviours. However, it also limits the key property of these models: direct model prompting is the closest scenario today to a general-purpose AI system.
This flexibility comes with many risks. Because of this, we see an ongoing debate on whether non-expert users should interact freely with language models (Solaiman et al. 2019). However, the reality is that these systems are now widely available 1 . Second, AI researchers and companies have favoured controlled scenarios for a narrow domain because these systems can be optimised towards the best prompt in terms of intended results (Xu et al. 2020;Izacard and Grave 2020;Hendrycks et al. 2020;Liu et al. 2021;Qin and Eisner 2021). The search for the best prompt includes hyperparameters such as 'temperature' or the unwrapping of results, known as the 'decoding strategy' (Perez, Kiela, and Cho 2021). Unfortunately, even small variations of the prompt make the results much worse (Zhao et al. 2021).
It is only through the direct use of these models for a wide range of tasks, in the wild, that we can really see the potential of general-purpose AI systems and their risks. In particular, we can properly evaluate when these systems fail because of limited capability or lack of understanding of the user's intentions, usually referred to as 'command understanding' (Ngo et al. 2012; Walker, Peng, and Cakmak 2019). We also recognise that language models through direct prompting must be evaluated for an average-case situation by considering the way humans would interact with these systems. This includes wrapping their commands (e.g., prompting) in an appropriate form such that the system is 'biased' or 'induced' to complete the prompts according to the user's intention, and unwrapping the outputs from the system (extracting the relevant part of the answer). All this is required if we aim at evaluating these models with ecological validity (De Vries, Bahdanau, and Manning 2020).
In this paper, we present a new way of evaluating the usefulness and safety of general-purpose systems that are instructed by natural language prompts. We consider several elements: (1) the human effort involved in devising the appropriate prompts, thinking of wrapping and unwrapping strategies, (2) the human effort when applying these strategies to write the prompt and extract the results, (3) the human cost of validating or discarding the solution given by the model and, ultimately, (4) the usefulness and safety of the solution. We express these terms using a novel theoretical formulation based on the cost of the human solving a problem using the model, C_{H,M}, and compare it to the cost C_H that the human would incur without the model. We see this approach as the most 'ecologically valid' way of evaluating a generic use of these models, especially because the cost of the model alone, C_M (as an autonomous system magically guessing what the user wants), would not include all the costs involved in formulating and understanding the command, required in a general-purpose scenario. We use the new evaluation framework as a basis for a series of questionnaires for human users, designed to capture the components C_H and C_{H,M} over several domains. Only by doing this estimation can we accurately calculate the expected gain C_H − C_{H,M} for a range of tasks and assess language models meaningfully in this setting.
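A minimal formalisation of this accounting, under the assumption that the four components listed above contribute additively (this decomposition is our illustrative reading, not a formula taken from elsewhere):

```latex
C_{H,M} \;=\; \underbrace{c_{\text{devise}}}_{(1)}
          + \underbrace{c_{\text{apply}}}_{(2)}
          + \underbrace{c_{\text{validate}}}_{(3)}
          + \underbrace{c_{\text{outcome}}}_{(4)},
\qquad
G \;=\; C_{H} \,-\, C_{H,M},
% the model is worth using on a task exactly when the expected gain G > 0.
```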
The usability and safety of language models as general-purpose systems to (semi-)automate human tasks in the wild also involve analysing whether a failure is caused by lack of capability or by misunderstanding of the command. The latter is usually more dangerous than the former. For instance, a language model can give the steps to make an actual bomb when queried for 'the ingredients of a brownie bomb'.
The major contributions of this work are: (1) The first theoretical framework for how language models should be evaluated as general-purpose systems in the wild. (2) The decomposition of failure into lack of capability and lack of command understanding, and a difficulty-based approach to disentangle them. (3) A methodology for devising experimental studies that capture the elements required by the theoretical framework and how they can be organised into off-line questionnaires for a more systematic control of human prompts and language model results. (4) A complete experimental study using the data from three questionnaires on populations of n = 36 and n = 34 humans, requiring approximately 52 hours of human work and 432 prompts answered by GPT-3, leaving the results as a novel realistic benchmark of human prompting, from which to build more comprehensive and balanced batteries to measure the progress of general-purpose AI systems.
The rest of the paper is organised as follows: first, we summarise the relevant background for this paper. Then we develop the theoretical framework used to evaluate generalpurpose systems. In the methodology section, we present the experimental setup, followed by our findings in the analysis of results. Finally, we close with a discussion of the main takeaways and ideas for future work.
Background
The open interaction with machines via natural language commands has roots well before the early days of computer science. Ada Lovelace conceived the idea that a machine could do "whatever we know how to order it". Since Ada Lovelace, the way of instructing machines has been mostly through programming languages, and more recently, through examples, using machine learning. Today, instructing machines using natural language instead of programming languages is usually represented by digital assistants (Campagna et al. 2019; Cho and Rader 2020; Rapp, Curti, and Boldi 2021), which can do many tasks following our orders in natural language. However, these systems are based on a 'task repertoire' (Maedche et al. 2019), which is not fully general, unlike programming languages or even training examples. A fixed repertoire of tasks makes the reliability and safety issues easy to deal with, which gradually made this the preferred kind of interaction for digital assistants over time. In fact, this kind of 'task-oriented AI agent' has been advocated as a safe approach to more general AI in the future, as in Comprehensive AI Services (Drexler 2019).
But only when the range of tasks is completely open do we have a real general-purpose system. This way of interacting with machines has not been realised in human-computer interaction (Lazar, Feng, and Hochheiser 2017; Rapp, Curti, and Boldi 2021), but it has been theorised many times. Perhaps the closest vision, where machines are openly instructed in natural language, is Lieberman and Maulsby's 'instructible machines' (1996) and the related notion of programming by example (Lieberman 2001). In short, prompts for language models combine these two worlds: instructions in natural language and few-shot learning.
But why are language models instructable? We need to go back to the origins of 'language models', introduced by Shannon in 1949. The notion of compression is grounded in efficiently coding the message based on a low-entropy distribution of the next bits of information. Today, informational metrics such as entropy or perplexity are still being used to evaluate language models. Their relevance and general use were anticipated by (Mahoney 1999), among others. However, only recently have language models been conditioned with prompts to do many different tasks, from language translation to mathematical problems (Hendrycks et al. 2021). This is possible as these language models have been fed with massive datasets of human behaviour in the form of text. By compressing the next tokens in a text on such a diversity of topics and even languages, the model ultimately develops powerful abstraction capabilities. This allows it to make continuations that look as if a human (or an archetypical human of the 21st century) were writing the continuation. Interestingly, when given some appropriate question or command, many contemporary humans follow it with the answer or the task done, which is why these models can act as general-purpose systems. Indeed, prompt-based interaction with language models may be the closest thing to a general-purpose system in the history of AI.
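For reference, the informational metrics mentioned above have the following standard definitions (general facts, not specific to this paper):

```latex
H(p) \;=\; -\sum_{w} p(w)\,\log_2 p(w),
\qquad
\mathrm{PPL}(w_1,\dots,w_N) \;=\;
\exp\!\Big(-\frac{1}{N}\sum_{i=1}^{N} \log p_\theta(w_i \mid w_{<i})\Big),
% lower perplexity means the model assigns higher probability to held-out
% text, i.e., it compresses that text into fewer bits.
```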
But how can we evaluate generality? By generality, we do not mean a rich and meaningful conversation as could be informally assessed by any variant of the imitation game (Turing 1950); instead, we are referring to the capability of solving a range of tasks, up to some difficulty (Hernández-Orallo et al. 2021). As mentioned before, digital assistants are able to solve a range of simple tasks, but they are usually restricted to a fixed repertoire. To discuss command generality in depth we need to consider these important elements:
• A probability distribution p that captures a wide range of everyday tasks that humans face on a regular basis.
• A difficulty metric ℏ(x) for each instance x drawn from p. For instance, most humans can do additions, but not equally robustly and fast for all numbers of digits.
• The process of conceiving the instruction and interpreting the solution, which involves the human thinking of the best way of phrasing the command for a particular task and instance, the model understanding and solving it, and the human extracting and interpreting the result.
• The trade-off of semi-automation: finding the balance in the continuum between the cost C_h when the human h does the task, and the cost C_{h,m} when the human just formulates the task for the model m. There are situations in between, where the human partially solves or prepares the task for m.
• The desired levels of safety and competence for each task depend not only on the robustness and capability of the system, but also on its degree of understanding of the command. A very capable system doing one task when ordered to do another may be more dangerous than a very incapable agent.
The last item is related to all the others, and to the 'specification problem' in (software) engineering and, more recently, in AI (Rahimi et al. 2019). In AI safety, this is more commonly expressed in terms of alignment (Leike et al. 2017;Hernández-Orallo et al. 2020). Kenton et al. (2021) mention the classical decomposition of alignment as an intent+competence problem: the system must try to do what the human wants (right system intent) and the system must be able to achieve it (sufficient capability). However, the capability of systems to 'understand' commands, separate from the capability to satisfy them, has received little attention until now (Tamkin et al. 2021). Command understanding is still much narrower than the full area of natural language understanding, and a system can recognise many commands without a full command of natural language. However, in an open interaction with general-purpose systems instructed in natural language, understanding must also be considered as an extra third element, separate from the model's capability to solve the task. This just reflects the traditional distinction between validation and verification, one of the fundamental elements of safety. We refine alignment as follows:

alignment = understanding + intent + competence    (1)

One can argue that in AI safety, in the context of the misspecification problem (Amodei et al. 2016;Russell 2019), we should also account for human naivety about unexpected consequences (e.g., the King Midas problem). We will, however, not consider here a patronising perspective of the system understanding what the human really wants. On the other hand, language models are not agents, and we can then assume that they always 'want' to do the task. Consequently, we will not consider human-vs-machine intent in this paper and will focus only on whether the system 'understands the command' and has the competence to solve it.
Overall, the problem of alignment for a general-purpose system is complicated. It is very ambitious to construct a framework that considers all these elements precisely, especially because there is limited foundation in the field for this. However, the relevance of language models and their multimodal variants, recently referred to as 'foundation models' (Bommasani et al. 2021), requires us to take some steps in this direction. This is what we do next.
Framework
In this section we introduce a new framework to measure the utility of language models when solving general everyday tasks. This implies the comparison of two quantities we will define: C_h, the cost when the human h solves the task unassisted, and C_{h,m}, the cost when h is assisted by a language model m. The aim is to provide insight into the different effort terms that will be measured in our experiment with humans.
Let us consider a discriminative or generative task with an input space X and an output space Y. Instances x are sampled from a distribution p(X), with x ∈ X. The human h, possibly stochastically, produces an output y for x as defined by p_h(y|x). Our framework has to evaluate the cost of producing this y and its quality. The cost of producing or guessing an answer is defined as E_h(y|x), and the loss of such an answer is ℓ(y, x) (values of ℓ closer to 0 indicate valid or useful outputs). Note that a single x may have many valid outputs, especially in generative tasks. With all these elements, we define C_h(X) over instances as follows:

C_h(X) = 𝔼_{x∼p(X)} 𝔼_{y∼p_h(y|x)} [ E_h(y|x) + α · ℓ(y, x) ]    (2)

As loss and effort are rarely expressed in the same units, their relative weight is indicated by a parameter α. The expression of cost when a human is assisted by a language model in a few-shot or zero-shot prompt-based setting involves more elements. First we need to consider the processes of wrapping and unwrapping. When providing one or more instances to the model, the user needs to think of a wrapper w that can be used for each instance. For instance, if the task is addition, and we have an instance x_1 = 13 + 2, this can be wrapped into prompt w(x_1) = ρ_1 = "The sum of 13 and 2 is:", which is fed to the language model. Using the same wrapping pattern, the instance x_2 = 7 + 12 would be wrapped into prompt w(x_2) = ρ_2 = "The sum of 7 and 12 is:". Note that we could use other wrappers, e.g., a more complex wrapper could transform the first instance into "How much is thirteen plus two?".
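As a concrete illustration of the wrapping step just described (and the unwrapping step discussed next), here is a minimal sketch for the addition example; the template string and the regex-based unwrapper are our own illustrative choices, not the ones used in the study.

```python
import re

def wrap(instance):
    """Wrap an addition instance (a, b) into a natural-language prompt."""
    a, b = instance
    return f"The sum of {a} and {b} is:"

def unwrap(completion):
    """Extract the first integer in the completion, or None if there is none.

    Deliberately brittle: completions like "15, and the sum of 13 and 2
    is 17" show why unwrapping takes effort and is hard to automate.
    """
    match = re.search(r"-?\d+", completion)
    return int(match.group()) if match else None

print(wrap((13, 2)))       # The sum of 13 and 2 is:
print(unwrap(" 15"))       # 15
print(unwrap("a number"))  # None -> makes sense but is not useful
```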
As mentioned in the introduction, success or failure with few-shot use of language models depends on the quality of the prompts. The process of unwrapping is also very important. If a model returns the completion c_1 = "15" to instance x_1, then it is easy to extract the answer, y_1 = 15. However, it is not uncommon to get things such as c′_1 = "the same as the sum of 2 and 13, which is 15". While the answer is correct, it needs more effort and interpretation, and is hard to extract automatically. Of course, some other responses are even more difficult to parse, such as c″_1 = "15, and the sum of 13 and 2 is 17", which would be correct if we stop at the comma, but incorrect (and inconsistent) if we keep on reading. The appendix includes many examples of tasks, wrappers and unwrappers in Table 2. Now we are ready to introduce the components for C_{h,m}(X). As in the unassisted case, the cost is for instances following a probability distribution of tasks p(X). The first term, E_c(⟨w, u⟩), measures the cost of devising the wrapping and unwrapping strategies ⟨w, u⟩. As ⟨w, u⟩ is produced by the human, we need to define the probability of each pair as p_h(⟨w, u⟩). The cost of applying the wrapper to instance x is denoted by E_w(w, x); and the cost of unwrapping the output of the model into an answer y = u(c) is E_u(u, c). Finally, the human will need to validate the answer. This does not mean solving it, but checking that the language model completion makes sense and is useful. For instance, if the prompt ρ_1 = "The sum of 13 and 2 is" is completed by c_1 = "a number", the completion would not be valid (it makes sense but it is not useful). This is especially important for generative tasks, where the human validation cost is much lower than the cost the human would incur by solving the task herself (e.g., creating an image), or when there might be fairness and discrimination issues (Bender et al. 2021;Tamkin et al. 2021). We denote this cost of validation as E_v(x, u(c)). Finally, as in the unassisted case, we measure the quality of the result as ℓ(y, x). With all this, the assisted expected cost C_{h,m}(X) of human h with model m over instances is defined as:

C_{h,m}(X) = 𝔼_{⟨w,u⟩∼p_h} [ E_c(⟨w, u⟩) + 𝔼_{x∼p(X)} [ T(w, u, x, c) + α · ℓ(y, x) ] ]    (3)

where c is the completion returned by m for prompt w(x), y = u(c), and T(w, u, x, c) def= β · (E_w(w, x) + E_u(u, c)) + γ · E_v(x, u(c)).
In this case we also have parameters α, β and γ indicating the relative weight of the different terms. Notice that we consider that the conception of the prompt ⟨w, u⟩ has to be done just once, while terms such as E_w(w, x), E_u(u, c) and E_v(x, u(c)), integrated into the per-instance transformation cost T, are incurred for every instance. The definition of C_{h,m} may look convoluted, but it really contains the elements that must be considered to evaluate these models in the wild. Looking only at the loss ℓ of the solutions is clearly insufficient to make these judgements, as it disregards all the associated efforts, as well as the diversity of prompts. With C_h and C_{h,m} defined, and all their components estimated (as we do in the following sections), we can really assess whether using the model pays off.
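To show how the comparison between C_h and C_{h,m} plays out once the terms are measured, the toy calculation below plugs hypothetical per-term efforts (in seconds) into the two cost structures, with the loss terms and the weights α, β, γ omitted so as to focus on effort alone; all numbers are invented for illustration.

```python
def unassisted_cost(effort_per_instance, n_instances):
    """C_h over n instances: the human solves each instance directly."""
    return n_instances * effort_per_instance

def assisted_cost(E_c, E_w, E_u, E_v, n_instances):
    """C_{h,m}: prompt conception E_c is paid once per task, while
    wrapping E_w, unwrapping E_u and validation E_v are paid per instance."""
    return E_c + n_instances * (E_w + E_u + E_v)

n = 3  # three instances per task, as in the questionnaires
print(unassisted_cost(effort_per_instance=40, n_instances=n))        # 120 s
print(assisted_cost(E_c=60, E_w=15, E_u=10, E_v=10, n_instances=n))  # 165 s
# With these numbers the model does not pay off; cheaper prompting, or more
# instances amortising E_c, can reverse the balance.
```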
It is also important to determine whether the model gives poor results because of a lack of capability or a lack of command understanding, especially if the validation procedure performed by the human is unreliable or meant to be eliminated. Unfortunately, language models are not very good at explaining their answers, so we need to use a different approach.
Let us consider that we have a difficulty or hardness metric ℏ(x) for each instance x of a task. In this case, if the model is capable enough to solve very easy instances, we should be able to assign some degree of reliability to the model, as well as some level of understanding of the command. However, if the loss ℓ is very high for very easy instances, then the system may have no capability at all, or it is not understanding, or both. On the contrary, if ℓ is low initially, but starts increasing at some point, we can disentangle the loss caused by lack of understanding (and other reliability issues) from the loss caused by lack of capability.
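A small sketch of the diagnostic logic in this paragraph: given average losses binned from easiest to hardest, it flags an understanding gap (loss already high at the easiest level) versus a capability limit (loss low at first, then rising with difficulty). The thresholds are arbitrary illustrative choices, not values from the paper.

```python
def diagnose(losses_by_difficulty, high=0.5, low=0.25):
    """losses_by_difficulty: average losses ordered from easiest to hardest."""
    easiest, hardest = losses_by_difficulty[0], losses_by_difficulty[-1]
    if easiest >= high:
        return "understanding gap (or no capability at all)"
    if easiest <= low and hardest >= high:
        return "capability limit: loss grows with difficulty"
    return "no clear trend"

print(diagnose([0.71, 0.75, 0.80]))  # understanding gap ...
print(diagnose([0.10, 0.30, 0.65]))  # capability limit ...
```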
Methodology
We estimate all the terms appearing in (2) and (3) through carefully designed questionnaires with human respondents. With them we are able to answer the first experimental question: whether there is a gain when humans are assisted by a state-of-the-art language model such as GPT-3. The second major experimental question is to assess whether language models fail to complete the task due to a lack of command understanding or a lack of competence.
Relying on human data is powerful but limits the number of tasks that we can consider, especially as we need several instances per task, covering a range of difficulties. In order to approximate a diverse group of tasks resembling a distribution p over everyday tasks, we chose four tasks covering four of the main categories in the human capability hierarchy according to Cattell-Horn-Carroll theory (Carroll 1997), excluding those specific to humans (e.g., short-term memory). In particular, we have one task in each of the following categories:
• "Numerical abilities", represented by a task where price discounts have to be applied. Instance difficulty is given by how many operations are needed.
• "Communication abilities", represented by a task where an email has to be written for a customer explaining to them whether they made or lost money after an investment. Difficulty is measured by increasingly bad news, as in such situations we expected participants to take more care and time over the framing of the email (MUM effect).
• "Reasoning", represented by the task of proposing a recipe from a list of ingredients and utensils. Difficulty is assessed by the number of ingredients and utensils.
• "Creative writing", represented by the task of writing the lyrics of a song for a two-year-old child about animals and what they are doing. In this case, difficulty is measured by the number of animals to be included in the lyrics.
[Figure 2: Terms in (2) and (3), and forms from which to obtain them.]
We built three questionnaires in English with three instances in each domain: Q1 and Q2 (group A) aimed at estimating the parameters in C_{h,m}, and Q3 (group B) those in C_h. Q1 starts with some information about what an 'autocompletion' system is and some examples at the beginning. It also collects some information about the participants (English level, age, familiarity with language models, and use of virtual assistants). Then, volunteers are asked to generate prompts to make the language model solve the tasks. After they have finished Q1, we use their prompts to generate GPT-3 completions (using davinci-instruct, with default parameters and 256 tokens), which we use to build Q2, where the usefulness of GPT-3's completions is assessed. Q1 and Q2 are paired, such that users receive the completions to their respective prompts. Q3 is independent: a different group of volunteers completes the same tasks but without using language models. It also collects their age and English level. To ensure similar samples for group A (Q1-Q2) and group B (Q3), and no contamination between groups, volunteers were randomly divided into two groups A and B, with questionnaires Q1 and Q2 sent to group A, and Q3 sent to B. In the end, we had 36 respondents in group A and 34 in group B, recruited via posts in social networks and internet forums. The tests were administered online using the open-source testing platform Concerto (Harrison et al. 2020).
The way we estimate the value of each term in C_h and C_{h,m} (Eqs. 2 and 3) can be found in Fig. 2. In general, participants rate the usefulness of the answers on a Likert scale (1 to 5, from least to most useful), which we convert into a loss as ℓ = 1 − (s − 1)/4, where s is the score. This loss is estimated by the humans themselves. In addition, we conduct an external evaluation, ℓ*, measured by a member of the research team, which serves to give comparable scores across volunteers and to avoid discounting difficulty. Human effort (E_c, E_w, E_u, E_v and E_h) is measured in seconds. The forms are structured in 4 tasks with 3 instances each. We assume that the first instance of each task has a prompting cost (measured in time) of E_c + E_w, while for the second and third instances the cost is only E_w. E_u is the average effort to find the answer in the model completion, and E_v the time to estimate its usefulness. For the different effort components we use the median of the measured times: even though volunteers are specifically instructed to avoid making stops in the forms, some of them inevitably get distracted, and the median reduces the resulting bias in the time estimates. On the other hand, we use the mean for assessing the quality of the answers given on the Likert scale.
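The conversion and aggregation rules just described amount to a few lines of code; the sketch below applies them to invented response data.

```python
import statistics

def likert_to_loss(score):
    """Map a 1-5 usefulness score s to a loss in [0, 1]: l = 1 - (s - 1)/4."""
    return 1.0 - (score - 1) / 4.0

# Hypothetical measurements of one effort component across volunteers (s);
# the 240 s entry mimics a volunteer who got distracted mid-form.
times = [22.0, 25.0, 19.0, 240.0, 27.0]
scores = [5, 4, 4, 3, 5]

print(statistics.median(times))                            # 25.0, robust to the outlier
print(statistics.mean(likert_to_loss(s) for s in scores))  # 0.2
```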
Analysis of Results
Let us first compare the correlations between all variables. As indicated in the caption of Fig. 3, we can confidently reject the normality hypothesis for all time distributions; because of this, we use Spearman correlations. In Fig. 2 (and Fig. 3 in the appendix, segregated by domain) we see that a good command of English and previous experience with language models seem useful. The use of virtual assistants, however, seems uncorrelated, which may be because continuations are frequently expressed differently from commands. Finally, the use of language models is weakly negatively correlated with the self-assessed loss ℓ(y, x) but not with the externally-evaluated ℓ*(y, x), suggesting that people without experience may be easier to impress.
[Figure 3: Effort required (median values, in s) to perform each of the tasks with and without access to GPT-3. In all cases except the last one (lyrics), the effort to generate the prompt and validate the answer is greater than the effort to solve the task directly. Each distribution was Shapiro-tested (p ≤ 3 · 10^-4 in all cases). We then performed Mann-Whitney U tests to compare the effort with and without GPT-3. The Holm-corrected (Holm 1979;Aickin and Gensler 1996) p-values are < 1.3 · 10^-7 for the Numeric and Reasoning domains, 1.15 · 10^-2 for Communication, and 0.33 (i.e., no evidence of difference) for the Writing task.]
Interacting with the model involves generating the prompt (E_c), wrapping the specific instance (E_w), unwrapping the model completion (E_u) and validating it (E_v). In all cases, except in the task of writing song lyrics, the sum of the human effort required to interact with the system is larger than the effort to solve the task without making use of it, as shown in Fig. 3 (right).
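The statistical recipe in the Fig. 3 caption (Shapiro tests, Mann-Whitney U tests, Holm correction) can be reproduced with standard SciPy and statsmodels calls, as sketched below on made-up timing data; this is our reconstruction of the procedure, not the authors' analysis script.

```python
import numpy as np
from scipy.stats import shapiro, mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Hypothetical effort times (s) per domain, with and without GPT-3.
domains = {
    "Numeric":       (rng.lognormal(4.0, 0.5, 36), rng.lognormal(3.2, 0.5, 34)),
    "Communication": (rng.lognormal(5.0, 0.5, 36), rng.lognormal(4.7, 0.5, 34)),
}

pvals = []
for name, (with_lm, without_lm) in domains.items():
    print(name, "Shapiro p:", shapiro(with_lm).pvalue)      # normality check
    pvals.append(mannwhitneyu(with_lm, without_lm).pvalue)  # two-sided in recent SciPy

reject, p_holm, _, _ = multipletests(pvals, method="holm")  # Holm correction
print(p_holm, reject)
```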
Effort and Loss
On the other hand, if we compare the loss for the different tasks in Table 1, we can see that in the Numeric and Communication tasks the answers of GPT-3 achieve worse self-assessed usefulness (higher loss). In contrast, loss is similar for the other two. We have to note that in Q3 the usefulness of the answers from humans is also self-assessed, so it may differ from the usefulness of the answers as perceived by others (Hoorens 1993). Furthermore, in the Reasoning task (recipes), answers from the model and the human are different: the human just names the dish, while the model often provides the entire recipe, but with unavailable ingredients.
Overall, considering Fig. 3 together with Table 1, the more lenient appraisal for Writing has to do with the fact that generative tasks are currently the domain where language models shine the most: tasks that are easy to describe and evaluate but hard to solve. However, the fact that loss values are at best as good as humans' indicates that there is still space for future models to improve. Finally, the Communication task was the one the human volunteers found most challenging. Not only does Fig. 3 show the prompting effort to be much larger than for the other tasks, but the number of prompts not containing enough information for the model is also much larger than in the other cases (33% vs. up to 14%). However, an important caveat to mention here is that this task is perhaps the one where the stakes were most difficult to emulate (telling a customer bad news).
Difficulty
Now we discuss how the difficulty of the question affects the quality of the answers given by the system. This is a natural question in view of the proposed decomposition of alignment in (1). To measure this, we look at the easiest instances and see whether the loss falls to 0. Note that this is not quite the same thing as what volunteers were measuring ('usefulness'), and the difference is reflected in the loss values in Fig. 4 in the appendix: simple questions, such as assessing the price of a 2-for-1 offer, are too simple to be perceived as useful. As such, the self-assessed loss does not have a well-defined trend, or even decreases for more complex tasks.
In order to correctly evaluate how well the answer of the model performs the task at hand, we use the externally evaluated loss, denoted by ℓ*(y, x). The results, shown in Fig. 4, are very different for the Numeric task than for the rest. While loss increases with ℏ for Numeric, the easiest possible instance still has an average loss of ℓ*(y, x) = 0.71, suggesting an understanding gap as indicated in Eq. (1). The other tasks, and the self-assessed loss of both GPT-3 and human answers, do not show any clear trend, but we think the reasons are different: for Reasoning there is an intermediate level of understanding, while for Writing and especially Communication the task overall is hard, and the lack of a clear growing trend does not allow us to tell lack of competence from failure to understand commands.
[Figure 4: How ℓ*(y, x) changes with difficulty ℏ. High loss on the simplest instance indicates an 'understanding' gap. Increasing loss (in the Numeric task) means that capability may saturate for complex instances. Plots for ℓ(y, x) in Fig. 4 in the appendix.]
Discussion and Future Work
The progress and full democratisation of language models should be based on a better understanding of their capabilities. One key finding of our work is that, despite sometimes providing excellent answers, the use of these models still requires significant effort from the average human to generate good prompts. Indeed, except for the writing task, our results indicate that it would be faster and better if the user solved the task without the help of GPT-3. We expect this to change in the future as models become more accurate, but also as users adapt to the way models understand commands. It is crucial that we evaluate this properly, using human questionnaires like this one, and not only the results from massive evaluation batteries where prompts are specialised for each task (Kohler and Daniel Jr 2021). In NLP it is usual to evaluate the quality of responses subjectively, but it is less common to measure times. This is more common in other areas where productivity is key, such as software engineering (Sadowski and Zimmermann 2019) and human-computer interfaces (Lewis 1995;Lazar, Feng, and Hochheiser 2017). It is nevertheless essential to take all factors into account and compare both situations in an ecologically valid experiment (De Vries, Bahdanau, and Manning 2020). For example, one could mistakenly believe that the model used in our experiments, GPT-3, is almost as good as humans at generating recipes from lists of ingredients. Unfortunately, this does not take into account that, for these systems to be useful, humans would need to be able to prompt the model and read its answer faster than they can solve the task themselves. The advantage of using these models only seems to appear for generative tasks, such as the song lyrics writing. For future studies, we believe it would be informative to carry out similar research with multimodal models. In fact, our tasks were designed with this consideration in mind, such that they could be adapted to multimodal input, including the images of our forms, and output, such as videos. Another extension is to analyse other languages, where the capability of the system and the kinds of prompts may differ significantly from English.
Similarly, future studies should focus on ecological validity by considering realistic situations where these models are used. This involves modelling different kinds of users using empirical evidence in the short and long terms, analogous to the way software systems and human-computer interfaces are evaluated. This should include how users adapt to these systems and learn to improve the construction and application of prompts, as well as how they choose the tasks for which assistance is most useful and safe.
Our work aimed to shed some light on the decomposition of alignment in Eq. (1). For the numeric task (discount application), we can measure both the understanding gap (the gap that appears when the difficulty of the task is minimal but the task still requires command understanding) and the capability of the system, which quickly saturated. Unfortunately, one limitation of this methodology is that it is not always easy to find a good range of instances from very easy cases to more difficult ones, because the range of capability of language models is still limited. We hope that future studies with more powerful models will provide some insight on how to better measure the increase in difficulty, or even compare which tasks language models and humans find difficult. Furthermore, with the objective of helping build better models, we open-source the data collected in our experiments. This provides a benchmark of prompts where most of the heavy work (prompt generation and task solving without the model) has already been carried out, and the only remaining task is the evaluation of the answers of new models.
We believe the methodology proposed here opens the door to a fairer and more insightful evaluation of language models and other foundation models of the future, which should help better assess their generality and usefulness. It should also help address a crucial aspect of the reliability and safety of these models, namely their understanding of commands: very capable systems with a poor understanding of our will may pose risks. As such, we advocate for a more realistic evaluation of these models, as they will be used by humans: in the wild. | 2022-07-06T15:15:15.713Z | 2022-06-28T00:00:00.000 | {
"year": 2022,
"sha1": "e6ecdbbceff06cc1e667e3261596fd0fa6b32c4b",
"oa_license": null,
"oa_url": "https://ojs.aaai.org/index.php/AAAI/article/download/20466/20225",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1b085ce901c20b66ec12a17066a7fe1316942124",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
258292588 | pes2o/s2orc | v3-fos-license | Risks of leukemia, intracranial tumours and lymphomas in childhood and early adulthood after pediatric radiation exposure from computed tomography
Background: Children are more susceptible to radiation-induced damage than adults, but little research has compared the risk of cancer after exposure to radiation during computed tomography (CT) among children at different ages. We aimed to explore the risk of intracranial tumours, leukemia or lymphoma among children, adolescents and young adults (aged < 25 yr) after radiation exposure from CT at or before the age of 18 years. Methods: We conducted a nested, population-based case–control study using data from Taiwan’s publicly funded health care system. We identified participants younger than 25 years with newly diagnosed intracranial tumours, leukemia or lymphoma, from Jan. 1, 2000, to Dec. 31, 2013. We assigned 10 non-cancer controls for each case, matching by sex, date of birth and day of entry to the cohort. We considered CT scans received at or before the age of 18 years and 3 or more years before the index date (the date of cancer diagnosis for cases) as exposure. We used conditional logistic regression models and incidence rate ratios (IRRs) to estimate the relationship between risk of these cancers and CT radiation exposure. Results: We identified 7807 cases and matched them to 78 057 controls. Compared with no exposure, exposure to a single pediatric CT scan did not increase risk of intracranial tumours, leukemia or lymphoma. However, participants exposed to 4 or more CT scans had an elevated incidence (IRR 2.30, 95% confidence interval 1.43–3.71) of one of the cancer outcomes of interest. Receiving 4 or more CT scans at or before 6 years of age was associated with the highest risks of cancer, followed by ages 7–12 years and 13–18 years (p for trend < 0.001). Interpretation: Exposure to a single CT scan was not associated with increased risks of subsequent intracranial tumours, leukemia or lymphoma among children; however, we observed increased cancer risks among those with 4 or more CT scans, especially among younger children. Although these cancers are uncommon, the findings of this study underscore the importance of prudent use of CT in the pediatric population.
shown similar manifestations to those in adolescents in terms of the type of cancer, treatment response and prognosis. [15][16][17] Since head CT is the most common type of CT used for children, 3 and hematopoietic tissues are the most radiosensitive, 18 we sought to investigate whether childhood CT exposure (at or before age 18 yr) was associated with risks of intracranial tumours, leukemia, non-Hodgkin lymphoma and Hodgkin lymphoma among children, adolescents and young adults. We also sought to evaluate whether any incremental increases in risk of these cancers after pediatric CT would last from adolescence to early adulthood.
Study design and setting
We conducted a population-based, nested case-control study using the National Health Insurance (NHI) Research Database (NHIRD) in Taiwan, to evaluate the association of radiation expos ure (by total number of CT scans, cumulative radiation doses and cumulative number of CTs received at different ages) with subsequent risk of intracranial tumour, leukemia, non-Hodgkin lymphoma and Hodgkin lymphoma.
The population of Taiwan is about 23 million, and the NHIRD contains the health records for all NHI beneficiaries. 19 All newborns in Taiwan become beneficiaries of the single-payer NHI program at birth, along with foreign nationals who have established a registered domicile for at least 6 months and those with a regular employer. 20 We reported this study according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist for case-control studies. 21
Data source
The NHIRD contains deidentified health information for NHI beneficiaries, including demographic information, diagnoses and management of each medical visit. 19 Under the NHI program, patients receiving therapies for cancer are eligible to be part of the Registry for Catastrophic Illness (a subset of the NHIRD) to reduce copayments. 19,22 The NHI stipulates that health care providers attach reports written by board-certified specialists before reimbursement for procedure claims, including imaging; 20 therefore, the NHIRD records only procedures that are actually performed. The NHI employs multiple audits to ensure the appropriateness of claims for medical services, and its accuracy and completeness have been shown in several validation studies. 19,23
Cases, controls and matching
For our base cohort, we extracted data on all NHIRD beneficiaries who were younger than 25 years, from Jan. 1, 2000, to Dec. 31, 2013. To form our case cohort, we extracted cases of intracranial tumours (grades I-IV under World Health Organization classification), 24 leukemia, non-Hodgkin lymphomas or Hodgkin lymphomas that were newly diagnosed during the study period. We used codes from the International Classification of Diseases, Ninth Revision (ICD-9) to identify these cases from the Registry of Catastrophic Illness; a previous study reported that this approach had a positive predictive value of 94%. 25 We further used incidence density sampling to randomly select non-cancer controls from the base cohort. 26 Before matching, we excluded patients with any malignant disease diagnosed before the study period. We also excluded those with cancer-predisposing conditions that were potential confounders for our exposure of CT imaging. 27 For example, children with Down syndrome, who have an increased risk of leukemia, may receive CT scans for cardiac defects, and children with immunodeficiency may be imaged for recurrent infections. [28][29][30] We identified patients with cancer-predisposing conditions using at least 1 inpatient or 3 outpatient claims. 31,32 The relevant ICD-9 codes are listed in Appendix 1, Supplementary Table 1 and Table 2, available at www.cmaj.ca/lookup/doi/10.1503/cmaj.221303/tab-related-content. Finally, we excluded patients with missing data (i.e., unreported sex, date of birth or cohort entry date).
We pooled the remaining participants to form an at-risk population. In this at-risk population, when 1 cancer case was diagnosed during the study period (index case), we randomly assigned up to 10 participants free from cancer at the index diagnosis date as controls, to form a risk set with matched sex, date of birth (± 1 yr), and cohort entry date (± 1 yr). 26,33 We repeated this process until all cases had been matched. We excluded risk sets when fewer than 5 controls could be assigned to a case. We defined the exposure period as the period from cohort entry to 3 years (the lag period) before the index date (i.e., the index case's date of cancer diagnosis). The length of follow-up and the exposure period of the index case and the assigned controls were equivalent.
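A schematic sketch of the risk-set (incidence density) sampling described above: for each case, controls still free of cancer at the case's diagnosis date are matched on sex and approximate birth and entry dates. The field names, date encodings and matching tolerances are simplified stand-ins for the published design.

```python
import random

def build_risk_set(case, population, n_controls=10, seed=0):
    """Return up to n_controls matched controls, or None if fewer than 5."""
    eligible = [
        p for p in population
        if p["id"] != case["id"]
        # still at risk: no cancer diagnosed on or before the index date
        and (p["dx_year"] is None or p["dx_year"] > case["dx_year"])
        and p["sex"] == case["sex"]
        and abs(p["birth_year"] - case["birth_year"]) <= 1
        and abs(p["entry_year"] - case["entry_year"]) <= 1
    ]
    random.Random(seed).shuffle(eligible)
    controls = eligible[:n_controls]
    return controls if len(controls) >= 5 else None  # drop sparse risk sets

population = [{"id": i, "sex": "F", "birth_year": 2001, "entry_year": 2001,
               "dx_year": None} for i in range(30)]
case = {"id": "c1", "sex": "F", "birth_year": 2001, "entry_year": 2001,
        "dx_year": 2009}
print(len(build_risk_set(case, population)))  # 10
```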
Exposure
Exposure was CT-associated radiation received at or before the age of 18 years, quantified by the cumulative number of CT scans [7][8][9][10][11] and the organ-specific cumulative dose. 13 Given that some cancer-related symptoms would prompt CT, we considered exposure to include only CT scans performed 3 or more years before the index date. A descriptive analysis from the United Kingdom found that the mean interval between symptom onset and diagnosis of low-grade pediatric brain tumours was 28.2 weeks. 34 Other studies have shown that intracranial tumours associated with radiotherapy in childhood occurred 3-5 years after treatment. 35,36 An increased incidence of hematologic malignant diseases was observed 2 years after occupational radiation exposure among adults. 37 Therefore, we considered a 3-year lag period to be appropriate to minimize reverse causation bias and avoid improperly excluding cases with malignant diseases relating to radiation from CT scans.
Radiation causes direct damage to tissues and the organ-specific radiation dose varies in different CT types. 4,5,[38][39][40][41] Dosage calculation for CT exposure was done by consensus of a pediatric hematologist and oncologist (W.H.W.) and a radiologist at Changhua Christian Hospital. We used the cumulative radiation dose to the brain to assess the risk of intracranial tumour and the cumulative dose to red bone marrow for leukemia and lymphoma. We quantified effective radiation doses to the brain and red bone marrow based on sex, age, CT type and radiation dosimetry data, as reported by Gao and colleagues 40 (Appendix 1, Supplemental Table 3). Gao and colleagues 40 combined patient-specific CT parameters with various pediatric-sized phantoms (objects that mimic human tissue) to estimate the organ-absorbed radiation dose. As we were unable to find studies of radiation doses for CTs of extremities, we considered the radiation dose from these procedures to be 0. We divided the calculated cumulative organ-specific CT dosage into quintiles to assess the relationship between total radiation dosage and risk of cancers.
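Operationally, the dose assignment described here is a lookup-and-sum over a participant's scans followed by quintile binning among the exposed. The sketch below illustrates this with an invented dose table standing in for the age-, sex- and scan-type-specific dosimetry data of Gao and colleagues.

```python
import numpy as np

# Hypothetical brain dose per scan (mGy) by (ct_type, age_band); extremity
# CT is assigned 0, as in the study. Real values are age-, sex- and
# scan-specific.
DOSE_TABLE = {("head", "0-6"): 30.0, ("head", "7-12"): 25.0,
              ("chest", "0-6"): 2.0, ("extremity", "0-6"): 0.0}

def cumulative_dose(scans):
    return sum(DOSE_TABLE.get(scan, 0.0) for scan in scans)

participants = [
    [("head", "0-6")],
    [("head", "0-6"), ("head", "7-12")],
    [("chest", "0-6")],
    [("extremity", "0-6")],
    [("head", "7-12")] * 3,
]
doses = np.array([cumulative_dose(p) for p in participants])
exposed = doses[doses > 0]
edges = np.quantile(exposed, [0.2, 0.4, 0.6, 0.8])  # quintile cut-points
print(np.digitize(exposed, edges))                  # quintile index per exposed participant
```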
We determined the type of each CT scan according to the corresponding ICD-9 code, if not recorded in the NHIRD database. We also extracted data on other high-radiation procedures using claims for cardiac catheterizations and common nuclear medicine procedures. 42,43
Statistical analysis
We calculated the incidence rate ratio (IRR) and built conditional logistic regression models to assess the adjusted odds ratios (ORs) between groups with and without CT exposure. As usage of CT scans increased over our investigation period, 2 we adjusted the ORs for the calendar year of cohort entry as a linear variable. We also adjusted for family income (as a linear variable) and degree of urbanization of place of residence (as a categorical variable), as these variables are associated with leukemia among children. [44][45][46] We used the 1-sided Cochran-Armitage test to investigate the relationship between age at CT exposure (categorized as ≤ 6, 7-12 and 13-18 yr) and cancer risk. We applied the Fisher exact test for categorical variables.
We completed all statistical analyses using SAS statistical software version 9.4 (SAS Institute).
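Although the analyses were run in SAS, a rough open-source equivalent of the matched-set regression is available in Python via statsmodels, which provides conditional logistic regression in recent versions; the sketch below fits exposure adjusted for income on entirely synthetic data, grouping by risk set. All variable names and values are invented.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(1)
n_sets = 200
df = pd.DataFrame({
    "risk_set": np.repeat(np.arange(n_sets), 11),  # 1 case + 10 controls per set
    "case": np.tile([1] + [0] * 10, n_sets),
    "n_ct": rng.poisson(0.3, n_sets * 11),         # exposure: number of CT scans
    "income": rng.normal(0.0, 1.0, n_sets * 11),   # standardised covariate
})

model = ConditionalLogit(df["case"], df[["n_ct", "income"]],
                         groups=df["risk_set"])
print(np.exp(model.fit().params))  # adjusted odds ratios
```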
Sensitivity analyses
We conducted sensitivity analyses using lag periods of 1-4 years and including the total number of other high-radiation procedures as a covariable in the model. 42,43 We recomputed the radiation organ dose using data from Kim and colleagues, 41 derived from radiological technician surveys and from realistic, normal-stature, pediatric phantoms ranging in age and body size (Appendix 1, Supplementary Methods and Table 4). We also reassessed the relationship between cumulative organ-specific CT dosage (by quintiles) and risk of cancers using these data. We noted that participants in the top 2% of cumulative organ-specific doses had received 4 or more CT scans. Therefore, we conducted additional sensitivity analyses for this high-exposure group to evaluate the associated cancer risks, initially using doses calculated with data from Gao and colleagues 40 and then with data from Kim and colleagues. 41
Ethics approval
This study was approved by the Taipei Medical University - Joint Institutional Review Board (no. N201602055).
[Table 1 footnotes: †Degree of urbanization of townships in Taiwan was stratified into 7 levels according to official statistics. 46 In this study, we classified the townships into 4 categories, namely metropolitan (levels 1 and 2), city (levels 3 and 4), town (levels 5 and 6) and rural area (level 7). ‡Includes positron emission tomography, skeletal scintigraphy, lung perfusion scan, hepatobiliary scintigraphy, Meckel scan and gastric emptying tests.]
Results
In total, we initially identified 8055 patients with intracranial tumours, leukemia, non-Hodgkin lymphoma or Hodgkin lymphoma diagnosed during the study period. We excluded 153 patients with malignant diseases diagnosed before the study period, 72 with cancer-predisposing conditions and 19 with missing information (Figure 1). None of the patients with missing information had CT exposure. Demographic characteristics of the remaining 7807 patients and 78 057 matched controls are summarized in Table 1.
The proportion of patients exposed to other high-radiation procedures was similar in the case and control groups (Appendix 1, Supplemental Table 5).
Risk of cancer by number of scans
Compared with no exposure, exposure to a single pediatric CT scan did not increase subsequent cancer risk. The IRR for participants with one of the cancer outcomes of interest who received 4 or more CT scans was 2.30 (95% CI 1.43-3.71) relative to nonexposed participants (Table 3).
Cancer risk by cumulative radiation dose
Participants in the top quintile of cumulative brain radiation dose had a significantly higher risk of intracranial tumour compared with nonexposed participants (adjusted OR 3.61, 95% CI 1.93-6.75) (Table 4). We did not observe this association between the cumulative dose of radiation to red bone marrow and risk of hematologic malignancies.
Cancer risk by age of exposure
Participants who received 4 or more CT scans at or before the age of 6 years had the highest risk of cancer, followed by those aged 7-12 years and those aged 13-18 years (Figure 2). The correlation was statistically significant (p for trend < 0.001).
Sensitivity analyses
Using a 1- or 2-year lag period, we found a stronger association between risk of intracranial tumour and increasing CT exposure than in the primary analysis. This association also held with a 4-year lag period (Appendix 1, Supplemental Table 6).
We included the number of high-radiation procedures in our full model and the results remained consistent (Appendix 1, Supplemental Table 7). The re-estimated organ-absorbed radiation dose using data from Kim and colleagues 41 still showed an association between an elevated cumulative dose of radiation and risk of intracranial tumour and of leukemia. Increased cumulative radiation did not significantly increase the risk of non-Hodgkin lymphoma or Hodgkin lymphoma (Appendix 1, Supplemental Table 8).
Using data from Gao and colleagues 40 and from Kim and colleagues, 41 the highest 2% of cumulative organ-specific doses (> 98th to ≤ 100th percentile) was associated with elevated risk of intracranial tumour, leukemia and non-Hodgkin lymphoma, but not Hodgkin lymphoma (Table 4 and Appendix 1, Supplemental Table 8).
Interpretation
We found that receipt of a single CT scan at or before 18 years of age was not associated with increased risk of intracranial tumours, leukemia, non-Hodgkin lymphoma or Hodgkin lymphoma, to the age of 25 years. However, children who had received 4 or more CT scans at or before 18 years of age had a 2.3-fold increase in the incidence of these cancers compared with those without exposure. It is important to note that these neoplasms are uncommon among children, with an incidence of 15-40 cases per million in Taiwan over a 15-year period from 1996 to 2010. 47 The associated risk of cancer we observed was highest among children who had received 4 or more CT scans at or before 6 years of age, followed by those aged 7-12 years and adolescents aged 13-18 years (Figure 2), suggesting that younger children are more vulnerable to radiation than older children. However, this finding should be interpreted cautiously, as the risks may be overestimates because of residual confounding and the low number of participants in high-radiation groups. 48 Several studies have evaluated the association between pediatric neoplasms and CT scans among children of different ages, but the results were inconclusive. [7][8][9]12 These studies focused on the age of the first CT exposure rather than the cumulative number of CT scans at different ages.
The positive relationship between the cumulative organ-specific dose of radiation and the risk of intracranial tumour and leukemia that we observed has also been seen in other studies, 7,8 but the association between childhood CT radiation and lymphoma is still unclear. For Hodgkin lymphoma, the lack of association we observed across the cumulative number of CT scans was consistent with earlier reports, 14,49 except for 1 Australian study. 8 For non-Hodgkin lymphoma, Li and colleagues 12 did not find an association between 2 or more CT scans and non-Hodgkin lymphoma, in contrast to the association we observed for those with exposure to 4 or more CT scans. This discrepancy may have been owing to the different CT protocols used in various countries. 50 Further studies with a standard scanning protocol might produce more conclusive results. It should be noted that the association we observed between non-Hodgkin lymphoma and higher numbers of scans did not hold when the data were reanalyzed by increasing quintiles of radiation exposure to red bone marrow.
When human cells are exposed to low-dose radiation, small DNA breaks are generated and mended by intrinsic DNA repair processes. Therefore, low-level CT exposure appears not to be carcinogenic and animal models have suggested that it may even protect cells from mutagenesis. 4,5 However, when cumulative DNA damage stimulated by recurrent radiation exposure exceeds DNA repair abilities, the risk of carcinogenesis rises. 4,5 This mechanism explains the dose-response relationship between the cumulative organ-specific dose of radiation and the risk of cancers observed in our study and in previous studies. [7][8][9][10][11] The large population of this nationwide cohort is one of the major advantages of this study. In addition, we matched cases and controls on the calendar year of cohort entry and the duration of the exposure period to ensure equal opportunity for CT exposure, avoiding a time-window bias. 51 Furthermore, we excluded patients with cancer-predisposing conditions to avoid related confounding.
Our work reinforces the importance of radiation protection strategies, as addressed by the International Atomic Energy Agency. 52 Parents and pediatric patients should be well informed about risks and benefits before radiological procedures and encouraged to participate in decision-making around imaging. 53
Limitations
Given the observational study design, our results should not be interpreted as causal; rather, our study assesses the association between radiation exposure and subsequent risks of cancer. Despite our efforts to control for potential confounders, residual confounding may still be present. For example, data on some risk factors for cancer -such as smoking, alcohol consumption, obesity (i.e., body mass index) and exposure to pesticides or phthalate-containing medications -were lacking in the NHIRD database. [54][55][56] About 10% of patients with childhood or adult cancers have germline genomic alterations for which we had no data. [57][58][59] We were unable to eliminate the influence of high-radiation procedures completely; however, the effect should be minimal because the misclassification would have been nondifferential between cases and controls.
Although we adopted a longer lag period than those defined in most of the previous literature, [8][9][10][11][12] we are unable to eliminate the possibility of reverse causality. In addition, more than 10% of participants were born after 2005, which led to a short follow-up time for neoplasm development.
Because of a lack of local data on CT radiation dosimetry, errors may have occurred in our calculation of the organ-absorbed dose of radiation. Furthermore, the precision of CT claim data in the NHIRD has not yet been validated, despite previous publications adopting the same approach. 9,12 Similarly, the code definitions we used for each individual cancer type have not been validated in the NHIRD, although this approach has been shown to have a high positive predictive value for all cancers as a group in this database. 25 The algorithm we used to identify cancer-predisposing conditions has not been validated but is a common approach used by researchers extracting data from the NHIRD. 9 Importantly, the small numbers of participants in the high-radiation groups reduced our ability to detect potential effects of high radiation doses. The wide OR ranges found in these analyses indicate that these results should be interpreted with caution, as the OR may be inflated because of sparse data. 48 We excluded 19 (0.2%) cases with missing data, but the impact on our results was likely small because none of them had CT scans.
Conclusion
This study found that exposure to a single CT scan at or before 18 years of age was not associated with the development of subsequent intracranial tumours, leukemia, non-Hodgkin lymphoma and Hodgkin lymphoma during childhood, adolescence and early adulthood. Children who received multiple CT scans had higher risks of intracranial tumours, leukemia and non-Hodgkin lymphoma, but not Hodgkin lymphoma. Younger children appeared vulnerable to cancer risks associated with repeated CT exposure. Although these tumours are uncommon, these results indicate that judicious CT usage and radiation-reducing techniques should be advocated. | 2023-04-24T13:04:39.700Z | 2023-04-23T00:00:00.000 | {
"year": 2023,
"sha1": "fd254ba9df85a88932c82429abe5786e4c131b6e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Highwire",
"pdf_hash": "fd254ba9df85a88932c82429abe5786e4c131b6e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
209432860 | pes2o/s2orc | v3-fos-license | Anticholinergic therapy: A case-based approach
Anticholinergic medication remains integral in the management of women with Overactive Bladder syndrome although there is increasing evidence to support a link with the impairment of cognitive function. This editorial will review the available evidence and discuss the management of patients in order to minimise anticholinergic burden with a particular focus on the elderly.
Anticholinergic medication is commonly used within all branches of clinical practice and over 600 drugs are known to have anticholinergic properties [1], many of which are available as over-the-counter medications [2]. With ageing, the likelihood of exposure to anticholinergic medication increases and polypharmacy is a particular problem in the elderly, with up to 50% of patients prescribed at least one anticholinergic drug. In addition, many elderly women complaining of lower urinary tract symptoms may be prescribed anticholinergic medication for overactive bladder (OAB) syndrome, although the longer-term implications of this anticholinergic burden remain poorly understood by many clinicians in both primary and secondary care [3].
What is Anticholinergic Burden -And Does it Matter?
Anticholinergic burden is defined as the cumulative effect of taking one or more drugs that are capable of producing anticholinergic adverse effects; the load increases with the number of medications prescribed [1].
In a two-year longitudinal study of 13,004 participants over the age of 65 years, the use of drugs with an anticholinergic effect was associated with a 0.33-point decline in score on the Mini Mental State Examination (MMSE) (95%CI: 0.03-0.64; p = .03) and an increased risk in terms of 2-year mortality (OR = 1.68; 95%CI: 1.30-2.16; p < .001) [4]. These findings are supported by a systematic review of 46 studies including 60,944 participants that demonstrated a significant decline in cognitive ability with increasing anticholinergic load, in addition to an increasing, but non-significant, trend in terms of mortality [5].
The evidence would therefore suggest that anticholinergic drugs should be used with caution, particularly in the elderly, and further evidence is provided by a prospective cohort study of 3434 participants from North America investigating the association of total standardised daily dose (TSDD) of anticholinergics with the onset of dementia and Alzheimer's disease. Overall, a 10-year dose-response relationship was observed for both dementia and Alzheimer's disease (test for trend p < .001), with the greatest risk being associated with the highest anticholinergic dose (adjusted hazard ratio 1.54, 95%CI 1.21-1.96) [6]. The impact on cognitive function is thought to be due to the effects on the central nervous system (CNS) of the passage of anticholinergic drugs across the blood-brain barrier (BBB).
What is the Blood-Brain Barrier?
The BBB is made up of endothelial cells lining cerebral capillaries [7] and permeability increases with ageing due to epithelial cell shrinkage and the opening of tight junctions [8]. This may occur because of normal ageing, trauma, diabetes, multiple sclerosis, stroke, hypertension, Parkinson's disease and dementia [9]. Small molecules (<400 kDa) which have a neutral charge and which are lipophilic and hydrophobic are more likely to cross the BBB. In addition, the brain has an efflux transport system, permeability-glycoprotein (P-gp), that pumps molecules out of the CNS and therefore reduces levels within the brain. An anticholinergic drug, which is less likely to cross the BBB or is actively pumped out, is therefore less likely to cause CNS side effects [10].
Should Anticholinergic Drugs be used in the Elderly?
Whilst the use of antimuscarinic medication is not contraindicated in the elderly, it is important before treating OAB to be aware of comorbidities and also the risk of polypharmacy. Given that many medications may have an anticholinergic effect, it is important to be aware of this prior to initiating therapy in order to reduce the overall
anticholinergic load. This may be assessed clinically using an anticholinergic burden scale and there are now several validated measures available [10]. In general, the higher the score, the greater the anticholinergic burden, and therefore the greater the risk of cognitive impairment.
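As a minimal sketch of how a cumulative burden score of this kind is tallied from a medication list, consider the following; the drug-to-score map is invented for illustration and is not a validated scale, so clinical use should rely on a published instrument.

```python
# Hypothetical per-drug anticholinergic scores (0-3), for illustration only;
# consult a validated burden scale before any clinical use.
ACB_SCORES = {"oxybutynin": 3, "tolterodine": 3, "amitriptyline": 3,
              "ranitidine": 1, "paracetamol": 0}

def anticholinergic_burden(medications):
    """Cumulative burden = sum of per-drug scores; unknown drugs score 0."""
    return sum(ACB_SCORES.get(drug.lower(), 0) for drug in medications)

print(anticholinergic_burden(["Oxybutynin", "Ranitidine"]))  # 4
```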
How do We Choose the Right Drug?
Drugs such as oxybutynin, darifenacin, fesoterodine, solifenacin and tolterodine are tertiary amines, meaning they are more likely to cross the BBB, whilst trospium chloride, being a quaternary amine with lower lipophilicity, is less likely to do so [10].
In addition, trospium, fesoterodine and darifenacin are substrates for P-gp, meaning they are actively pumped out of the CNS, and brain/plasma ratios used to assess CNS levels of the drug have been shown to be highest for oxybutynin, intermediate for tolterodine and solifenacin and low for darifenacin, fesoterodine and trospium chloride [10].
Should the anticholinergic burden be high and non-modifiable, an alternative therapeutic approach may be helpful, using either transdermal oxybutynin or a β3 agonist such as mirabegron.
Conclusion
Emerging evidence would appear to suggest an association between the use of anticholinergic medication and the risk of long-term cognitive dysfunction, with the elderly representing a particularly high-risk population. Assessing anticholinergic burden prior to treating OAB should allow a tailored approach, using anticholinergic medications which are less likely to cross the BBB or an alternative approach such as a transdermal preparation or a β3 agonist. A better understanding of the relationship between anticholinergic medication and cognitive function should improve patient outcomes, particularly in the elderly.
Contributors
The two authors had equal input into the writing of this editorial.
Conflict of Interest
Dudley Robinson has undertaken consultancy for Astellas, Allergan, Ixaltis, Femeda and Ferring, and done research work for Ixaltis. George Araklitis has no conflict of interest to declare.
Funding
No funding from an external source supported the publication of this editorial.
Provenance and Peer Review
This editorial was commissioned and not externally peer reviewed. | 2019-11-22T00:54:36.060Z | 2019-11-19T00:00:00.000 | {
"year": 2019,
"sha1": "5dd8db4378ac4e654c47d4293d3102a224ca43c0",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.crwh.2019.e00164",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bb59f645c7b7be61908bb4b2ead5b7e4a973e1d3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
140367948 | pes2o/s2orc | v3-fos-license | A multiphase theory for spreading microbial swarms and films
Bacterial swarming and biofilm formation are collective multicellular phenomena through which diverse microbial species colonize and spread over water-permeable tissue. During both modes of surface translocation, fluid uptake and transport play a key role in shaping the overall morphology and spreading dynamics. Here we develop a generalized two-phase thin-film model that couples bacterial growth, extracellular matrix swelling, fluid flow, and nutrient transport to describe the expansion of both highly motile bacterial swarms, and sessile bacterial biofilms. We show that swarm expansion corresponds to steady-state solutions in a nutrient-rich, capillarity dominated regime. In contrast, biofilm colony growth is described by transient solutions associated with a nutrient-limited, extracellular polymer stress driven limit. We apply our unified framework to explain a range of recent experimental observations of steady and unsteady expansion of microbial swarms and biofilms. Our results demonstrate how the physics of flow and transport in slender geometries serve to constrain biological organization in microbial communities.
Introduction
Bacteria employ sophisticated surface translocation machinery to actively swarm, twitch, glide or slide over solid surfaces (Kearns, 2010;Mattick, 2002;Spormann, 1999;Hölscher and Kovács, 2017). Collectively, they also aggregate into multicellular communities on hydrated surfaces and exhibit large-scale coordinated movement (Verstraeten et al., 2008). Surface motility in macroscopic colonies on hydrated surfaces such as gels occurs primarily via two distinct modes: either by rapid flagella-mediated swarming expansion (Harshey, 1994;Harshey, 2003), or alternatively by slow biofilm expansion driven by extracellular polymer matrix production (Hall-Stoodley et al., 2004). In both cases, an interplay between mechanical constraints and biological organization sets limits on the overall colony morphology and expansion dynamics (Persat et al., 2015). The forces driving colony expansion are generated by non-homogeneous patterns of biological activity, originating from spatial localizations in cell growth and division (Hamouche et al., 2017), extracellular polymer matrix production (Seminara et al., 2012;Yan et al., 2017;Srinivasan et al., 2018), osmolyte secretion (Ping et al., 2014) and active stresses (Farrell et al., 2013;Delarue et al., 2016). Conversely, the formation of localized biologically active zones is tightly coupled to the heterogeneity of the environment, including the diffusion and transport of nutrients (Wang et al., 2017), accumulation of metabolic by-products (Liu et al., 2015;Gozzi et al., 2017) and presence of quorum sensing and signaling agents that regulate cell-differentiation and development.
Consequently, the dynamics of colony growth requires a mechanistic description that accounts for spatiotemporal inhomogeneities in biological activity, emergent forces, and flows that transport metabolic agents. In bacterial swarming, cells within the colony are actively propelled by the rotation of flagella in a thin layer of fluid extracted from the underlying soft tissue or gel (Kearns, 2010). In contrast, bacterial biofilms are surface aggregates of sessile bacteria embedded in a self-generated extracellular polymer matrix (Flemming and Wingender, 2010). Despite marked differences in regulatory genetic pathways, morphology and cell function (Verstraeten et al., 2008), physical characteristics such as the fluidization of the substrate/tissue, gradients in nutrient availability, the low-aspectratio geometry and the existence of multiple phases (i.e. cells, biopolymer and fluid) are common to both bacterial film and swarm colonies. Motivated by these similarities, we present a unified multiphase framework that couples mechanics, hydrodynamics and transport to explain the dynamics of bacterial swarm and film expansion.
eLife digest
Bacteria can grow and thrive in many different environments. Although we usually think of bacteria as single-celled organisms, they are not always solitary; they can also form groups containing large numbers of individuals. These aggregates work together as one super-colony, allowing the bacteria to feed and protect themselves more efficiently than they could as isolated cells.
These colonies move and grow in characteristic patterns as they respond to their environment. They can form swarms, like insects, or biofilms, which are thin, flat structures containing both cells and a film-like substance that the cells secrete. Availability of food and water influences the way colonies spread; however, since movement and growth are accompanied by mechanical forces, physical constraints are also important. These include the ability of the bacteria to change the water balance and their local mechanical environment, and the forces they create as they grow and move.
Previous research has used a variety of experimental and theoretical approaches to explain the dynamics of bacterial swarms and biofilms as separate phenomena. However, while they do differ biologically, they also share many physical characteristics. Srinivasan et al. wanted to exploit these similarities, and use them to predict the growth and shape of biofilms and bacterial swarms under different conditions. To do this, a unified mathematical model for the growth of both swarms and biofilms was created. The model accounted for various factors, such as the transport of nutrients into the colony, the movement of water between the colony and the surface on which it grew, and mechanical changes in the environment (e.g. swelling/softening). The theoretical results were then compared with results from experimental measurements of different bacterial aggregates grown on a soft, hydrated gel. For both swarms and biofilms, the model correctly predicted how fast the colony expanded overall, as well as the shape and location of actively growing regions.
Biofilms and other bacterial aggregates can cause diseases and increase inflammation in tissues, and also hinder industrial processes by damage to submerged surfaces, such as ships and waterpipes. The results described here may open up new approaches to restrict the spreading of bacterial aggregates by focusing on their physical constraints.
Experimental background
Bacterial swarms
Experiments on swarming colonies of E. coli (Darnton et al., 2010;Wu and Berg, 2012;Ping et al., 2014), S. enterica (Harshey and Matsuyama, 1994;Butler et al., 2010;Kalai Chelvam et al., 2014;Chen et al., 2007) and P. aeruginosa (Yang et al., 2017) reveal certain reproducible features associated with this modality of collective behavior. For example, E. coli swarms on agarose gels have a steady front shape that propagates radially at a uniform speed (Wu and Berg, 2012). In these swarms, measurements of the osmotic pressure profiles were found to be consistent with the active secretion of wetting agents in regions of high cell density that serve to fluidize the swarm by extracting water from the underlying tissue, thus allowing it to spread (Ping et al., 2014). These observations are not unique to E. coli; indeed our experiments with B. subtilis swarms, following Kearns and Losick (2003), indicate the same phenomena, that is, a steady-state front shape and speed, as shown in Figure 1A-1E. Close to the spreading front, we observe a multilayer region of width W = 195 μm ± 35 μm, indicated by the dashed white lines in Figure 1B and 1C. The multilayer region correlates with increased colony thickness and local bacterial density (Wu and Berg, 2012). At the edge, and in the interior, there is just a monolayer of cells. The swarm radial expansion velocity is constant at V = 2 mm/hr (see Figure 1D) and the swarm front maintains a steady-state profile during expansion (see Figure 1E). These observations raise a number of natural questions associated with the steady-state velocity and profile of the swarm colony. Given the observations of osmotic gradient-driven flow in the vicinity of the growing front (Ping et al., 2014), coupled with variations in the thickness and activity of bacteria, any framework to explain these requires a consideration of a dynamic bacterial population interacting with ambient fluid, necessitating a multiphase description.
Biofilms and other bacterial aggregates can cause diseases and increase inflammation in tissues, and also hinder industrial processes by damage to submerged surfaces, such as ships and waterpipes. The results described here may open up new approaches to restrict the spreading of bacterial aggregates by focusing on their physical constraints. speed, as shown in Figure 1A-1E. Close to the spreading front, we observe a multilayer region of width W = 195 mm ± 35 mm, indicated by the dashed white lines in Figure 1B and 1C. The multilayer region correlates with increased colony thickness and local bacterial density (Wu and Berg, 2012). At the edge, and in the interior, there is just a monolayer of cells. The swarm radial expansion velocity is constant at V = 2 mm/hr (see Figure 1D) and the swarm front maintains a steady-state profile during expansion (see Figure 1E). These observations raise a number of natural questions associated with the steady-state velocity and profile of the swarm colony. Given the observations of osmotic gradient-driven flow in the vicinity of the growing front (Ping et al., 2014), coupled with variations in the thickness and activity of bacteria, any framework to explain these requires a consideration of a dynamic bacterial population interacting with ambient fluid, necessitating a multiphase description.
Bacterial films
In contrast with bacterial swarms, the spreading of bacterial biofilms is facilitated by the extracellular polymeric substance (EPS) matrix that expands via osmotic fluid influx, for example in B. subtilis (Seminara et al., 2012) and V. cholerae (Yan et al., 2017) biofilm colonies. However, EPS synthesis is not homogeneous, and depends on the local nutrient concentration and environmental heterogeneities experienced by cells within the same biofilm (Vlamakis et al., 2008; Berk et al., 2012).
Recently, it was shown that EPS matrix production is localized to cells in the propagating front of B. subtilis biofilms (Srinivasan et al., 2018). In Figure 1F-1J, we show the results of repeating these experiments, but now focusing on a peripheral region of a biofilm colony using a B. subtilis strain (MTC832) that harbors the PtapA-cfp construct as a reporter for matrix production activity (Wang et al., 2016; Srinivasan et al., 2018). This highlights a ~1 mm zone of matrix production activity at the periphery, seen in Figure 1G and H; indeed plots of averaged matrix production reporter intensity exhibit a distinct peak at the periphery, as shown in Figure 1J. The dynamics of radial expansion shows the existence of an initial acceleration regime followed by a transition to a second regime characterized by a monotonic decrease in expansion velocity, as plotted in Figure 1I. This transient mode of biofilm spreading driven by EPS production and swelling is quite different from that of bacterial swarming, and suggests that we might need a fundamentally different way to address its origins. However, if we now consider the EPS matrix and fluid as distinct phases (Cogan and Keener, 2004; Cogan and Keener, 2005; Winstanley et al., 2011; Seminara et al., 2012), with the bacterial population being relatively small, we are again led to a multiphase description of the system, but with a different dominant balance relative to that seen in bacterial swarms, which we now turn to.
Theoretical framework
Recent theoretical approaches have considered specific physical factors such as the wettability of the biofilm (Trinschek et al., 2016; Trinschek et al., 2017), osmotic pressure in the EPS matrix (Winstanley et al., 2011; Seminara et al., 2012), or Marangoni stresses associated with the swarm fluid (Fauvart et al., 2012), as reviewed by Allen and Waclaw (2019). However, a description that captures the experimental observations described in Figure 1 remains lacking. Here, given the similarities between the bacterial swarming and biofilm systems, we provide a unified description of their spreading dynamics by recognizing that in both cases we need to consider large slender microbial colonies with H/R ≪ 1, where H is the colony thickness and R is the radius. This approximation results in a quasi-2-dimensional, two-phase model (assuming axisymmetry) of a colony that spreads along the x-axis, with a varying thickness, as shown in Figure 2. The subscript i = (1, 2) denotes the actively growing phase and passive phase, respectively. Within the swarm colonies, the highly motile cells constitute the actively growing phase whereas the fluid comprises the passive phase. Similarly, in biofilms, the EPS matrix constitutes the active phase, and the aqueous fluid is the passive phase. In both cases, colony growth occurs over a semipermeable soft gel substrate, as shown in Figure 2. We develop a continuum description of colony expansion in terms of variables which are coarse-grained, depth-integrated averages (Drew, 1983; Ishii and Hibiki, 2011). The averaged height of the colony interface is h(x, t), the volume fraction of the active phase (i.e., swarmer cells or polymer matrix) is φ₁ = φ(x, t) and the volume fraction of the fluid phase is φ₂ = 1 − φ(x, t). The depth-averaged nutrient concentration field within the substrate is c(x, t). As detailed in Appendix 2, combining mass and momentum balances yields the following generalized set of partial differential equations that governs the dynamics of both expanding swarms and biofilms:

(hφ)_t + (Q₁)_x = g₁(h, φ, c),   (1)
h_t + (Q₁ + Q₂)_x = V₀ + g₁(h, φ, c),   (2)
c_t − D c_xx = −g₂(h, φ, c),   (3)

where (·)_x = ∂(·)/∂x, etc. Here, Q₁(x) is the horizontal flux in the active phase, Q₂(x) is the horizontal flux in the fluid phase and V₀(x) is the osmotically driven net vertical fluid influx per unit length across the permeable substrate. Furthermore, g₁(h, φ, c) is the depth-integrated active phase growth rate within the bacterial colony, and g₂(h, φ, c) is the depth-integrated nutrient uptake rate. The dynamics of swarms and biofilms differ in the details of the expressions for Q₁, Q₂ and V₀, which are provided in Table 1. While a full derivation of each term is provided in Appendix 2, a direct comparison of the terms listed in Table 1 reveals a number of structural similarities and differences.

[Figure 2 caption] Swarm: nutrient rich, capillary pressure dominated; biofilm: nutrient limited, osmotic pressure dominated. In both cases, the total thickness of the microbial colony is h(x, t), the averaged nutrient concentration field is c(x, t), the volume fraction of the active phase is φ(x, t), the volume fraction of the fluid phase is 1 − φ(x, t), and the fluid influx across the agar/colony interface is denoted by V₀(x, t). As shown on the bottom panel, the active phase constitutes swarmer cells in the microbial swarm, and secreted EPS polymer matrix in the biofilm. The pressure in the fluid phase is p_f and the effective averaged pressure in the active phase is P. In the swarm cell phase, P = p_f, while the EPS phase effective pressure is P = p_f + φΨ(φ), where Ψ(φ) is the swelling pressure and is related to the Flory-Huggins osmotic polymer stress (see Equation A28). The momentum exchange between the two phases is denoted by M, which includes the sum of an interfacial drag term and an interphase term as detailed in Equation (A11) in the Appendix.

Table 1. Definitions of fluxes for swarms and films. Definitions of the active phase horizontal flux Q₁, the fluid phase horizontal flux Q₂, the active phase growth term g₁(h, φ, c), the osmotic influx term V₀(x), and the nutrient consumption term g₂(h, φ, c) for bacterial swarms and films in the generalized thin film evolution equations described by Equations (1-3). Here, η₁ is the biofilm viscosity, η₂ is the fluid viscosity, p_f is the fluid phase pressure, P is the effective pressure in the active phase, g₀ is the effective swarmer cell growth rate, G is the EPS production rate, Γ is the nutrient consumption rate per unit concentration, K is the nutrient half-velocity constant and d is the thickness of the substrate. For swarms, the active phase corresponds to the swarmer cell phase, and for biofilms, the active phase is the EPS polymer matrix.
Nutrient uptake
For both swarms and biofilms, the active phase (i.e., swarm cells or the EPS matrix) is generated within the bacterial colony by converting nutrient in the underlying substrate to biomass. The rate of change of nutrient concentration within the substrate depends on diffusion and nutrient uptake (see Equation (3) and Equation (A2)), and is derived in Appendix 2. When the substrate concentration is scaled by the initial concentration c₀, the nutrient depletion rate depends on Γ/c₀, the ratio of the specific nutrient consumption rate to the initial concentration. Bacterial swarming is typically associated with nutrient rich conditions, where c₀ ≫ Γ. As a result, the nutrient uptake term can be neglected in bacterial swarming as g₂ → 0, and the concentration c ≈ c₀ throughout swarm expansion. In contrast, biofilm growth occurs under nutrient limited conditions where Γ/c₀ ~ O(1), resulting in a corresponding uptake term shown in Table 1. Therefore, biofilm expansion is necessarily unsteady and driven by the dynamics of the transient nutrient field.
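To make the two regimes concrete, the sketch below integrates a one-dimensional version of the nutrient equation (3) with an explicit finite-difference scheme. The domain size, half-velocity constant K and colony footprint are illustrative assumptions rather than values from the experiments, and the Monod-type sink stands in for the g₂ of Table 1.

```python
import numpy as np

# Explicit finite-difference sketch of the substrate nutrient equation (3),
# c_t - D*c_xx = -g2, with a Monod-type uptake confined to the colony
# footprint. Domain, K and the footprint are assumptions for illustration.
D = 5e-10            # nutrient diffusivity in the gel (m^2/s), value quoted in the text
S = 1.0 / (25 * 60)  # effective uptake rate (1/s), S = 1/25 min^-1 from the text
K = 0.5              # half-velocity constant in units of c0 (assumed)

Lx, N = 5e-3, 200                 # domain length (m) and grid points (assumed)
x = np.linspace(0.0, Lx, N)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D              # below the explicit stability limit
c = np.ones(N)                    # scaled concentration, c/c0 = 1 initially
colony = x < 1e-3                 # colony occupies the first 1 mm (assumed)

def step(c):
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    lap[0] = 2.0 * (c[1] - c[0]) / dx**2      # no-flux boundaries
    lap[-1] = 2.0 * (c[-2] - c[-1]) / dx**2
    uptake = np.where(colony, S * c / (K + c), 0.0)   # g2: Monod sink under the colony
    return c + dt * (D * lap - uptake)

for _ in range(20000):            # ~3 hr of depletion; the swarm limit sets uptake ~ 0
    c = step(c)
print(f"c/c0 under colony: {c[colony].mean():.2f}; far field: {c[-1]:.2f}")
```

Setting the uptake to zero recovers the swarm limit c ≈ c₀, while the Monod sink depletes the nutrient under the colony, as in the biofilm regime.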
Growth
In both swarms and biofilms, the generation of the active phase drives colony expansion and is described by the growth term in Equation (1) using a logistic function g₁ = g₀ hφ(1 − hφ/(Hφ₀)) to model the active phase growth, where Hφ₀ is the limiting thickness and g₀ indicates a specific growth rate. In bacterial swarms, g₀ is independent of the nutrient concentration (as c ≈ c₀ during swarm expansion). Therefore, the spreading swarm films have a steady-state structure that exhibits a central spatial plateau about hφ = Hφ₀. In contrast, biofilm growth corresponds to a nutrient poor environment. We model the biofilm growth dependence on nutrient concentration via a minimal Michaelis-Menten form, g₀ = Gc/(K + c). Unlike in the nutrient rich conditions associated with swarms, this implies that biofilm growth is fundamentally transient; once the nutrient field at the interior is depleted as c → 0, the biofilm growth term in that region is arrested and g₁ → 0 independently of the vertical thickness (i.e., even if hφ ≠ Hφ₀). As a result, the biofilm does not form a central plateau and the dynamics of the biofilm rim is fixed by the dynamics of nutrient depletion. Eventually the effect of the finite size of the system (the petri dish) also becomes important, as it determines the overall dynamics of nutrient depletion.
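As a minimal illustration of these two growth laws, the snippet below encodes the logistic term with either a constant swarm rate or a Monod-limited biofilm rate; the value of K and the sample (h, φ, c) inputs are assumptions chosen only to exhibit the limiting behaviors.

```python
# Sketch of the depth-integrated growth term g1 = g0*h*phi*(1 - h*phi/(H*phi0)),
# with g0 constant for swarms (c ~ c0) and Monod-limited, g0 = G*c/(K + c),
# for biofilms. K and the sample (h, phi, c) values are assumed.
def g1(h, phi, c, H=1.0, phi0=0.5, g0_swarm=0.013, G=1.0 / 2400, K=0.5, mode="swarm"):
    g0 = g0_swarm if mode == "swarm" else G * c / (K + c)
    return g0 * h * phi * (1.0 - h * phi / (H * phi0))

print(g1(0.5, 0.5, 1.0, mode="swarm"))    # positive: growing toward the plateau h*phi = H*phi0
print(g1(1.0, 0.5, 1.0, mode="swarm"))    # zero: plateau reached, growth saturates
print(g1(0.5, 0.5, 0.0, mode="biofilm"))  # zero: arrested once c -> 0, even though h*phi < H*phi0
```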
Active and passive fluxes
The terms Q₁(x) and Q₂(x) that represent the horizontal flux of the active and passive phases are obtained by depth integrating the momentum balance equations in the thin-film lubrication limit, as described in Appendix 2 (c.f. Equations (A9)-(A11)). Within bacterial swarms, the passive aqueous fluid phase is modeled as a Newtonian liquid with viscosity η₂. The first term of Q₁(x) and Q₂(x) in Table 1 for swarms is generated by viscous and capillary stresses within the swarm fluid. The active swarmer cells are treated as inviscid and subjected to a hydrodynamic frictional drag force. Specifically, we assume that individual bacteria within the swarm are undergoing a random walk process with zero net displacement (upon averaging over sufficiently large time intervals). Even though there is no overall displacement, there is a net time-averaged drift that arises from the viscous Stokes drag interaction between the fluid and the active bacteria. The second term for Q₁(x) in Table 1 represents this time-averaged drift arising from the frictional drag interaction of the bacteria with the swarm fluid. In biofilms, the EPS matrix phase constitutes an active viscous hydrogel network with viscosity η₁, whereas the passive aqueous fluid phase is treated as a solvent with viscosity η₂. The dominant stress within the EPS phase in the biofilm model arises from a Flory-Huggins swelling pressure in the polymer chains (Cogan and Guy, 2010; Winstanley et al., 2011). In the fluid phase, the pressure p_f is set by the surface tension and curvature of the swarm fluid. Both these stresses contribute to the effective EPS phase pressure term P(x), as described in Appendix 2. Consequently, the first term for Q₁(x) and Q₂(x) in Table 1 for biofilms is related to the gradient of the effective pressure. Moreover, following Winstanley et al. (2011), we assume that the capillary and viscous stresses in the swarm fluid are negligible when compared to the frictional drag due to flow between water and the EPS polymer chain network in the biofilm model. Therefore, the second term for Q₂(x) in Table 1 represents a Darcy-type flow of the aqueous phase within the EPS matrix. The osmotic influx terms are considered separately in the following sections when describing the equations governing swarm and biofilm expansion.
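Although the precise Table 1 entries are derived in Appendix 2, the structure of the capillary-driven flux term can be illustrated with the standard lubrication form. The expression below is the textbook thin-film flux driven by capillary pressure, offered only as an indicative sketch, not the paper's exact closure; the front profile and window size are assumed.

```python
import numpy as np

# Indicative lubrication-theory sketch of a capillary-driven thin-film flux:
# with fluid pressure p = -gamma*h_xx, the depth-integrated flux is
# Q = -(h^3/(3*eta2)) * p_x = (gamma/(3*eta2)) * h^3 * h_xxx.
# This shows the generic structure of the first term in Q1/Q2 only.
gamma, eta2 = 1e-2, 1e-3       # surface tension (N/m), fluid viscosity (Pa s)

def capillary_flux(h, dx):
    hxxx = np.gradient(np.gradient(np.gradient(h, dx), dx), dx)  # third derivative h_xxx
    return (gamma / (3.0 * eta2)) * h**3 * hxxx

x = np.linspace(0.0, 1e-3, 401)                            # 1 mm window (assumed)
h = 0.5e-6 * 0.5 * (1.0 - np.tanh((x - 5e-4) / 5e-5))      # smoothed front, H ~ 0.5 um
q = capillary_flux(h, x[1] - x[0])
print(f"peak |Q| near the front: {np.abs(q).max():.2e} m^2/s")
```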
Bacterial swarms
Species of bacteria that swarm on hydrated surfaces are known to secrete distinct wetting agents. For example, B. subtilis secretes the lipopeptide surfactin, whereas P. aeruginosa secretes rhamnolipids as the wetting agent. Consequently, existing thin-film models of bacterial swarming assume that gradients in wetting agent activity generate Marangoni stresses that drive swarming motility (Fauvart et al., 2012; Trinschek et al., 2018). However, E. coli exhibits swarming behavior despite the absence of lipopeptides or other agents that act as surfactants. Moreover, recent experiments (Yang et al., 2017) demonstrate that P. aeruginosa swarms robustly even after exogenously eliminating gradients in surfactant concentration within the swarm fluid, ruling out Marangoni flows as the principal mechanism that drives swarming. Here, we take a different approach based on experiments that show that steady-state swarm colony expansion may be mediated by the secretion of agents that are osmotically active (Wu and Berg, 2012). As we will see, this leads to fluid being extracted from the substrate near the front, driven into the colony by capillary and viscous stresses, and eventually returned into the substrate in the interior of the swarm.
Within bacterial swarms, the dominant phases are the swarmer cell phase and the viscous aqueous phase, as shown in the bottom panel of Figure 2A. Fluid uptake from the substrate is regulated by the secretion of osmotically active agents by the swarmer cells (Ping et al., 2014). We represent the osmotic agent in the fluid by a concentration field c_osm(φ) that is proportional to the local volume fraction of cells, such that c_osm ∝ φ/(1 − φ), and gives rise to an osmotic pressure difference described by van't Hoff's law (van't Hoff, 1887), ΔΨ = Ψ₀ φ/(1 − φ) − Ψ_eq, that drives the fluid intake. Here, Ψ₀ is the osmotic pressure scale in the swarm fluid and Ψ_eq is the equilibrium osmotic pressure within the underlying tissue/gel substrate. Away from the front, in the interior of the swarm colony, there is no net fluid influx (Ping et al., 2014). Therefore, the equilibrium volume fraction of the swarm cells at the interior is φ₀ = Ψ_eq/(Ψ₀ + Ψ_eq). At the front itself, the difference in osmotic pressure results in a net Darcy-type fluid influx into the swarm, V₀(x) = Q₀ (φ/(1 − φ) − Ψ_eq/Ψ₀), where Q₀ is a velocity scale associated with fluid inflow from the substrate. Measurements of cell replication within swarms reveal that growth is restricted to swarmer cells at the periphery (Hamouche et al., 2017), which we model using a modified logistic growth term g₁(h, φ) as listed in Table 1 that localizes all cell division to the periphery. Here, Hφ₀ is the limiting thickness of the swarmer-cell phase at the interior, and g₀ is an effective specific growth rate, related to the true specific cell growth rate by a geometric factor (see discussion in Appendix 2).
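A minimal numerical sketch of this osmotic coupling is given below; the velocity scale Q₀ and the (scaled) pressure scales are placeholders, and the point is only that the influx changes sign about the equilibrium fraction φ₀.

```python
# Sketch of the van't Hoff influx for swarms: V0 = Q0*(phi/(1 - phi) - Psi_eq/Psi0),
# which vanishes at the interior equilibrium phi0 = Psi_eq/(Psi0 + Psi_eq).
# Q0 and the pressure scales are illustrative assumptions.
Q0 = 1e-8                         # influx velocity scale (m/s), assumed
Psi0, Psi_eq = 1.0, 1.0           # osmotic pressure scales (scaled units, assumed)
phi0 = Psi_eq / (Psi0 + Psi_eq)   # = 0.5, matching the interior value quoted below

def V0(phi):
    return Q0 * (phi / (1.0 - phi) - Psi_eq / Psi0)

print(V0(phi0))   # ~0: no net flux in the colony interior
print(V0(0.7))    # > 0: fluid extracted from the substrate near the dense front
print(V0(0.3))    # < 0: fluid returned to the substrate behind the front
```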
Parameters and scaling laws for bacterial swarms
To make sense of the scales in the problem, we use the dimensionless variables x̂ = x/L, ẑ = z/H and t̂ = t g₀, where H is the vertical length scale, L is a horizontal length scale and 1/g₀ is the time-scale associated with bacterial growth. The resultant horizontal velocity scale in the swarm colony is U = L g₀. Swarm expansion is fluid driven, and therefore balancing the viscous stresses generated in the swarm fluid with the curvature pressure due to surface tension (Levich and Landau, 1942) results in η₂U/H² ~ γH/L³, where η₂ is the viscosity and γ is the surface tension of the aqueous phase. As a result, the natural horizontal length scale is L = H Ca^(−1/3), where Ca = η₂U/γ is a capillary number associated with the microbial swarm fluid. Consequently, in our model the expansion speed of the swarm colony, V = dR/dt, is determined by the product of the horizontal length scale and an effective growth rate, and is predicted to scale as

V = C₁ H g₀ Ca^(−1/3),   (5)

whereas the swarm front itself is analogous to a capillary ridge in a thin fluid film, with a width W that is predicted to scale as

W = C₂ H Ca^(−1/3),   (6)

where C₁ and C₂ are dimensionless prefactors that require a detailed numerical calculation, and are discussed later. There are two important dimensionless parameters that describe swarm colony expansion. The first dimensionless parameter, α₁, relates the magnitude of capillary forces to the viscous drag acting on cells within the swarm and is defined as α₁ = (γH/L²)/(ζLU). Here, ζ = ζ_c/V_c, where ζ_c is the friction coefficient of a single swarmer cell and V_c is its volume. The second dimensionless parameter α₂ is defined as the ratio of a vertical fluid influx velocity Q₀ to a thickness velocity scale Hg₀ associated with bacterial growth, α₂ = Q₀/(Hg₀). The vertical length scale and equilibrium fluid volume fraction are estimated from the interior monolayer region as H = 0.5 μm and φ₀ = 0.5 (Wu and Berg, 2012). We assume values of η₂ = 10⁻³ Pa·s for the (aqueous) swarm fluid viscosity, and γ = 10⁻² N/m as its surface tension. The friction coefficient of a single cell is estimated from Stokes law as ζ_c = 3πη₂a, and its volume is approximated as V_c = πa³/6, where a = 1 μm is the cell diameter. Therefore, the friction coefficient is ζ = ζ_c/V_c ≈ 18η₂/a². As a result of substituting the values of known parameters above, the dimensionless parameter α₁ reduces to a constant geometric ratio, α₁ ≈ a²/(18H²) ≈ 2/9 ≈ 0.22.
The value of α₂ depends on the ratio Q₀/g₀. Direct experimental measurements of the vertical influx fluid velocity profile V₀(x) and the spatial profiles of cell division in swarm colonies remain scarce (Hamouche et al., 2017). In order to make progress in validating our model with real experimental data, the vertical fluid influx velocity scale is chosen as Q₀ = 10⁻² μm/s. Consequently, we have chosen g₀ as the only fitting parameter in our study, as detailed in Appendix 2. As an example, in the following section we will show that a choice of g₀ = 0.013 s⁻¹ in our model reproduces the experimental swarm expansion speed shown in Figure 1D, and leads to a horizontal length scale of L = H Ca^(−1/3) = 100 μm, a velocity scale of U = Lg₀ = 1.3 μm s⁻¹, Ca = 1.3 × 10⁻⁷ and a value of α₂ ≈ 1.5. A complete set of parameters for three experimental measurements of swarm expansion in B. subtilis, and two existing measurements in E. coli previously reported by Darnton et al. (2010) and Wu and Berg (2012), are summarized in Appendix 2.
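These estimates can be checked with a few lines of arithmetic. The sketch below solves the self-consistency condition U = Lg₀ with L = H Ca^(−1/3) and Ca = η₂U/γ, using only the parameter values stated above.

```python
# Back-of-envelope check of the swarm scales quoted in the text, using the
# stated parameter values (eta2, gamma, a, H, g0, Q0); no new data.
eta2  = 1e-3      # swarm fluid viscosity (Pa s)
gamma = 1e-2      # surface tension (N/m)
a     = 1e-6      # cell diameter (m)
H     = 0.5e-6    # vertical length scale (m)
g0    = 0.013     # effective growth rate (1/s), the single fitting parameter
Q0    = 1e-8      # vertical influx velocity scale (m/s), i.e. 1e-2 um/s

# Eliminating L and Ca from U = L*g0, L = H*Ca**(-1/3), Ca = eta2*U/gamma gives
# U = (H*g0)**(3/4) * (gamma/eta2)**(1/4).
U  = (H * g0) ** 0.75 * (gamma / eta2) ** 0.25
Ca = eta2 * U / gamma
L  = H * Ca ** (-1.0 / 3.0)
alpha1 = a**2 / (18 * H**2)     # reduces to a geometric ratio, ~0.22
alpha2 = Q0 / (H * g0)          # ratio of influx to growth velocity scales
print(f"U = {U*1e6:.2f} um/s, L = {L*1e6:.0f} um, Ca = {Ca:.1e}")
print(f"alpha1 = {alpha1:.2f}, alpha2 = {alpha2:.2f}")
```

Running this reproduces U ≈ 1.3 μm/s, L ≈ 100 μm, Ca ≈ 1.3 × 10⁻⁷, α₁ ≈ 0.22 and α₂ ≈ 1.5, consistent with the values quoted above.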
Steady state swarms
With these assumptions, and assuming that the nutrient concentration is constant, Equations (1-3) reduce to a pair of scaled evolution equations for the thickness ĥ and cell volume fraction φ in the swarming limit, Equations (7) and (8) (see Appendix 2). To complete the formulation of the problem, we need five boundary conditions, which are ĥ_x(0) = ĥ_x(R_P) = 0, ĥ_xxx(0) = ĥ_xxx(R_P) = 0, and φ(0) = φ₀, where R_P is the dimensionless size of the petri-dish and is set much larger than the colony size (R_P = 150) in our simulations. The initial condition corresponds to a circularly inoculated swarm colony, along with a thin pre-wetting film where no bacterial growth occurs (see Appendix 3-figure 1). Solving Equations (7) and (8) with the prescribed initial and boundary conditions numerically results in a steady state solution that advances at a constant speed (see Figure 3). In Figure 3, we plot a representative steady state solution in the frame of the advancing front for α₁ = 0.2, α₂ = 1.5 and φ₀ = 0.5. At the interior of the swarm, the average cell volume fraction is φ ≈ φ₀. Near the leading edge of the swarm, there is a region of enhanced thickness as indicated by the red line in Figure 3A. Immediately behind the leading edge, where the cell concentration is highest, so is the osmolyte concentration, leading to fluid extraction from the substrate, while further behind, fluid is reabsorbed, as indicated by the arrows in Figure 3A. In Figure 3B, we show the steady-state osmotic flow solution and see that it correlates well with the experimentally measured osmotic pressure profile by Ping et al. (2014) in E. coli swarms. As shown in Appendix 3-figure 3, our numerical horizontal flow profiles are also consistent with the scaled radial fluid velocity measurements of Wu and Berg (2012). In Figure 3C, we see that the radial expansion velocity scales as Hg₀, shows quantitative agreement with experiments, and is insensitive to the fluid influx velocity scale when Q₀ ≫ g₀H. Note that our model uses a coarse-graining procedure and represents the swarm thickness field using a continuum approximation. As a consequence, we are not able to quantitatively capture the decreasing height of the swarm (i.e., of the order of a few cells) that is experimentally observed over hundreds of microns towards the interior (see Figure 1E).
Furthermore, we corroborate our scaling law in Equation (5) by fitting our model to five independent experimental measurements of swarm expansion velocities for different systems, as shown in Figure 3D. These include measurements in B. subtilis swarms in this work, and in E. coli swarms previously reported by Darnton et al. (2010) and Wu and Berg (2012), which are summarized in Table A2 in Appendix 2. The expansion velocity follows the −1/3 exponent predicted by Equation (5) for Ca varying from ~5 × 10⁻⁸ to 10⁻⁶. For each experiment, we have fit our theoretical model using the effective growth rate g₀ as the fitting parameter and find that the numerical prefactor C₁ ≈ 0.42. However, as shown in Appendix 3-figure 5, the measured multilayer width does not follow the predicted scaling. From an experimental point of view, the width of the multilayer region is not sharply defined in Figure 1E, and will depend on the choice of threshold. Nevertheless, our multiphase model is able to describe the zone of cellular and osmolyte activity near the leading edge that drives the advancing swarm front. This leads to a picture wherein the combination of a fluid-filled substrate and swarm front work together like a localized active circulatory system, quantitatively rationalizing the experimental observations of Wu and Berg (2012) and Ping et al. (2014).

[Figure 3 caption, recovered fragments] On the right axis are experimental measurements of the steady-state osmotic pressure within an expanding E. coli swarm (filled circles), reproduced from Ping et al. (2014), with the baseline reference value shifted to zero, and with distances normalized by L = 50 μm. (C) Predicted steady-state radial colony expansion speeds within the swarm for values of α₂ = Q₀/(Hg₀) = 1, 10 and 100 respectively. The data points are expansion speeds in B. subtilis swarms measured over 20 min, and scaled using U = 1.3 μm s⁻¹ and g₀ = 0.013 s⁻¹. (D) Comparison between the swarm expansion velocities dR/dt measured for five separate colonies (see Appendix 2) and the estimated capillary number. For each experiment, g₀ was obtained by fitting the steady state solution of Equations (7) and (8).
Bacterial films
In bacterial biofilms, the EPS matrix secreted by bacteria constitutes the active phase and undergoes swelling, drawing in the fluid that acts as the passive phase. As shown in Figure 2B, the EPS is initially synthesized in a partially swollen, out-of-equilibrium state at the periphery. The polymer chains gradually relax to an equilibrium fully-swollen configuration by the generation of a swelling pressure Ψ within the biofilm, and via fluid uptake V₀(x) from the substrate. As discussed in Appendix 2, the swelling pressure is Ψ(φ) = Π(φ)/φ, where Π(φ) = Π₀φ³ is the osmotic pressure in the EPS matrix using the Flory-Huggins model for a polymer network in a θ-solvent (Rubinstein and Colby, 2003), where Π₀ = kT/b³ is the osmotic pressure scale, kT is the product of the Boltzmann constant with the temperature and b is the approximate size of the monomer unit. The net effective pressure term driving biofilm expansion is P = Π₀φ³ + p_f, where p_f is the capillary pressure, so that the water influx across the substrate is V₀(x) = Q₀(φ³ − φ₀³), where Q₀ is the influx fluid velocity scale, φ₀ = (Ψ_eq/Π₀)^(1/3) is the fully-swollen EPS polymer volume fraction and Ψ_eq is the osmotic pressure of the substrate over which the colony grows. Finally, nutrient uptake is modeled by a Monod growth law, while the synthesis of the EPS matrix is modeled by a logistic term as listed in Table 1.
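A compact sketch of this closure is given below. The influx form V₀ = Q₀(φ³ − φ₀³), which vanishes at the fully swollen fraction φ₀ and follows from the Π₀φ³ osmotic pressure above, is stated here as an assumption, and Q₀ is a placeholder scale.

```python
# Sketch of the biofilm closure: Flory-Huggins osmotic pressure Pi = Pi0*phi**3,
# swelling pressure Psi = Pi/phi, and a Darcy-type influx that shuts off at
# phi0 = (Psi_eq/Pi0)**(1/3). The influx form and Q0 are assumptions.
Pi0 = 2100.0                # osmotic pressure scale (Pa), value quoted in the next section
phi0 = 0.04                 # fully swollen EPS volume fraction, value quoted in the next section
Psi_eq = Pi0 * phi0**3      # implied substrate osmotic pressure (Pa)
Q0 = 1e-7                   # influx velocity scale (m/s), assumed

def swelling_pressure(phi):
    return Pi0 * phi**2     # Psi(phi) = Pi(phi)/phi

def V0(phi):
    return Q0 * (phi**3 - phi0**3)

print(f"Psi_eq ~ {Psi_eq:.1e} Pa")
print(f"V0(0.2)  = {V0(0.2):.2e} m/s (fresh, under-swollen EPS draws in fluid)")
print(f"V0(phi0) = {V0(phi0):.1e} m/s (fully swollen: influx stops)")
```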
Parameters and scaling estimates for bacterial films
We again use the dimensionless variables x̂ = x/L, ẑ = z/H and t̂ = tG, where H is now the maximum biofilm thickness, G is the rate of EPS production, and c₀ is the initial nutrient concentration in the substrate. As biofilm growth is nutrient limited (Liu et al., 2015), the dimensionless length scale is determined from Equation (3) and is expected to scale as L = (D/G)^(1/2), and the corresponding velocity scale is U = (DG)^(1/2). Using these scales, we can define the ratio of osmotic stresses relative to viscous stress in the EPS phase in terms of the dimensionless parameter β₁ = (Π₀/L)/(η₁U/H²), the ratio of capillary stresses relative to the EPS viscous stress in terms of another parameter, β₂ = (γH/L³)/(η₁U/H²), the ratio of capillary stress to the interfacial drag in the aqueous fluid phase, β₃ = (γH/L²)/(ζUL), and the ratio of the fluid influx velocity to the EPS swelling velocity, β₄ = Q₀/(HG). As shown in Appendix 2, the effective nutrient uptake rate is S = (ΓHφ₀)/(c₀d), where Γ is the nutrient consumption rate per unit concentration and d is the substrate thickness. Consequently, we define β₅ = S/G as the ratio of the effective nutrient uptake rate to the EPS production rate.
We set the EPS production time-scale as G = 1/40 min⁻¹, resulting in a horizontal length scale of L = (D/G)^(1/2) = 1.1 mm and velocity scale U = (DG)^(1/2) = 0.5 μm/s. The effective nutrient uptake rate is estimated as S = 1/25 min⁻¹, where we have taken d = 7 mm as the substrate thickness (Srinivasan et al., 2018), Γ = 10⁻² mM/s as the nutrient uptake rate (Zhang et al., 2010), and c₀ = 35 mM as the initial concentration of the carbon source. The friction coefficient is ζ ~ η₂/ξ², where the EPS mesh size is ξ = 50 nm (Yan et al., 2017). Using measured estimates of the biofilm viscosity η₁ = 10⁵ Pa·s (Stoodley et al., 2002; Lau et al., 2009), fluid phase viscosity η₂ = 10⁻³ Pa·s, surface tension γ = 10⁻² N/m, an osmotic scale Π₀ = 2100 Pa (Yan et al., 2017) (i.e., φ₀ = 0.04), biofilm thickness H = 400 μm, and nutrient diffusivity in agarose gels of D = 5 × 10⁻¹⁰ m²/s (Zhang et al., 2010) implies that β₁ ≈ 7, β₂ ≈ 0.01, β₃ ≈ 0.02, β₄ ≈ 1 and β₅ ≈ 2. Consequently, within the context of our model, it is evident that osmotic stresses, fluid influx and biomass growth are the dominant forces that drive colony expansion. Moreover, in the nutrient limited regime, our model predicts the transient maximum biofilm expansion velocity to scale as

V_max = C₃ (DG)^(1/2),   (10)

whereas the width of the propagating fronts of EPS production experimentally observed by Srinivasan et al. (2018) is predicted to scale according to

W = C₄ (D/G)^(1/2),   (11)

where C₃ and C₄ are once again dimensionless prefactors that require a detailed numerical calculation, as discussed later.
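The same kind of arithmetic check applies to the biofilm groups; the script below reproduces the quoted values from the stated parameters (β₄ is omitted because the influx scale Q₀ for the biofilm is not quoted above).

```python
# Check of the dimensionless groups quoted for biofilms, from the stated
# parameter values; no new data are introduced.
import math

D, G  = 5e-10, 1.0 / 2400   # nutrient diffusivity (m^2/s), EPS production rate (1/s)
S     = 1.0 / 1500          # effective uptake rate (1/s), i.e. 1/25 min^-1
eta1  = 1e5                 # biofilm viscosity (Pa s)
eta2  = 1e-3                # fluid viscosity (Pa s)
gamma = 1e-2                # surface tension (N/m)
Pi0   = 2100.0              # osmotic pressure scale (Pa)
H     = 400e-6              # biofilm thickness (m)
xi    = 50e-9               # EPS mesh size (m)

L = math.sqrt(D / G)        # nutrient-limited length scale, ~1.1 mm
U = math.sqrt(D * G)        # velocity scale, ~0.5 um/s
zeta = eta2 / xi**2         # interphase friction coefficient
beta1 = (Pi0 / L) / (eta1 * U / H**2)
beta2 = (gamma * H / L**3) / (eta1 * U / H**2)
beta3 = (gamma * H / L**2) / (zeta * U * L)
beta5 = S / G
print(f"L = {L*1e3:.1f} mm, U = {U*1e6:.2f} um/s")
print(f"beta1 ~ {beta1:.1f}, beta2 ~ {beta2:.3f}, beta3 ~ {beta3:.3f}, beta5 ~ {beta5:.1f}")
```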
Unlike in the case of swarms, the solutions to Equations (12)-(14) are transient, and exhibit two distinct expansion regimes: an initial acceleration phase until t̂_c ≈ 5, followed by a decelerating phase beyond. For t̂ < t̂_c, colony expansion arises as the microbes rapidly consume locally available nutrient at the interior and synthesize fresh EPS matrix, generating spatial gradients in nutrient availability (see Figure 4A). In Figure 4B, we show that the newly synthesized EPS generates a large osmotic pressure differential between the biofilm and the substrate, and osmotic fluid influx gradually relaxes the biofilm matrix to a swollen configuration. For t̂ > t̂_c, the localized zone of EPS production near the film front propagates with a fixed shape as shown in Figure 4C, consistent with the observed spatial localization in tapA gene activity (see Figure 1J and Srinivasan et al., 2018). Moreover, the radial colony expansion profile in Figure 4D is also consistent with the non-monotonic front speed observed experimentally (Srinivasan et al., 2018). For the specific experimental conditions we consider, our detailed theory allows us to estimate the prefactors in the scaling laws Equations (10)-(11) as C₃ ≈ 0.2 and C₄ ≈ 1.8.
These results are hallmarks of a transition from a bulk to an edge biofilm growth mode, triggered by nutrient limitation (Pirt, 1967). In the deceleration regime, diffusive transport of nutrients from a region external to the colony continues to sustain EPS production at the biofilm periphery, analogous to Stefan-like problems in solidification. Our generalized multiphase model is thus able to quantitatively rationalize the expansion curves, transition time and localized biological activity observed experimentally, and demonstrates that nutrient availability and diffusive transport govern the dynamics of Bacillus subtilis macrocolonies grown on agar.
Discussion
Analysis of collective microbial expansion in thin film geometries often prioritizes biological mechanisms, such as genetic regulation, developmental programs and cellular signaling/competition, over the role of the heterogeneous physical micro-environments. Here we have presented a multiphase theory that quantitatively describes the expansion dynamics of microbial swarms and biofilms and considers variations in the colony thickness, an aspect of colony expansion that has often been overlooked in many theories (Korolev et al., 2012; Ghosh et al., 2015; Wang et al., 2017). The resulting unified description of both steady-state swarm and transient biofilm spreading leads to simple estimates and scaling laws for the colony expansion rate that are validated via comparison with experimental measurements for different systems. In swarms, exudation of water from the permeable substrate via bacterial osmolyte secretion facilitates steady state colony expansion. Numerical solutions of our model demonstrate that the shape of the swarm front is determined by capillarity, and its expansion speed by cell division and growth, leading to scaling laws validated by comparison with previous experiments. In contrast, transient biofilm macrocolony expansion on agar is driven by osmotic polymer stresses generated via EPS matrix production in a spatially localized zone at the periphery. Nutrient transport and depletion lead to the formation of these heterogeneous zones, and result in two regimes of biofilm expansion.
However, our depth-integrated theory also has certain limitations. For example, we are unable to capture discrete thickness variations of the order of a few cells, which might require an agent-based approach. For bacterial swarms, our model is unable to quantitatively account for the region of enhanced thickness (i.e., the multilayer region in Figure 1C and E), likely because the multilayer width is difficult to experimentally ascertain, owing to the large tail distribution seen in the mean intensity trace in Figure 1E, and the arbitrariness in the choice of threshold in Appendix 3-figure 5. Similarly, in the context of biofilm colony expansion, our model does not account for sliding and frictional contact between the cells/EPS matrix and the substrate (Farrell et al., 2013). More generally, our mean-field picture neglects fluctuation-driven effects during colony expansion, such as the formation of multicellular raft structures (Kearns, 2010) and synchronized long-range interactions (Chen et al., 2017). Natural next steps of our approach include (i) adding three-dimensional effects by allowing for spatial variations in the mechanical stresses, flows and nutrient fields in the vertical direction, (ii) accounting for orientational order in the bacterial swarms and films, and (iii) accounting for the effect of interfacial tension on the stability of the growing swarm/biofilm-fluid interface, especially in the context of fingering instabilities in microbial colonies (Trinschek et al., 2018).
A rigorous multiphase approach may also be relevant in revisiting pattern formation phenomena in microbial colony expansion (Matsushita et al., 1999), which have so far been addressed primarily using various non-linear diffusion models (Golding et al., 1998; Allen and Waclaw, 2019) that ignore the third dimension. Finally, from an experimental and theoretical perspective, our results naturally raise the question of controlling biofilm and swarm expansion by manipulating water and nutrient availability, complementing the better studied approaches of manipulating colonies by the genetic regulation of EPS production, cell division, and chemical signaling in microbial colonies.

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Experiment

Strains
In this study, we used two B. subtilis strains, MTC822 and MTC832, that were both previously constructed from a wild-type NCIB 3610 B. subtilis strain using a standard transformation protocol (Sinha, 2013). The MTC822 strain was used for fluorescence visualization in the swarming experiments, where the mkate2 red fluorescent protein reports on the activity of the constitutive hyperspank promoter via the amyE::Phyperspank-mkate2 construct. The MTC832 strain was used in the biofilm experiments in order to visualize localized matrix production activity and harbors the amyE::PtapA-cfp construct. In the MTC832 strain, the cfp cyan fluorescent protein reports on the activity of the tapA gene that is associated with exopolysaccharide production activity.
Materials and methods
Swarm plates were prepared using 0.5 wt% agarose gel (A1296, Sigma) infused with 25 ml of Luria-Bertani (Miller) medium (i.e. 10 g/L tryptone, 10 g/L NaCl, 5 g/L yeast extract; Sigma) and 25 μg/ml chloramphenicol. Biofilm plates were prepared using 1.5 wt% agarose gel (A1296, Sigma) infused with the standard MSgg biofilm-inducing growth medium (Branda et al., 2001) (i.e. 50 μM MnCl₂, 5 mM KH₂PO₄, 1 μM ZnCl₂, 50 μM FeCl₃, 2 mM MgCl₂, 700 μM CaCl₂, 50 μg/ml threonine, 50 μg/ml tryptophan, 50 μg/ml phenylalanine, 0.5 wt% glutamate, 0.5 wt% glycerol, 2 μM thiamine and 100 mM MOPS (pH 7)) and 50 μg/ml spectinomycin. Note that all plates underwent an identical drying protocol prior to use. Freshly poured plates were initially dried with the lid open under a laminar flow hood for 15 min. Subsequently, the lid was closed and the dish was cooled at 25 °C overnight for a period of 10 hr. All strains were initially grown in fresh Luria-Bertani (Miller) broth medium (Sigma) until mid-exponential phase in a shaker/incubator at 37 °C. The cultures were diluted to OD₆₅₀ = 0.1 and a ~1 μl drop was deposited onto the corresponding swarm (for MTC822) or biofilm (for MTC832) plates. The petri plates were transferred to a 30 °C incubator chamber during growth. Fluorescence imaging was performed using a Zeiss Axiozoom.V16 microscope with a PlanNeoFluar Z 1.0x objective (NA 0.25), with a Zeiss 63 HE filter to image the red mkate2 protein, and a Zeiss 47 HE filter to image the cyan cfp protein. For swarm profile measurements, images of the advancing swarm front were captured every 10 s over a period of 10 min. For biofilm colonies, expansion velocities were measured every 10 min over a period of 72 hr following the protocol described in Srinivasan et al. (2018).
g₁(h, φ, c) = g₀ φ (1 − hφ/(Hφ₀)),

where H is the swarm colony thickness at the interior, and g₀ is an effective growth rate that accounts for spatial localization in cell division. More specifically, if Λ(x) describes the spatial profile of cell growth within a swarm colony, then g₀ = ∫₀ᴿ Λ(x) dx / ∫₀ᴿ φ[1 − hφ/(Hφ₀)] dx, where R is the radius of the swarm colony. Measurements of the spatial distribution of cell growth rates within the colony during swarming remain lacking. Consequently, in our model, we determine the value of g₀ by fitting it to the experimental data. We use a nonlinear least-squares solver to match steady-state expansion speeds obtained from solving Equations (A21)-(A22) to the experimental data for steady-state B. subtilis swarms (see Figure 1D in main text).
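A minimal sketch of this fitting step is given below; to keep it self-contained, the steady-state solve of Equations (A21)-(A22) is replaced by the closed-form scaling V = C₁Hg₀Ca^(−1/3) with C₁ = 0.42 from the main text, and the target speed of 2 mm/hr is an assumed datum.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal sketch of the g0 fitting step, assuming a closed-form stand-in for
# the steady-state solve: V(g0) = C1*H*g0*Ca**(-1/3) with C1 = 0.42.
eta2, gamma, H, C1 = 1e-3, 1e-2, 0.5e-6, 0.42

def model_speed(g0):
    # U solves U = L*g0 with L = H*Ca**(-1/3) and Ca = eta2*U/gamma
    U = (H * g0) ** 0.75 * (gamma / eta2) ** 0.25
    Ca = eta2 * U / gamma
    return C1 * H * g0 * Ca ** (-1.0 / 3.0)

V_measured = 2e-3 / 3600.0   # 2 mm/hr in m/s (assumed datum)

fit = least_squares(lambda p: np.array([model_speed(p[0]) - V_measured]),
                    x0=[0.01], bounds=(1e-4, 1.0))
print(f"fitted g0 = {fit.x[0]:.3e} 1/s")   # ~1.3e-2, matching the quoted value
```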
Swarm equations
Combining Equations (A5)-(A8) with Equations (A17)-(A19) results in the dimensional thickness-averaged equations for swarm colonies. The governing equations are solved numerically for the biofilm and swarm simulations respectively: we use quintic Lagrange basis functions with element sizes below 0.04 and use the general-form PDE solver. To handle the moving contact line, we introduce a precursor film of thickness h_p/H = 0.0125, where H is the vertical length scale (see Appendix 3-figure 1). We follow the regularization described in Trinschek et al. (2016) to introduce a minimum threshold for growth and a stable fixed point in the precursor film. Specifically, the growth terms in (A2), (A20) and (A32) are multiplied by a factor F = [1 − exp(5(ĥ_p − ĥφ))][1 − ĥ_p/(ĥφ)], where F ≈ 1 everywhere except near the precursor film where F = 0.
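Written out, the regularization factor takes the following form; h_p is the quoted precursor thickness and the sample inputs simply exhibit the stable fixed point.

```python
import numpy as np

# Sketch of the growth regularization described above: the factor F
# suppresses growth in the precursor film (h*phi -> h_p) and approaches 1
# in the bulk of the colony. All quantities are dimensionless.
h_p = 0.0125   # precursor film thickness h_p/H, as quoted

def F(hphi):
    hphi = np.asarray(hphi, dtype=float)
    return (1.0 - np.exp(5.0 * (h_p - hphi))) * (1.0 - h_p / hphi)

print(F(h_p))          # 0.0: stable fixed point in the precursor film
print(F([0.05, 0.5]))  # -> 1 away from the contact line
```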
Experimental data
Appendix 2-table 2. Summary of the comparison between the experimental data and the model. The experimentally measured quantities are the colony expansion speed V = dR/dt and the multilayer region thickness W. The value of α₂ is determined by fitting Equations (7)-(8) to the expansion velocity, leading to estimates of the effective growth rate g₀, the horizontal length scale L and the capillary number Ca.
"year": 2019,
"sha1": "c70e8206bbf98056cf5562a48ce79445bf549a37",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7554/elife.42697",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c70e8206bbf98056cf5562a48ce79445bf549a37",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
BP180 Is Critical in the Autoimmunity of Bullous Pemphigoid
Bullous pemphigoid (BP) is by far the most common autoimmune blistering dermatosis that mainly occurs in the elderly. BP180 is a transmembrane glycoprotein, which is highly immunodominant in BP. The structure and location of BP180 indicate that it is a significant autoantigen and plays a key role in blister formation. Autoantibodies from BP patients react with BP180, which leads to its degradation; this has been regarded as the central event in BP pathogenesis. The consequent blister formation involves the activation of complement-dependent or -independent signals, as well as inflammatory pathways induced by the BP180/anti-BP180 autoantibody interaction. As a multi-epitope molecule, BP180 can cause dermal-epidermal separation via the binding of each epitope by specific immunoglobulins, which also facilitates blister formation. In addition, some inflammatory factors can directly deplete BP180, thereby leading to fragility of the dermal-epidermal junction and blister formation. This review summarizes recent investigations on the role of BP180 in BP pathogenesis to identify potential targets for the treatment of patients with BP.
Keywords: BP180, bullous pemphigoid, autoantibody, dermal-epidermal junction, cytokine

INTRODUCTION

Bullous pemphigoid (BP), by far the most common autoimmune blistering disease, is induced by autoantibodies against the structural components of the dermal-epidermal junction (DEJ) (1). In most cases, the disease develops cryptically (2). The suggested causes of BP include silicosis (3), psoralen and ultraviolet A therapy (4), infections (5), physical or chemical insults (6)(7)(8), certain fruits (9), and medications (10,11). However, the role of these factors in the pathogenesis of BP remains to be established. BP mainly affects the elderly of both sexes, typically those 70 years old and above, but it can also affect infants, children, and adolescents (1,12). This disease mainly involves the skin but occasionally the eyes, mouth, and genitals (1,2). The cutaneous manifestations of BP are polymorphic and can be classified into three groups, namely classical BP, non-bullous cutaneous pemphigoid, and various rare variants (13,14). Classical BP is clinically characterized by large (1-3 cm), tense, serous, or hemorrhagic blisters that appear on erythematous, urticarial, or eczematous lesions and even on apparently normal skin (1,13). The biopsied lesions exhibit subepidermal splitting or blisters, which is the hallmark of BP, with dense inflammatory infiltration of eosinophils, basophils, neutrophils, lymphocytes, and mast cells in the dermis (1). Immunofluorescence analysis is necessary for the diagnosis of BP (15). Direct immunofluorescence is the most sensitive method for BP diagnosis, in which the lesion shows linear deposition of immunoglobulin G (IgG), C3 complement, and even IgE at the DEJ (16)(17)(18). Indirect immunofluorescence using the patient's sera and a substrate, especially salt-split skin, reveals a linear deposition of IgG along the roof of the artificial split (18).
Keywords: BP180, bullous pemphigoid, autoantibody, dermal-epidermal junction, cytokine iNTRODUCTiON Bullous pemphigoid (BP), by far the most common autoimmune blistering disease, is induced by autoantibodies against the structural components of the dermal-epidermal junction (DEJ) (1). In most cases, the disease develops cryptically (2). The suggested causes of BP include silicosis (3), psoralen and ultraviolet A therapy (4), infections (5), physical or chemical insults (6)(7)(8), certain fruits (9), and medications (10,11). However, the validation of these factors in the pathogenesis of BP remains be established. BP mainly affects the older age group of both sexes, or those 70 years old and above, but it can also affect infants, children, and adolescents (1,12). This disease mainly involves the skin but occasionally the eyes, mouth, and genitals (1,2). The cutaneous manifestations of BP are polymorphic and can be classified into three groups, namely classical BP, non-bullous cutaneous pemphigoid, and various rare variants (13,14). Classical BP is clinically characterized by large (1-3 cm), tense, serous, or hemorrhagic blisters that appear on erythematous, urticarial, or eczematous lesions and even on apparently normal skin (1,13). The biopsied lesions exhibit subepidermal splitting or blisters, which is the hallmark of BP, with dense inflammatory infiltration of eosinophils, basophils, neutrophils, lymphocytes, and mast cells in the dermis (1). Immunofluorescence analysis is necessary for the diagnosis of BP (15). Direct immunofluorescence is the most sensitive method for BP diagnosis, in which the lesion shows linear deposition of immunoglobulin G (IgG), C3 complement, and even IgE at the DEJ (16)(17)(18). Indirect immunofluorescence using the patient's sera and a substrate, especially salt-split skin, reveals a linear deposition of IgG along the roof of the artificial split (18).
One typical serologic characteristic of BP is the presence of circulating autoantibodies, which are mostly against BP180 (collagen XVII) and BP230 (15,19,20). BP180 is a 180 kDa transmembrane glycoprotein with a 16th non-collagenous (NC16A) domain, which is the immunodominant part in BP (14). BP230 is an intracellular constituent of the hemidesmosomal plaque and belongs to the spectraplakin family (20,21). The autoantibodies reported in BP include IgG and IgE (1,22). Usually, IgG autoantibodies to BP180 are the ones first to be detected, and then IgG autoantibodies to BP230 subsequently appear (23). IgE antibodies to BP230 can also be detected in the blood of BP patients (24). Given the existence of autoantibodies, there have been commercially available enzyme-linked immunosorbent assay (ELISA) kits that target BP180 and BP230 antibodies for BP auxiliary diagnosis (25,26).
Due to the age group involved and the application of more sensitive and specific diagnostic assay systems, the reported BP morbidity has increased (14,19,27,28). Moreover, for disease-specific factors, due to the concomitant occurrence of neurodegenerative disorders, use of higher doses of oral corticosteroids, and the propensity to malignancies and venous thromboembolism, BP mortality showed an increasing trend as well (19,(29)(30)(31)(32)(33)(34)(35)(36). These findings suggested the contributory role of activation of blood coagulation in the pathogenesis of BP (35,36). Presently, topical or systematic corticosteroids, with or without immunosuppressive agents, are still the mainstays for BP treatment (1,14,37,38). Intravenous Ig has also been introduced as an alternative therapy for BP (39)(40)(41), however, its effectiveness is still questionable (42,43). Therefore, it is of highly importance to discover new targets to reduce BP morbidity and mortality. Recently, increasing evidences show that autoimmune responses to BP180 are important in the initiation and evolution of BP (44). The binding of autoantibodies to BP180 is a central step for blister formation. Moreover, BP180 is associated with severe and extensive lesions that require higher dose of steroids, which is a key risk factor for death (14,28,45). The serum level of anti-BP180 NC16A autoantibody correlates with the more active and severe disease, as well as poorer prognosis (33,46). We, thus, consider BP180 as the most important culprit in the pathogenesis of BP and focused this review on recently updated knowledge on BP180 and its autoantibodies in BP.
THe BASiC STRUCTURe OF BP180
BP180 is a type II transmembrane protein with a cytosolic NH2 terminal and an extracellular COOH domain (47). The N-terminal domain, transmembranous stretch, and extracellular C-terminus have 466, 23, and 1,008 amino acids (aa) in length, respectively (48). The ectodomain contains 15 collagenous subdomains (COL1-COL15) interspersed by 16 non-collagenous sequences (NC1-NC16). The NC16A domain, a juxtamembranous linker region, appears to be biologically important, as it serves as the nucleus for the formation of a collagen-like triple helix (49,50). The extracellular domain contains coiled-coil structures, which are physiologically shed from the cell surface by a disintegrin metalloproteinase (ADAM) (50). The ectodomain forms a loop structure as it spans the lamina lucida, extends to lamina densa, and then kinks back into the lamina lucida (49). BP180 contains multiple binding sites for hemidesmosome proteins, including the extracellular domains of integrin α6 and laminin-332 (laminin-5) and the cytoplasmic domains of integrin β4, plectin, and BP230 (20). The structure and location of BP180 indicate that it acts as a core anchor protein that connects the intracellular and extracellular hemidesmosomal proteins and plays a key role in the pathogenesis of BP.
THe ePiTOPe PROFiLeS OF BP180
Previous studies mainly focused on extracellular NC16A domain (aa residues 490-562), which is the main target of BP autoantibodies. The NC16A domain has seven antigenic sites, including NC16A1, NC16A1-3, NC16A1-5, NC16A2, NC16A2.5, NC16A3, and NC16A3-4 (51-53) (Figure 1). Among these sites, NC16A2 and NC16A2.5 are the major antigenic sites, which can be targeted by all IgG and IgE antibodies. However, recent studies have described additional autoantibody-binding domains of BP180, such as the intracellular domain (ICD) and ectodomain (44,54). The ICD (aa 1-452) has five target sites, namely ICD A, ICD B, ICD C, ICD D, and ICD A-D, and a central region (aa 112-199) (Figure 1). A previously published study reported that out of 18 sera of BP patients, 16 reacted with recombinant ICDs and that most of the antibodies bind to the central portion (55). A great number of sera combined with at least one of the ICD regions. With regard to ectodomain, it has been reported that 7.8-47% of BP sera recognized the C-terminal regions of the ectodomain (54,56). Further mapping identified the six regions outside of NC16A that were recognized by the sera of the patients: aa 809-1106, aa 1080-1107, aa 1280-1315, aa 1331-1404, aa 1365-1413, and aa 1048-1465 (11,52,54,57). aa 809-1106 and aa 1080-1107 were at the midportion, whereas aa 1331-1404 and aa 1365-1413 were at the COOH-terminal (Figure 1). Other epitopes embracing more than one domain, such as aa 467-567, aa 490-812, and aa 490-1497, were also reported (11,52). It has been suggested that the pattern of epitope recognition may influence the course of the disease (23). Therefore, the recognition of target regions within BP180 is substantial in understanding the disease initiation and clinical characteristics of BP.
THe SOURCe OF AUTOANTiBODieS TO BP180
The etiology of BP is complex, but the presence of autoantibodies was widely accepted as the sine qua non of the condition. Anti-BP180 autoantibodies also exist in healthy people, even though these antibodies are conformationally different from pathogenic ones; however, only those bound to skin basement membrane can induce BP-suggesting that autoantibodies in the healthy may not be pathological per se (58,59). The autoantibodies may assume function of surveillance and self-tolerance (60). In pathologic conditions, self-tolerance of the autoantibodies is dysfunctional, thus leading to the production of a higher-level of autoantibodies that bind to skin basement membrane and give rise to the occurrence of BP. The development of BP suggests that there is a threshold or checkpoint in terms of autoantibody generation (61). It remains unclear why immune tolerance to BP180 is dysfunctional in some individuals. Previous study suggests that CD4+ CD25+ Foxp3+ regulatory T (Treg) cells play an indispensable role in maintaining self-tolerance and in suppressing excessive production of autoantibodies deleterious to the host (62)(63)(64)(65). The reduction of CD4+ CD25+ Foxp3+ Treg cells in BP, as induced by triggers that are variants of pre-existing genetic factors, such as HLA-BQB1*0301, CYP2D6, MT-ATP8, and so on, leads to the breakage of self-tolerance, followed by the increase in autoreactive Th2, Th1, and B cells that can recognize different domains of BP180 mediated by epitope spreading to produce different autoantibodies (14,44,59,(65)(66)(67)(68)(69). The pathogens can exacerbate the process by sensitizing B cells via binding to toll-like receptors. The autoreactive T cells can interact with autoreactive B cells via combinations of CD40L-CD40, B-cell activating factor-transmembrane activator and CAML interactor (TACI)/B-cell maturation antigen, and proliferation-inducing ligand-TACI to further break peripheral tolerance and induce Ig production and class switching (70)(71)(72)(73)(74) (Figure 2). Moreover, the reactivity of T and B cells that target the NH2-terminal portion of the BP180 ectodomain is associated with severe BP, whereas the crosstalk of T and B cells targeting the central portion of BP180 is more frequently recognized in limited BP (75). The exploration in gene therapy might provide clues to retrieve Treg-mediated tolerance and to hinder the production of autoantibodies in skingrafted animals (76).
AUTOANTiBODieS TARgeTiNg NC16A OF BP180
Previously, most studies pointed out that the NC16A might be the major pathogenic epitope in BP (47,74). ELISA analysis using recombinant BP180 NC16A demonstrated that 22-100% of BP sera reacted to BP180 NC16A peptides and that autoantibodies targeting NC16A domain are associated with tense blisters, severe urticarial erythema, extensive lesions, and elevated eosinophils (45,77). Therefore, there is a variety of autoantibody types that act on this domain and mediate various pathogenesis.
Anti-NC16A igg
Anti-NC16A IgG is associated with BP-affected areas and with the occurrence of erosions and blisters in BP (46). High titers of anti-BP180 NC16A IgG at the time of therapy cessation represented the main factor in the prediction of risk of relapse in BP (78). Passive transfer of rabbit antimurine IgG antibodies against BP180 can lead to the development of BP-like skin phenotype, in which the mechanisms involved are complement activation, mast cell degradation, neutrophil infiltration, production of reactive oxygen species and proteases, and BP180 degradation (14,79); and these mechanisms suggest a complement-dependent inflammatory pathway in BP development. The pathways induced by antimurine BP180 NC16A domain is further verified in studies using mast cell-deficient (80), C5-null (16), C4-null, alternative pathway component factor B-deficient (28,81), membrane CD46 upregulated (82), Fab-IgG-deficient (83), and FcγR-deficient (84) mice. All these studies were able to identify the complementdependent inflammatory pathway of anti-BP180 NC16A IgG ( Figure 3A). There are complement-independent mechanisms that account for the induction of BP by anti-NC16A IgG. Nearly one-fifth of BP cases may develop blisters in a complement-independent manner mainly through BP internalization (16) Immunofluorescence microscopy revealed that BP180 content in BP lesions is reduced by approximately 40% (85). As demonstrated by vibration assay in vitro, keratinocytes stimulated with anti-NC16A IgG demonstrated BP180 internalization and significant decrease in cell-plate adhesion (86). Further supporting data stem from an in vivo study using neonatal C3-deficient BP180-humanized mice without complement activation (87). The effects are attributed to the internalization of BP180/anti-BP180 complex via a macropinocytic pathway, which involves ICD phosphorylation by protein kinase C and potential degradation of BP180 through a ubiquitin/proteasome pathway (85,88,89). As BP-IgG-induced BP180 internalization is insufficient to induce blister formation, various inflammatory responses mediated by FcγR-independent and FcγR-dependent pathways must be involved, which further lead to a BP-specific split (85). At least interleukin (IL)-6 and IL-8, which are induced by autoantibodies, participate in the inflammatory responses (28,90) In addition, neutrophils partly recruited by IL-8 are also essential for blister formation (91) (Figure 3B). These studies emphasized the complement-independent inflammatory pathway of anti-BP180 NC16A IgG.
However, the role of complement in BP pathogenesis, as mediated by anti-BP180 NC16A IgG autoantibodies, is still controversial. Negative C3 deposition along the epidermal basement membrane zone was found in 16.9% of BP lesions (16). Anti-human BP180 NC16A IgG4, which has a low ability to bind the Fc receptor and fix complement, can induce dermal-epidermal separation in in vitro cryosection assays and blister formation in patients (89,92). IgG4 autoantibodies are also the major IgG subclass of autoantibodies, found in more than 54.4% of BP patients, and their level parallels disease severity (93). An in vitro study found that anti-NC16A IgG4 might prevent the induction of BP blistering by competitively inhibiting the binding of IgG1 and IgG3 autoantibodies to the NC16A region and by blocking IgG1- and IgG3-induced complement fixation and neutrophil infiltration (94). Another study reported that anti-NC16A IgG4 has a protective role in BP (94). However, when C5a complement was provided, BP could successfully be induced through anti-NC16A IgG4 (94). The revealed discrepancies may be explained by the different research methods used in the studies, as well as the complexity of BP, or by the possibility that the protective role of IgG4 autoantibodies in BP is due to the competitive blockade of IgG1 and IgG3 autoantibodies, which in turn gives rise to the suppression of complement-dependent blister formation. However, an "IgG4-dominant complement-independent BP" cannot be excluded. Taken together with the findings of complement fixation at the basement membrane zone, the abovementioned studies suggest that both complement-dependent and complement-independent mechanisms can contribute to BP.
Anti-NC16A IgE
In addition to IgG autoantibodies, 22-100% of BP patients also produce IgE autoantibodies against BP180 NC16A (24,46,96,97). The level of anti-NC16A IgE correlates with disease activity (24,46); with the occurrence of urticarial lesions and erythema (46,98,99); and with higher prednisolone dosage, longer duration before remission, and more intensive therapies (100). Immunofluorescence revealed the deposition of IgE autoantibodies along the DEJ in up to 41% of BP patients (101). Moreover, the early pathological changes in BP, including urticaria, eosinophil infiltration, and spontaneous blistering, can only be observed in models utilizing IgE autoantibodies from patient sera or recombinant monoclonal IgE antibodies specific for BP180 (102). These observations indicate that IgE autoantibodies may also be involved in the pathogenesis of BP and correlate with certain distinct clinical features. Furthermore, epitope mapping studies have demonstrated that these IgE autoantibodies, like IgG, preferentially target the NC16A domain of the BP180 protein (46,53,103). Injecting purified anti-BP180 NC16A IgE autoantibodies into human skin grafted onto nu/nu mice can induce histologic dermal-epidermal separation, as well as erythematous and urticarial plaques; the mechanisms of these processes include mast cell infiltration and degranulation and an influx of eosinophils, lymphocytes, and neutrophils (104). An in vitro investigation showed that the injection of IgE into the dermis of a human cryosection model led to histologic separation at the DEJ through binding of FcεRI on the mast cell surface, which triggered mast cell degranulation, subsequent eosinophil infiltration, and direct activation of eosinophils and basophils mediated by high-affinity FcεRI (95,105,106). Interestingly, the number of circulating eosinophils correlates with the levels of both NC16A-specific IgG and IgE in BP sera (106). These results provide indirect evidence that anti-BP180 NC16A IgE autoantibodies contribute to BP-like damage and to certain distinct clinical features by triggering FcεRI-dependent mast cell degranulation and basophil histamine release (106,107) (Figure 3A). The successful use of omalizumab to prevent the interaction of IgE with FcεRI in BP patients further verifies the FcεRI-dependent pathways (108,109). However, recent studies also revealed that IgE autoantibodies from BP patients can be internalized into cultured human keratinocytes or skin tissues, where they stimulate production of IL-6 and IL-8 and lead to the depletion of hemidesmosomes, mirroring the effects of BP IgG autoantibodies and of anti-NC16A IgG on keratinocytes in vitro (110-112) (Figure 3B). These studies suggest that the direct function of anti-BP180 NC16A IgE autoantibodies is to promote inflammation and fragility of the DEJ in BP. Further studies utilizing IgE monoclonal antibodies are necessary to explore the mechanisms underlying NC16A-specific IgE autoantibody-mediated tissue damage in BP (113).
Anti-NC16A IgA
An increasing number of studies have reported a potential role for anti-BP180 IgA, aside from anti-NC16A IgG and IgE, in BP pathogenesis (52,107,114,115). Like IgG and IgE, IgA autoantibodies mainly target the NC16A domain (106). Anti-BP180 NC16A IgA can be found in the sera of 20-65% of BP patients (51,113); it can also be detected in the saliva of 36%, the parotid gland of 44%, and the sera of 28% of mucous membrane pemphigoid patients (114). Moreover, IgA basement membrane zone deposition has been reported in 13% of BP patients (17,116). However, investigations that mechanistically elucidate the functions of IgA autoantibodies in BP are still lacking. Epitope spreading or antibody class switching is likely to be involved in the pathogenesis of BP, given the established clinical association between BP and linear IgA bullous disease (LAD) (114,117). Recent studies described cases with linear IgA deposition in the basement membrane zone that were dapsone-responsive and characterized by a flexural distribution of intensely pruritic subepidermal bullae, suggesting that IgA might be associated with specific clinical features of BP or that BP may share comparable or overlapping pathomechanisms with LAD (118,119). As in LAD, anti-BP180 IgA autoantibodies act directly on the NC16A domain, leading to the release of inflammatory factors, recruitment of neutrophils, degranulation of neutrophils and mast cells, and release of proteolytic enzymes, all similar to the effects of IgG and IgE (118) (Figure 3A). In fact, most serum samples from LAD and BP patients contain both IgA and IgG antibodies against BP180 (114,120,121). Thus, the two diseases could be regarded as different ends of a continuous spectrum of autoimmune responses to BP180 in subepidermal blistering diseases (119). Further studies using cell and animal models are needed to comprehensively unveil the pathogenic role of anti-BP180 NC16A IgA autoantibodies.
AUTOANTIBODIES TARGETING ICD AND ECTODOMAIN OF BP180
Recent studies reported that 59-82% of BP sera recognize the ICD of BP180, while 7.8-49% of BP sera are reactive against the ectodomain of BP180 (54,77,122,123). All autoantibody classes, including IgG, IgE, and IgA, can target the ICD; however, these autoantibodies bind to different sites (55,114,122,123). The autoantibodies can penetrate live cells, reach their intracellular targets, and alter cellular functions (124) (Figure 4). The central region of the BP180 ICD harbors binding sites that are critical for the interaction of BP180 with the β4 subunit of the α6β4 integrin, which is vital for the incorporation of the protein into the hemidesmosome (49). This implies that autoantibodies against the BP180 ICD impair the interaction of BP180 with other molecular constituents of the hemidesmosome. Conversely, damage to basal keratinocytes induced by the binding of autoantibodies to the BP180 ectodomain exposes the ICD to the immune system, a phenomenon referred to as "epitope spreading" (125) (Figure 4). In addition, the COOH-terminal region of the BP180 ectodomain is recognized by 47% of BP sera (56). IgG, IgE, and IgA autoantibodies can all bind to this terminal region (52,54,103,122). The presence of autoantibodies against the N- or C-terminal portions of the BP180 ectodomain is associated with mucosal lesions in BP patients (56,126). Autoantibodies against the midportion of BP180 also exist; these are associated with the occurrence of hemiplegia, a clinical presentation lacking erythema around the bullae, and histopathologic eosinophil infiltration inside and around subepidermal bullae (57). Other studies revealed that high levels of autoantibodies against the C-terminal portions are associated with older age, administration of dipeptidyl peptidase-4 inhibitors before BP onset, and a positive response to moderate doses of oral prednisolone (11,123). However, one report refutes the association of these autoantibodies with dipeptidyl peptidase-4 inhibitors (127). As BP180 extends from the cytoplasm of the basal keratinocyte to the lamina densa, it is presumed that autoantibodies against this region might be responsible for the scarring phenotype observed in cicatricial pemphigoid patients (56) (Figure 4). The development of novel ELISA kits to detect autoantibodies against the ectodomain, or even the ICD, would be beneficial for diagnosing BP cases without NC16A reactivity (56,128).
More novel animal models have recently been constructed, making it possible to determine the roles of the different domains. One of these models is the ΔNC14A mouse, in which the BP180 NC14A is replaced with the homologous human BP180 NC16A epitope cluster region (129). BP lesions develop in these ΔNC14A mice after passive transfer of BP IgG (129). The NC14A region can also be genetically deleted in C57BL/6 mice, which then express a reduced amount of BP180 in the skin but retain normal ectodomain shedding (130). These mice spontaneously produce IgG and IgA autoantibodies against BP180 and present eosinophilic infiltrations, as well as the clinical features of pruritus and crusted erosions (130). Hence, the ΔNC14A mice may be an ideal experimental model for investigating the early clinical changes in BP. However, in the absence of the NC16A domain, it is impossible to explore the detailed functions of anti-NC16A autoantibodies. It is also presumed that the pruritus and eosinophil infiltration are associated with the ectodomain. Therefore, the ΔNC14A mice may be utilized as a model for exploring autoantibodies acting on the ICD or on the ectodomain, although the mechanisms involved remain to be confirmed. Another animal model is the COL17-humanized mouse, which expresses human BP180 and is suitable for analyzing the pathogenesis of BP in humans (131). The spontaneous production of high titers of anti-BP180 antibodies, together with blisters and erosions on erythematous skin lesions, makes the observation of dynamic immune reactions possible. The pathogenicity of autoantibodies against the ICD and ectodomain of BP180 remains unclear, and further studies are warranted, as is the development of a novel ELISA system to detect such autoantibodies (77).
IgM AUTOANTIBODIES IN BP
An IgM-mediated BP has recently been reported (132,133). Direct immunofluorescence microscopy showed linear deposition of IgM at the DEJ in 6-22% of BP patients (17,134,135). However, the target of IgM autoantibodies is unknown, and immunoblotting with a recombinant protein of the BP180 C-terminal domain showed multiple non-specific bands (136). IgM is mainly associated with BP accompanying lupus erythematosus (132) and, more rarely, with BP related to infections (137), macroglobulinemia (136,138), and surgical factors (139). The presence of IgM autoantibodies does not appear to influence the course or outcome of the disease, and the role of IgM autoantibodies in the pathophysiology of BP remains elusive.
THE CLEAVAGE AND DEPLETION OF BP180
Following the various autoantibody-mediated inflammatory responses, BP180 cleavage and depletion have been proposed as the terminal events that cause reduced adhesion and blister formation. In vitro, the cleavage and shedding of the BP180 ectodomain is an event related to the detachment, migration, proliferation, differentiation, and wound healing of keratinocytes (50,140-144). Generally, the cleaved ectodomain does not generate pathogenic epitopes. However, excessive cleavage, shedding, or depletion can lead to reduced adhesion and blister formation.
Bullous pemphigoid autoantibody-induced infiltration of mast cells, eosinophils, and neutrophils leads to the production of various inflammatory factors and proteases that contribute to the induction of blister formation. Increased levels of IL-1β, IL-2, IL-4, IL-5, IL-6, IL-8, IL-10, IL-13, IL-17, IL-22, IL-23, IL-31, IL-36, interferon-γ, tumor necrosis factor (TNF)-α, transforming growth factor-β, RANTES (regulated on activation, normal T cell expressed and secreted), monocyte chemotactic protein 1, interferon gamma-induced protein 10, and C-C chemokine ligand (CCL) 17 have been detected in skin lesions, serum, or blister fluid of BP patients (14,19,97,145-150). In addition, C-C chemokine receptor 3 ligands, such as CCL11, CCL13, CCL18, CCL26, and CCL28, are increased in the skin and/or sera of BP patients (43,146,151,152). Increased levels of CCL1, CCL2, and chemokine C-X-C motif ligand-10 have been detected in the sera of BP patients (153,154). Moreover, increasing data reveal their functional involvement in BP (97,149,151,153,155-158) (Figure 5A). The proteases produced by inflammatory cells are functionally involved as well (79,159) (Figure 5B). The inflammatory cells can release mast cell protease (MCP)-4, matrix metalloproteinase (MMP)-9, neutrophil elastase (NE), plasmin, and eosinophil cationic protein (ECP), which cleave and degrade BP180, thus leading to dermal-epidermal separation and blister formation (20,149,157,160-164). Pathogenic anti-BP180 IgG failed to induce subepidermal blistering in mice deficient in either NE or MMP-9 (89). MMP-9 can regulate NE activity by inactivating α1-proteinase inhibitor (α1-PI) (159). Furthermore, once cleaved, α1-PI serves as a chemoattractant for neutrophils and exacerbates tissue damage (165). MMP-9 can also cleave BP180 into the small tripeptide Pro-Gly-Pro, which significantly enhances neutrophil chemotaxis and NE release (149). These infiltrated cells also release IL-17, which significantly upregulates the production of MMP-9 and elastase in neutrophils (149,166). The released IL-17 could, in turn, stimulate neutrophils to produce more IL-17, forming an amplification loop (167) (Figure 5A). Therefore, inflammatory factors and proteases induced by inflammatory cells play key roles in the cleavage and depletion of BP180, and targeting these inflammatory networks may be a promising therapeutic strategy in the treatment of BP. However, BP180 cleavage may also occur in the absence of anti-BP180 autoantibodies (140).

Figure 5. (A) Autoreactive Th2 cells secrete type II cytokines, which act on macrophages, keratinocytes, and fibroblasts and induce the chemotaxis of mast cells. The mast cells produce various cytokines and chemokines, which act on regulatory T (Treg) cells, neutrophils, Th2 cells, eosinophils, and macrophages. All these inflammatory cells can release interleukin (IL)-17, which not only acts on neutrophils and Th2 cells but also promotes the inflammatory response and protease release. (B) Keratinocytes synthesize tissue plasminogen activator (tPA), which activates plasmin, followed by the activation of matrix metalloproteinase (MMP)-9. Eosinophil cationic protein (ECP) and mast cell protease (MCP)-4 can also activate MMP-9. MMP-9 inhibits the production of α1-proteinase inhibitor (α1-PI) while promoting the generation of neutrophil elastase (NE). All these proteases act on BP180 and the dermal-epidermal junction (DEJ). Cleaved BP180 releases Pro-Gly-Pro tripeptides, which attract neutrophils.
Such physiological cleavage is mediated by ADAMs (140). Our study further revealed that TNF-like weak inducer of apoptosis (TWEAK), a multifaceted cytokine that participates in various skin inflammatory responses, can exacerbate BP180 reduction and impair keratinocyte adhesion (19). Moreover, the effect of TWEAK on BP180 cleavage involves the activation of the extracellular signal-regulated kinase and nuclear factor-κB pathways, as well as the downstream ADAMs, of which ADAM8, 9, 10, 15, and 17 have been suggested to participate in BP180 cleavage or BP development (19,168,169). We also found high expression of MMP-9, ADAM9, ADAM10, and ADAM17 in BP lesions and in keratinocytes upon TWEAK/Fn14 activation (19). The upregulation of MMP-9 and ADAM10 is responsible for the shedding of membrane CD46, which further enhances BP180 NC16A IgG-mediated complement activation and blister formation (82). Therefore, the role of TWEAK in BP development can mainly be ascribed to the abnormally high expression of ADAMs and other proteases. Considering the absent or insignificant expression of TWEAK in noninvolved skin, we conclude that TWEAK likely plays a secondary inflammatory role rather than being a primary participant (19,170). Further investigations are required to establish the precise function of TWEAK in BP.
POTENTIAL THERAPEUTIC TARGETS
Considerable progress made by recent studies has updated our understanding of BP pathogenesis, and the availability of novel BP animal models provides important tools for gaining further insight into the pathophysiology of this autoimmune disease. However, progress in BP therapy remains limited. As BP180 is a molecule with multiple epitopes, a better understanding of the immune responses induced by the binding of autoantibodies to different BP180 epitopes is crucial for the design of novel and more specific therapeutic strategies for this life-threatening autoimmune disorder (Table 1).
The Recovery of Immune Tolerance
Targeting immune tolerance is a coveted approach for the treatment of various autoimmune diseases, as current treatment options often involve non-specific immunosuppression. BP is closely associated with a disturbance of self-tolerance, in which the reduction in Treg cells plays a key role. Therefore, increasing Treg cells should help to recover immune tolerance and prevent BP development. Recombinant IL-10 has previously been used to increase circulating Treg cells and to lower CD4+ T cells (171). The use of low-dose recombinant IL-2 can also induce significant expansion of Treg cells in vivo and preferentially restore them (172). Low-dose IL-2-induced Treg cell proliferation is subsequently followed by increased programmed cell death 1 (PD-1) expression (173). PD-1 inhibitors can cause BP eruptions, suggesting the value of targeting PD-1 upregulation in BP treatment (102,189). Oxymatrine, an alkaloid extracted from the Chinese herb Sophora flavescens Ait, can upregulate FOXP3+ Treg cells and reduce the production of TNF-α and IL-17A, thereby aiding the recovery of immune tolerance (174). Nanotechnology has been used therapeutically to inhibit detrimental immune responses in autoimmunity, either through direct immunosuppressive effects on antigen-presenting cells and B and T cells, or indirectly by delivering compounds that induce immunotolerance (190). Gene gun delivery of NC16A-encoding DNA on gold particles results in Treg cell-mediated tolerance to BP180 (175). Antigen-coupled biodegradable poly(lactic-co-glycolic acid) nanoparticles have been used to induce antigen-specific T cell tolerance, a promising method for targeting organ-specific autoimmunity such as BP (176). All the aforementioned methods could improve immune tolerance and block the potential production of autoantibodies.
Therapeutic Prevention of Excessive Antibody Production
Targeting the effector B and T cells to prevent the production of "pathogenic" autoantibodies may be a promising method in BP treatment. Rituximab, used to deplete CD20+ B cells, can reduce all subclasses of anti-BP180 IgG antibodies and has shown efficacy in case reports of patients with refractory BP (39,177,178). Autoreactive T cells are also associated with IgG autoantibody production. Targeting autoreactive T cells using anti-CD25 antibodies and calcineurin inhibitors can modulate immune responses (181,191). Anti-CD25 antibodies bind to the high-affinity heterotrimeric IL-2 receptor on activated T cells, block IL-2/IL-2 receptor signaling, and inhibit the propagation of T cell activation, thereby limiting the damaging effects of further T cell recruitment in autoimmune diseases (180). Calcineurin dephosphorylates and thereby activates the nuclear factors of activated T cells, regulating T-cell activation and differentiation (181); inhibiting this pathway may directly suppress skin injury by blocking the T-cell-dependent production of IgG, as IgG deposition is central to the development of bullae in BP. Additionally, the interaction between T and B cells requires co-stimulatory factors; hence, targeting co-stimulatory molecules with specific monoclonal antibodies could also disrupt the interaction of T and B cells and block the synthesis of autoantibodies (182-185,192). For pathogen-induced BP, suppression of dendritic cell-mediated autoimmunity or the use of toll-like receptor antagonists is also practicable (193,194).
Neutralization of Pathogenic Antibodies
Immunoglobulin G autoantibodies are the main pathogenic antibodies that act on FcγRs to induce blister formation. SM101, a soluble FcγR, competes with membrane FcγRs for IgG binding and prevents the development of BP (186). Omalizumab, which targets IgE autoantibodies, can neutralize the activity of IgE in BP and control disease activity (108). Furthermore, therapies targeting the IgE-mast cell-eosinophil/basophil interaction may also demonstrate promising results in the treatment of BP (112). Moreover, immunoadsorption with high-affinity matrices that selectively bind human IgG and IgE provides an alternative way of removing autoantibodies (187,195).
Prospects
Despite the complexity and diversity of this dermatosis, there is still hope for BP patients. Novel, promising agents targeting the different mechanisms of BP development are needed. A multifactorial animal model of BP is also warranted, one that mimics not only the presence of specific pathogenic autoantibodies but also the additional triggers of disease initiation, such as environmental factors, medications, comorbid conditions, and infections. Furthermore, future investigations are required, as there may be as-yet-unidentified antigenic epitopes that are indispensable for disease development.
CONCLUSION
Bullous pemphigoid has been regarded as a well-characterized, organ-specific, mainly anti-BP180 autoantibody-mediated blistering skin disorder. Both IgG and IgE play vital roles in BP development via complement-dependent or -independent inflammatory pathways. However, the roles of IgA and IgM are still uncertain, and further investigation is needed. Knowledge of the BP180 target sites and of the interaction between BP180 and anti-BP180 autoantibodies is pivotal for the exploration of novel and more specific therapeutic methods to reduce BP morbidity and mortality. The translation of bench findings into bedside strategies for the treatment of this complex disease remains a challenge. Although BP180-based therapy does not yet appear close at hand, a better understanding of the role of BP180 would bring it closer to practice.
AUTHOR CONTRIBUTIONS
YL and YX conceived this paper. YL and LL wrote this manuscript. All the authors read and approved the final manuscript.
"year": 2017,
"sha1": "13d48973a7cc421d6ba800ff62aeba3aacd68314",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2017.01752/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13d48973a7cc421d6ba800ff62aeba3aacd68314",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.