A gravity-based three-dimensional compass in the mouse brain

Gravity sensing provides a robust verticality signal for three-dimensional navigation. Head direction cells in the mammalian limbic system implement an allocentric neuronal compass. Here we show that head-direction cells in the rodent thalamus, retrosplenial cortex and cingulum fiber bundle are tuned to conjunctive combinations of azimuth and tilt, i.e. pitch or roll. Pitch and roll orientation tuning is anchored to gravity and independent of visual landmarks. When the head tilts, azimuth tuning is affixed to the head-horizontal plane, but also uses gravity to remain anchored to the allocentric bearings in the earth-horizontal plane. Collectively, these results demonstrate that a three-dimensional, gravity-based neural compass is likely a ubiquitous property of mammalian species, including ground-dwelling animals.

Head direction neurons constitute the brain's compass, and are classically known to indicate head orientation in the horizontal plane. Here, the authors show that head direction neurons form a three-dimensional compass that can also indicate head tilt, and anchors to gravity.

Gravity is a ubiquitous force that profoundly affects life on earth. Gravity assists or resists movements 1,2, accelerates free-falling objects such as a ball 3 and shapes the architecture of our habitations. As such, graviception represents one of the most ubiquitous sensory modalities of living organisms 4,5. Animal species across a wide range of classes 6 orient themselves and navigate in three dimensions (3D). Preeminent neuronal classes of the mammalian brain, such as place cells 7,8 and head direction (HD) cells 9, operate in 3D. By providing verticality information 10,11, gravity may mitigate the complexity of orienting in 3D 6,12; yet gravity signals have never been identified in the brain's navigation system. Here we tested whether mouse HD cells, which encode allocentric head orientation, analogous to a neural compass, use gravity-anchored tilt signals (orientation relative to vertical) and azimuth signals (orientation in the gravity-horizontal plane, measured in a so-called tilted frame 13-15 during 3D motion, Fig. 1a) to yield a sense of 3D head orientation. Unlike in bats 9, tuning to tilt has never been shown in rodents, and some researchers report that it may be absent in ground-dwelling species like rodents 16,17. Thus, we first show that HD cells in the mouse anterior thalamus and retrosplenial cortex are tuned to combinations of azimuth and tilt. We also confirm that 3D HD signals travel across brain regions by recording from the cingulum fiber bundle, which connects areas of the navigation system 18. Next, we present a conceptual and mathematical framework to model 3D HD responses, in which tilt and azimuth tuning interact multiplicatively to encode 3D orientation. Finally, we show that gravity not only anchors tilt tuning, but also defines the earth-horizontal plane to which the azimuth compass is referenced 14. Thus, a 3D, gravity-based orientation compass is not a specialized property limited to aerial species but may instead be ubiquitous across many branches of animal evolution.

Results

A 3D compass in the mouse brain. We used tetrodes to record extracellularly from the antero-dorsal nucleus of the thalamus (ADN; n = 4 mice; Supplementary Table 1), the retrosplenial cortex (RSC) and the cingulum fiber bundle (CIN) 18-21. Cells were selected exclusively based on spike isolation (Supplementary Fig. 1), and recording locations were verified post-mortem (Supplementary Fig. 2).
On the basis of their responses during free foraging in a horizontal arena (Fig. 1b; summarized in Supplementary Fig. 3), cells were characterized as azimuth-tuned (i.e. traditional HD) cells in light (Fig. 1c, red) and darkness (see example in Fig. 1c, black) or azimuth-untuned (see example in Fig. 1d). Neurons were then characterized as animals walked on a platform orientable in 3D (Fig. 1e) that could be tilted up to 60° (Supplementary Fig. 4). We represented tilt tuning curves in spherical coordinates, with 2 degrees of freedom: the absolute tilt angle from upright, α (range: 0-180°), measured in the pitch or roll plane or any intermediate plane, and the tilt direction, γ.

Fig. 1 Three-dimensional response of two example cells. a Proposed framework for 3D orientation. Top: tilt is measured by sensing the gravity vector (green pendulum) in egocentric coordinates, resulting in a 2D spherical topology. Bottom: azimuth has a circular topology, and is measured by rotating an earth-horizontal compass in alignment with the head-horizontal plane (TA frame). b Schematic of the arena used to identify azimuth-tuned cells in the horizontal plane. c, d Example azimuth tuning of a traditional HD cell, i.e. tuned to azimuth (Az-tuned), in the ADN (c) and of another cell not tuned to azimuth (non-Az-tuned) in the cingulum (d), as the mouse walks freely in light (red) and darkness (black) in a horizontal arena (shown in b), on a platform oriented horizontally (shown in e, left; broken pink lines) and in the rotator (shown in h; gray lines). The azimuth-tuned cell showed significant tuning with different preferred directions (PD) in all setups, although the response was strongly attenuated in the rotator (compare gray with red/pink lines). e Schematic of a 3D orientable platform used to measure 3D tuning. f, g Tuning curves for the two cells in c and d, obtained from responses as the mouse foraged on the orientable platform (shown in e). Firing rate is shown as a heat map in 3D space (Supplementary Movies 1, 2). The peak and trough of the average tilt response (across all azimuths) are indicated with arrows on the color scale; NTA = (peak − trough)/peak. Note that tuning curves are restricted to 60° tilt (Methods). h Schematic of a rotator used to measure full 3D tuning curves. i, j Tuning curves for the two cells in c, f and d, g as the mouse was passively re-oriented uniformly throughout the full 3D space using the rotator (Supplementary Movies 3-5).

The tilt responses of the two example cells were strongly modulated when averaged across all azimuths. For better comparison between cells, we divided these amplitudes by the cells' peak firing to compute their normalized tuning amplitudes (NTA; 0.92 and 0.66, respectively). In addition, with the platform in the earth-horizontal orientation, the cell classification as azimuth-tuned or azimuth-untuned persisted (Fig. 1c, d, dashed pink curves). We used identical criteria to classify neurons as azimuth-tuned or tilt-tuned (Methods; Supplementary Fig. 7). When tested on the platform, tilt tuning was widespread among azimuth-tuned (traditional HD) cells classified based on their responsiveness in the horizontal arena (Fig. 1b), as summarized in Fig. 2. Specifically, out of 29 ADN neurons recorded on the platform, 25 (86%) were classified as azimuth-tuned (Fig. 2a, Venn diagram). Among these, 24 (96%) were also tuned to tilt and are subsequently called conjunctive (azimuth and tilt) HD cells (solid red symbols in Fig. 2). Sizeable populations of azimuth-tuned cells were also recorded in the RSC and CIN (49% and 40% of recorded cells, respectively).
Of these, 58% (RSC) and 76% (CIN) were conjunctive cells. Tilt tuning on the platform was also seen in azimuth-untuned cells (solid black discs and symbols in Fig. 2a). Thus, tilt tuning was common and observed in all areas, regardless of azimuth tuning, with 92/139 (66%) tilt-tuned cells. A total of 75/139 (54%) cells were azimuth-tuned, and tilt and azimuth tuning overlapped across neurons. Tilt-tuned cells were slightly (7%) more likely to be azimuth-tuned and, reciprocally, azimuth-tuned cells were slightly (8%) more likely to be tilt-tuned (chi-square test, p = 0.02, χ2 = 0.02, 1 dof). The NTA of tilt was lower than that of azimuth in conjunctive ADN cells, and similar in the other regions (Fig. 2a; Wilcoxon paired rank test; p = 10−3 in ADN, p = 0.2 in RSC, p = 0.9 in CIN; see also Supplementary Fig. 7d-f). These results indicate that tilt signals are an inherent component of the mouse HD system during natural behavior; thus, the term HD cell should refer to tilt-tuned cells as well as azimuth-tuned cells.

Head-direction tuning in the full 3D space. To characterize tuning uniformly in 3D space, 549 (60 ADN, 202 RSC, 287 CIN) neurons were tested in the rotator. Seventy-one percent (388) of these cells were significantly tuned to tilt (88% ADN, 66% RSC and 70% CIN). Similar to responses obtained on the platform, tilt-tuned cells were slightly (5%) more likely to be azimuth-tuned and, reciprocally, azimuth-tuned cells were slightly (7%) more likely to be tilt-tuned (chi-square test, p < 10−3). Before analyzing the 3D properties of HD tuning, we first verified that spatial responses were similar in freely moving animals and in the rotator. In line with previous studies 22,23 and the example cell in Fig. 1c, i, azimuth tuning modulation amplitude was attenuated when animals were restrained in the rotator (scatterplots in Fig. 2b vs. Fig. 2a; Supplementary Fig. 8a, b). As a result, only a minority of neurons tuned to azimuth when moving freely were significantly tuned to azimuth in the rotator (63/286; Fig. 2c, left panel). Nevertheless, the PDs of multiple azimuth-tuned HD cells had consistent angles relative to each other when moving freely and in the rotator (Supplementary Fig. 8f), indicating that the structure of the population of azimuth-tuned cells was maintained in the rotator. Thus, other than the smaller magnitude, azimuth responses measured in the rotator are representative of the neurons' natural responses. Tilt tuning magnitude was also attenuated in the rotator, although to a lesser extent (Supplementary Fig. 8c, d). A minority of cells were tilt-tuned only in the rotator, because of its larger sampling of 3D space. Conversely, some cells were significantly tuned only when moving freely, because response magnitudes were larger in that condition (Supplementary Fig. 8c, d). Nevertheless, tilt tuning largely overlapped across conditions (Fig. 2c, right panel), indicating that tilt tuning is conserved across free locomotion and restrained, passive motion conditions. We then compared tuning curves when moving freely and in the rotator by computing their pixel-by-pixel correlations. This revealed an important difference between azimuth and tilt tuning. Because azimuth curves shifted randomly between environments (Supplementary Fig. 8e), their pixel-by-pixel correlations were uniformly distributed (median = 0.07; Kolmogorov-Smirnov test, p = 0.15; Fig. 2d, left). In contrast, tilt tuning was preserved (median = 0.58; p < 10−5; Fig. 2d, right), as expected if the tilt compass was anchored to a common reference: gravity (Fig. 1a; see below).
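For illustration (the study's analyses used custom Matlab scripts), the following Python sketch shows the pixel-by-pixel correlation just described, computed over jointly sampled bins of two tuning maps; all names are ours and the toy data are hypothetical.

import numpy as np

def map_correlation(map_a, map_b):
    # Pearson correlation between two tuning maps over jointly sampled bins;
    # NaN marks head orientations the animal never visited.
    a, b = np.ravel(map_a), np.ravel(map_b)
    valid = ~(np.isnan(a) | np.isnan(b))
    return np.corrcoef(a[valid], b[valid])[0, 1]

# Toy check: an attenuated but preserved tilt map correlates highly with the
# original, whereas an independently shuffled map does not.
rng = np.random.default_rng(0)
free = rng.random((19, 19))
rotator = 0.4 * free + 0.05 * rng.random((19, 19))   # attenuated, same shape
print(map_correlation(free, rotator))                # close to 1
print(map_correlation(free, rng.permutation(free.ravel()).reshape(19, 19)))  # near 0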
Azimuth tuning in 3D. To investigate how tilt and azimuth components work together to encode 3D head orientation, we first asked how to define azimuth when the head tilts away from upright. The brain may simply project head direction onto the earth-horizontal (EH) plane and encode azimuth in that plane (Fig. 3a). Alternatively, it may measure the orientation the head would have if it were rotated back to upright (Figs. 1, 3b; Supplementary Fig. 9a), which is equivalent to rotating the EH compass to align with the head-horizontal plane, resulting in a tilted azimuth (TA) compass 13,14. Early models 16,17 proposed that azimuth is updated in the head-horizontal plane by tracking rotations in this plane (yaw; Fig. 3c, cyan) and ignoring all other movements (yaw-only model, YO). However, this would not maintain allocentric invariance in 3D 14. For example, when completing the trajectory in Fig. 3c (red), the compass would register only three right-angle turns (Fig. 3c, cyan), i.e. 270°. To maintain allocentric invariance, a TA compass must use a dual updating rule 13, which includes both yaw (Fig. 3c, cyan) and earth-horizontal rotations (Fig. 3c, green). In the example of Fig. 3c, this allows totaling 360° when completing the trajectory. Thus, we emphasize that a YO compass would lose allocentric invariance during 3D motion, even when returning to upright (see, for example, Fig. 3c). In contrast, EH and TA frames remain invariant when the head tilts 14 (Supplementary Movie 6).
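To make the candidate azimuth definitions concrete, the following Python sketch computes EH and TA azimuth from a head-to-earth rotation matrix R. It assumes a head frame with x along the naso-occipital axis and z dorsal; TA is obtained by applying the inverse tilt rotation (the shortest rotation bringing the head back to upright) before taking the bearing, matching the definition above. A YO estimate is not shown, as it would require integrating yaw velocity along the trajectory. This is an illustrative sketch, not the study's analysis code.

import numpy as np

EX, EZ = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])

def eh_azimuth(R):
    # Project the naso-occipital axis N = R @ ex onto the earth-horizontal
    # plane and take its bearing: EHAz = atan2(Nj, Ni).
    N = R @ EX
    return np.arctan2(N[1], N[0])

def ta_azimuth(R):
    # Un-tilt the head along the shortest rotation bringing the dorsal axis
    # Z onto the earth vertical k, then take the bearing of the forward axis.
    # Exactly upside-down (Z = -k) is a singularity where TA is undefined.
    Z, N = R @ EZ, R @ EX
    axis = np.cross(Z, EZ)
    s, c = np.linalg.norm(axis), float(Z @ EZ)      # sin/cos of tilt angle alpha
    if s < 1e-9:
        return np.arctan2(N[1], N[0])               # upright: TA equals EH
    a = axis / s
    # Rodrigues' rotation of N by the tilt angle about axis a.
    N_up = N * c + np.cross(a, N) * s + a * (a @ N) * (1.0 - c)
    return np.arctan2(N_up[1], N_up[0])

def rz(t):  # yaw about the earth vertical
    return np.array([[np.cos(t), -np.sin(t), 0], [np.sin(t), np.cos(t), 0], [0, 0, 1]])
def ry(t):  # pitch about the (intermediate) y axis
    return np.array([[np.cos(t), 0, np.sin(t)], [0, 1, 0], [-np.sin(t), 0, np.cos(t)]])
def rx(t):  # roll about the naso-occipital axis
    return np.array([[1, 0, 0], [0, np.cos(t), -np.sin(t)], [0, np.sin(t), np.cos(t)]])

# For an oblique orientation (yaw 30, pitch 45, roll 45 deg) the two frames
# disagree, illustrating why they must be distinguished (cf. Fig. 3a, b).
R = rz(np.radians(30)) @ ry(np.radians(45)) @ rx(np.radians(45))
print(np.degrees(eh_azimuth(R)), np.degrees(ta_azimuth(R)))   # 30.0 vs ~10.5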
The 3D motion protocol allows testing the YO, EH and TA models. First, we expressed azimuth in all three frames and tested whether cells were significantly tuned when the 3D trajectory brought the head close to upright (<45° tilt). As predicted, almost no cells exhibited significant tuning in a YO frame (6/285 tuned cells, consistent with false positives at p = 0.01). In contrast, 63/285 (22%) cells were tuned when azimuth was expressed in either the EH or TA frames (ADN: n = 17; RSC: n = 7; CIN: n = 39; this relatively low percentage of significantly tuned cells is due to the attenuation of azimuth responses in the rotator, see Supplementary Fig. 8a, b). Second, when expressed in the appropriate reference frame, the cells' azimuth PD should be invariant at all head tilts. To test this, we compared the cells' azimuth tuning curves near upright (<45° tilt) and when tilted (>60° tilt) (Fig. 3d). We observed that these curves were highly correlated when expressed in a TA frame (Supplementary Fig. 9a, c). Expressing azimuth in an EH frame leads to a reversal of the cells' PD when pitching beyond 90° (Supplementary Fig. 9b, d; similar to previous observations 9), but not when rolling (Supplementary Fig. 9b, d). In contrast, azimuth PDs are invariant in a TA frame, and therefore this reversal did not occur (Supplementary Fig. 9b, d). We also found that, regardless of reference frame, azimuth tuning decreased when the animal was tilted beyond 90° from upright (Supplementary Movies 6-9). As illustrated with an example azimuth-only cell in Fig. 3e (animated curve in Supplementary Movie 7), azimuth tuning was strong (PD at −85°) for small tilt angles (lowest portion of the tuning curve) but vanished at large tilt angles (i.e. upper portion of the 3D tuning curve, Fig. 3e). This was consistent across all azimuth-only cells: the average HD tuning curve (computed in a TA frame and aligned to peak at PD = 0°) had a higher modulation when computed for head tilts close to upright (Fig. 3f, red) and almost no modulation close to upside-down (Fig. 3f, gray). Thus, for azimuth-only cells, the response amplitude of azimuth tuning depended on the tilt angle, even though the cells were not tuned to tilt. This was not limited to azimuth-only cells (Fig. 3g, gray): the azimuth tuning amplitude of conjunctive cells decreased similarly, irrespective of whether the cell's tilt tuning favored an upright orientation (Fig. 3g, red), intermediate tilt (i.e. 90°; Fig. 3g, blue) or upside-down (Fig. 3g, green). Normalized tuning amplitudes of all azimuth-tuned cells were affected by tilt angle (two-way ANOVA, p < 10−10, F(12,767) = 61) and varied between groups of cells (p < 10−10, F(3,767) = 12.8); however, there was no significant interaction effect (p = 0.9, F(36,767) = 0.42), indicating that the azimuth tuning of all cells was equally affected by tilt. We conclude that HD cells encode azimuth in a TA reference frame, and that their azimuth response decreases when the head tilts away from upright, irrespective of tilt tuning.

Tilt and azimuth tuning follow a multiplicative interaction. To further understand 3D tuning, we created a 3D HD model that incorporates the following properties (Fig. 4): (1) tilt tuning curves are generated by feeding the gravity vector (or any other reference vertical vector) into Gaussian tuning functions (Fig. 4a); (2) azimuth-tuned cells encode TA with a tilt-dependent gain (Fig. 4b); and (3) the 3D tuning curve is the product of these two components (Fig. 4c).

Fig. 2 Population azimuth and tilt tuning in freely moving vs. restrained animals. a Summary of tuning prevalence during unrestrained motion. Azimuth tuning was derived from data in the freely moving arena (Fig. 1b). Tilt tuning was derived from data on the 3D platform (Fig. 1e). For each panel, Venn diagrams (top) indicate the number of tilt-tuned (filled black discs) and azimuth-tuned (red discs) cells. Conjunctive cells appear at the intersection of these discs. Open discs illustrate cells responsive to neither tilt nor azimuth. The scatterplots (bottom) indicate the normalized modulation amplitude of responsive cells. The boxes and whiskers represent the median (white line), 95% confidence interval (boxes) and upper/lower quartiles (whiskers) of the azimuth modulation of azimuth-tuned cells.

In general, gravity is a 3D vector, sensed in egocentric 3D Cartesian coordinates (e.g. by the vestibular system; Fig. 4a, top), but it can be restricted to a sphere surrounding the head (Fig. 4b) because its magnitude on earth is constant. In the proposed model, we applied Gaussian tuning in 3D Cartesian coordinates before restricting the tuning curve to a spherical space. Remarkably, this allowed modeling the tuning curves of the 36% of recorded cells that peaked at two distinct head orientations, for instance nose-up (NU) and nose-down (ND) (Supplementary Fig. 10). Thus, the proposed model of Fig. 4a reflects the sensory processes underlying gravity sensation while parsimoniously accounting for seemingly complex tilt tuning. Note, though, that the model does not necessarily assume that tilt tuning is anchored to gravity, as another reference vertical could be used as an input. This model fitted conjunctive cells well (Fig. 4d; two example cells are shown in Fig. 4e, f). The second example cell exhibited a PD at a large tilt angle (α = 105°), where azimuth tuning had already substantially decreased. As a consequence, the cell appeared azimuth-tuned at small tilt angles, where tilt tuning was minimal (Fig. 4f, lower horizontal plane), and tilt-tuned at large tilt angles (Fig. 4f, upper horizontal plane). This was characteristic of conjunctive cells with a large preferred tilt angle.
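The model just outlined can be summarized in a short Python sketch (see also the Methods): a 3D Gaussian evaluated on the sphere of egocentric gravity vectors, multiplied by a von Mises azimuth tuning with a tilt-dependent gain normalized to average 1 across azimuths. The parameter values and the example gain function below are illustrative assumptions, not the fitted values.

import numpy as np

def tilt_tuning(G, FR0, A, M, Cinv):
    # Gaussian tilt tuning: a 3D Gaussian (center M, inverse covariance Cinv)
    # evaluated at unit gravity vectors G (n x 3). Restricting the Gaussian
    # to the sphere is what lets a single component produce bimodal curves.
    d = G - M
    return FR0 + A * np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, Cinv, d))

def azimuth_tuning(Az, alpha, kappa, PD, gain):
    # Von Mises azimuth tuning with tilt-dependent gain g(alpha), normalized
    # so its average across azimuths is 1 (constant l); g = 0 reduces the
    # full model to a tilt-only model.
    grid = np.linspace(-np.pi, np.pi, 360, endpoint=False)
    l = np.exp(kappa * np.cos(grid - PD)).mean()
    vm = np.exp(kappa * np.cos(Az - PD)) / l
    return 1.0 + gain(alpha) * (vm - 1.0)

def model_3d(Az, alpha, G, p):
    # Multiplicative interaction: FR = FR_tilt(G) * FR_az(Az, alpha).
    return (tilt_tuning(G, p['FR0'], p['A'], p['M'], p['Cinv'])
            * azimuth_tuning(Az, alpha, p['kappa'], p['PD'], p['g']))

# Illustrative parameters: a conjunctive cell preferring an oblique tilt,
# PD = 90 deg, and azimuth gain fading as the head tilts away from upright.
p = dict(FR0=1.0, A=20.0, M=np.array([0.6, 0.0, 0.8]), Cinv=4.0 * np.eye(3),
         kappa=2.0, PD=np.pi / 2, g=lambda a: np.clip(1.0 - a / np.pi, 0.0, 1.0))
fr = model_3d(np.array([0.0]), np.array([0.0]), np.array([[0.0, 0.0, 1.0]]), p)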
Model fits were significantly lower when 3D tuning curves were computed with azimuth in an EH frame (Supplementary Fig. 11), confirming that the TA frame captures the cells' responses better than either the EH or YO frames. The same model also fitted the 3D tuning curves of azimuth-only (median ρ = 0.75, [0.7-0.8] CI) and tilt-only cells (median ρ = 0.87, [0.85-0.88] CI).

Spatial properties of tilt tuning. As illustrated in Fig. 5a, the PDs of tilt-tuned cells were widely scattered. Yet, the distribution was not uniform, with an over-representation of PDs around ND and an under-representation of PDs in the roll (left-ear-down/right-ear-down, LED/RED; gray sectors) plane (chi-square test, p < 10−5 in ADN, p < 10−7 in RSC, p < 10−9 in CIN; χ2 = 28, 37 and 45, respectively; 3 dof). In contrast, PDs were distributed uniformly between tilt angles lower or higher than 90° (chi-square test, p = 0.4 in ADN, p = 0.016 in RSC, p = 0.1 in CIN; χ2 = 0.47, 5.8 and 2.6, respectively; 1 dof). The gravity tuning curves of cells with PDs located in the pitch (NU/ND, white sectors) plane had stronger peak firing rates (Fig. 5b; median = 12.4 vs. 7 Hz) than those with PDs in the roll plane. However, both cell types had similar tuning amplitudes relative to their peak firing rate (i.e. NTA; Fig. 5c). These results, showing a dominance of pitch-tuned over roll-tuned cells, are consistent with those previously described in bats 9 and monkeys 24.

Azimuth tuning of HD cells persists in darkness 25,26. To test whether tilt tuning also persists, we recorded the responses of 210 (23 ADN; 54 RSC; 133 CIN) tilt-tuned cells in complete darkness. Angular differences between PDs recorded in light and darkness were close to zero (PD difference <45° in 151/210 cells; Kolmogorov-Smirnov test vs. the distribution expected if PDs were uniformly distributed: p = 10−11 in all areas, Fig. 5d). In addition, the tilt modulation amplitude was highly correlated in light and darkness (Fig. 5e).

Fig. 3 (caption, panels b-d) b The TA frame 14, where azimuth is defined by rotating head direction (gray vector) towards the horizontal plane instead of projecting it. The head has the same 3D orientation in a and b but its azimuth is different in the two frames (EH: 165°; TA: 135°). c Dual-axis rule for updating TA, illustrated by an example trajectory (red) where the animal travels in 3D across three orthogonal surfaces (numbered 1 to 3). Head azimuth is updated when the head rotates in yaw within one surface (first rule, cyan arrows) or in the earth-horizontal plane (second rule, green arrow when transitioning from surface 2 to 3). The first rule tracks azimuth and the second rule ensures that the brain compass always matches the EH compass along the line intersecting the two planes (the 0-180° line in b). d Correlation between azimuth tuning curves when upright (<45° tilt) vs. tilted (>60°) in YO, EH and TA frames.

Similar findings were also reported in gravity-tuned cells in the monkey anterior thalamus 24, suggesting that these tilt-tuned neurons may be found across a broad range of animal evolution. In further agreement with findings in monkeys 24, a small fraction of cells also responded to tilt or azimuth velocity (Supplementary Fig. 12). In addition, tilt tuning in the rotator could be reproduced using traditional single-axis rotations such as pitch and roll (Supplementary Fig. 13). Thus, tilt tuning is anchored to allocentric space, independent of the exact motion trajectory. Finally, we verified that 3D tilt tuning curves were highly reproducible across repetitions of the rotation protocol on different days (Supplementary Fig. 14).
3D tuning is anchored to gravity. The invariance of tilt tuning in light and dark conditions (Fig. 5d, e), and across setups (Fig. 2d), supports the hypothesis that gravity, rather than visual landmark cues, represents the allocentric vertical reference for tilt tuning. Yet, the model in Fig. 4a does not strictly assume that tilt signals originate from gravity: mathematically, it could apply to any vertical reference, such as the visual scene, or a combination of gravitational and visual cues. A distinct but related question is whether gravity anchors TA tuning. During 3D motion, azimuth is measured in a TA compass defined by rotating the earth-horizontal compass in alignment with the head-horizontal plane. But is that earth-horizontal plane defined by visual or gravitational cues? To test these hypotheses, we recorded 148 (22 ADN; 46 RSC; 80 CIN) tilt-tuned cells with the 3D rotation protocol after tilting the rotator and visual surround together by 60° (Fig. 6a, protocol 3T). This dissociated the vertical axis defined by the visual cues inside the sphere (Fig. 6a, blue) from the gravitational vertical (Fig. 6a, green; see Supplementary Fig. 15a-e). We first investigated which modality anchors tilt tuning, and tested which modality anchors the TA signal in a second step. To answer the first question, we assumed that tilt-tuned HD cells are referenced to a weighted mean (weight w) of gravity and vision (Fig. 6a, black); at this stage, we assumed that the TA signal is anchored to gravity.

Fig. 4 Modeling 3D responses. a Modeling tilt tuning. Top: gravity is a 3D vector (green) sensed in egocentric Cartesian coordinates by the otolith organs. Middle: to model tilt tuning, we first assume a 3D Gaussian function (orange ellipsoid) in this Cartesian space. Bottom: on earth, the magnitude of gravity is constant; therefore, we restrict the tilt tuning curve to a 2D sphere surrounding the head, which corresponds to the egocentric gravity vector experienced when tilting on earth. b Modeling azimuth tuning. Azimuth is expressed in a TA frame (top), and tuning is modeled as a von Mises distribution combined with a tilt-dependent gain factor (Fig. 3g). c 3D tuning defined by the product of these two curves. d Distribution of the model's coefficient of correlation (ρ) across areas. e, f Experimentally measured 3D tuning curves (top) from two conjunctive cells that maintain their azimuth tuning in the rotator, and fitted tuning curves (bottom), represented as color maps in 3D space (animated in Supplementary Movies 8, 9). The cell in e is the same as in a, b. Source data are provided as a Source Data file.

We computed each cell's tuning curve for each value of w (e.g. with w = 0 and w = 1 in Fig. 6b) and tested how it correlated with the fitted tuning curve recorded with the rotator upright (Fig. 6c), which was used as a reference since the gravity- and visually referenced verticals are identical in that condition. For the example cell in Fig. 6a-c, the correlation peaked at a value ρpeak = 0.81 for a gravity weight wpeak = 1 (Fig. 6d, red), indicating that this cell encodes gravity-referenced tilt. At the population level, the peak gravity weight wpeak clustered around a median value of 1.01 (Fig. 6e, [0.95-1.04] CI; data from 125/148 cells where the peak correlation was significantly higher than 0; see Supplementary Fig. 15k for details). The peak gravity weight was identical in all recorded areas (Kruskal-Wallis nonparametric ANOVA, p = 0.17) and between cells with PDs in the pitch or roll plane (Wilcoxon rank-sum test, p = 0.66).
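A minimal Python sketch of this weight analysis follows. The routine that recomputes a cell's tuning curve for a given w and correlates it with the upright reference is abstracted into a caller-supplied function (a hypothetical hook, since the study's pipeline is in Matlab).

import numpy as np

def weighted_vertical(G, V, w):
    # Candidate reference vertical T(w) = w*G + (1 - w)*V, renormalized to
    # unit length; w = 1 is purely gravitational, w = 0 purely visual.
    T = w * G + (1.0 - w) * V
    return T / np.linalg.norm(T, axis=-1, keepdims=True)

def peak_gravity_weight(correlation_at, weights=np.linspace(-1.0, 2.0, 61)):
    # Scan candidate weights and return (w_peak, rho_peak), where
    # correlation_at(w) recomputes the cell's tuning curve with tilt
    # referenced to T(w) and correlates it with the upright reference.
    rho = np.array([correlation_at(w) for w in weights])
    i = int(np.argmax(rho))
    return weights[i], rho[i]

# For a purely gravity-anchored cell, rho(w) should peak near w = 1,
# as reported for essentially all tilt-tuned cells (Fig. 6d, e).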
We also tested whether the peak gravity weight was significantly different from 1 on a cell-by-cell basis (Supplementary Fig. 15l) and found that this was the case in only one cell (likely a false positive). Likewise, no neuron correlated better with a visually referenced frame than with a gravity-referenced frame (Supplementary Fig. 15m). Thus, our data indicate that tilt-tuned cells encode exclusively gravity-anchored tilt signals, as opposed to visually anchored signals or a mixture thereof. These findings mirror those for tilt-tuned cells in the macaque anterior thalamus 24. Next, we investigated whether TA is anchored to the earth-horizontal plane defined by visual cues. First, we repeated the analysis in Fig. 6d, e assuming that TA is referenced to vision, to confirm that the conclusion that tilt tuning is anchored to gravity still held. The correlation still peaked at a value close to 1 in the example cell (Fig. 6d, broken gray line, wpeak = 0.89) and, at the population level, wpeak was still centered on 1.03 ([0.97-1.09] CI; Fig. 6f). Accordingly, we fixed the gravity weight w to 1 in the following analysis. We computed 3D tuning curves assuming that TA is referenced to the gravity-based or the visually based horizontal plane, and compared them to the curves measured with the rotator upright by computing the partial correlation of azimuth tuning (where the correlation attributable to gravity tuning is eliminated; see Methods). We analyzed 19 cells (5 ADN, 1 RSC, 13 CIN) that were tuned to azimuth when moving freely and during Experiment 3-L (same inclusion criterion as in Fig. 3) and were recorded with the rotator tilted (Fig. 6g). At the population level, correlations were higher in a gravity-referenced frame (Wilcoxon signed-rank test, p = 5 × 10−4). On a cell-by-cell basis, the partial correlation was significantly higher when TA was referenced to gravity in 8 cells (3 ADN, 5 CIN) and was not significantly different between the two frames in all other cells (markers with gray border; note that the correlations in Fig. 6g are not significantly different from 0 in 8/19 cells because azimuth tuning is weak in the rotator). We conclude that the earth-horizontal plane that anchors TA is defined by gravity, which thus provides a vertical reference for all aspects of 3D orientation.

Discussion

In summary, these findings demonstrate that HD cells in two areas of the mouse navigation system, as well as in their output fiber bundle, are tuned in 3D. HD cells encode 2D tilt either in isolation or conjunctively with 1D azimuth (Fig. 2). The spatial properties of azimuth tuning are independent of tilt tuning (Fig. 3g), and the two are separable, i.e. a cell's entire 3D head orientation tuning curve can be computed from its tilt and azimuth tuning (Fig. 4). Tilt tuning is referenced to the gravitational vertical (Fig. 6d-f). Finally, azimuth tuning is anchored to visual landmarks 25 but, during 3D motion, it is defined by rotating the gravitationally defined earth-horizontal compass in alignment with the head-horizontal plane 13,14 (Figs. 3, 6g). A recent study by Shinder and Taube 17 concluded that HD cells encode only azimuth, computed by integrating rotations in the head-horizontal (yaw) plane. However, in our assessment 15, that study is in fact supportive of the tilted azimuth model, which was not directly tested in that work. Furthermore, we argue that their study is inconclusive with respect to tilt tuning 15 (Supplementary Fig. 16).
The proposed 3D model is compatible with the toroidal topology proposed for HD cells in bats 9 when azimuth is expressed in a TA frame and tilt is restricted to the pitch plane (Supplementary Fig. 17). Tilt PDs do not represent 3D space uniformly, as the pitch plane is over-represented, consistent with previous findings in both macaques 24 and bats 9. This over-representation is observed in tilt-only as well as conjunctive cells, indicating that it is not linked to the 3D properties of azimuth tuning. We found that azimuth tuning subsides when mice are close to upside-down, in agreement with previous findings in rats 16,17, possibly because upside-down is a singularity point at which TA is mathematically undefined 13 (Supplementary Fig. 9). In contrast, half of the HD cells in bats retain azimuth tuning when upside-down 9, which may be explained by the toroidal model, where the singularity is lifted by restricting head tilt to the pitch plane. The HD system of bats may have adapted by using a simpler coordinate system to encode azimuth in upside-down orientations based on ethological demands. A recent imaging study 27 indicates that the human RSC encodes pitch orientation in a virtual navigation task, although the ADN was found to encode mainly azimuth. It is possible that visually driven tilt signals arise in the RSC in a virtual environment where visual, but not inertial, gravity cues are present. We conclude that 3D tuning may be a ubiquitous feature of the mammalian HD system. We suggest that the denomination head direction cell should apply to tilt-tuned and conjunctive cells as well as to previously described azimuth-tuned cells. Together with Finkelstein et al. 9, the present study reveals that HD cells tuned in 3D exist in the ADN, RSC and presubiculum. ADN HD cells project to layer III pyramidal neurons in the presubiculum 28,29, and both populations discharge coherently 26, suggesting that presubicular HD cells may inherit their 3D properties from the ADN. The function of presubicular HD cells likely extends beyond relaying ADN HD signals, as indicated by the presence of egocentric information in presubicular but not ADN HD cells 30 and by the importance of the presubiculum for visually anchoring the HD network 31. The RSC is involved in visual processing, often hypothesized to transform visual landmarks from an egocentric to an allocentric reference frame 32,33, and RSC HD cells may combine HD signals with visual 34 or egocentric spatial information 35,36. Our findings (and Kim and Maguire's study 27) raise the possibility that the RSC may use gravity-referenced tilt signals to transform visual signals in 3D. It is notable that Finkelstein et al. 9 observed a functional gradient of azimuth-tuned to tilt-tuned HD cells in the presubiculum. We observed no difference between the granular and dysgranular RSC, nor any obvious functional gradient in the ADN (Supplementary Fig. 18). Our study is the first to record HD cells directly in the cingulum fiber bundle, which conveys projections from the ADN and RSC to parahippocampal regions, from the ADN to the RSC, and from the RSC to the cingulate cortex 18-21. Recordings of axonal spikes with tetrodes are uncommon but possible 37. Furthermore, histology clearly demonstrates that our recordings occurred in white matter (Supplementary Fig. 2g-j), and units recorded in the cingulum exhibited short spike durations consistent with axonal spikes 38 (Supplementary Fig. 18b).
The existence of tilt-tuned HD cells in the cingulum bundle indicates that 3D signals are communicated between various regions of the limbic system. Gravity is a fundamental vertical allocentric cue 12, which dominates vision in human verticality perception 10, even though visual signals can replace gravity cues in microgravity 39. Further, gravity sensing represents one of the most ubiquitous sensory modalities of terrestrial living organisms 4,5.

Fig. 6 Tilt tuning is anchored to gravity. a, b Rationale of the analysis and 3D tuning curve of an example cell when tilt is expressed relative to the visually referenced vertical (b, upper tuning curve) or the gravitational vertical (b, lower tuning curve). c Reference curve computed with the rotator upright. d Red: coefficient of correlation between the tuning curves in a and the reference in b, as a function of the weight w, assuming TA is referenced to gravity. The red band indicates the 95% confidence interval. Broken gray line: same correlation, computed assuming that TA is anchored to vision. e, f Gravity weight at which the correlation is maximal for all tilt-tuned cells where wpeak is significantly higher than 0, computed when TA is referenced to gravity (e) or vision (f). g Comparison of the partial correlation (with the effect of gravity removed) of azimuth-tuned cells, assuming that TA is anchored to a gravity- or visually referenced frame. Open/filled symbols: azimuth-only and conjunctive cells. Red/gray: cells where the difference between the two frames is significant or not. Source data are provided as a Source Data file.

Gravity is likely sensed by a combination of proprioceptive and vestibular inputs 40,41, and its computation likely involves the vestibulocerebellum 42-44. Prominent views of azimuth-tuned HD cells posit that they form a neuronal attractor that can memorize azimuth in the absence of sensory inputs 14,26,45, although some HD cells in the RSC 34 and parahippocampal regions 46 may not contribute to the attractor network. Fundamentally, the 1D attractor model implies that cells with similar azimuth PDs are constrained to fire together. However, most azimuth-tuned HD cells are also tilt-tuned, and cells with similar azimuth PDs may have different tilt PDs (Supplementary Fig. 19c). Such cells must fire at different head positions when the head tilts, contradicting the principle of an attractor. Indeed, we found that 1D attractor activity weakened when animals walked on the platform tilted at 60° (Supplementary Fig. 19), and we hypothesize that it would weaken further if tilt were increased. This suggests that the HD system follows 1D attractor dynamics when the head is upright, but this may not generalize to 3D motion. This raises the question of the dimensionality of the HD system during 3D motion. Previous studies 47,48 have used unsupervised approaches to reveal the 1D attractor. We did not attempt to generalize these approaches to 3D because our data are currently restricted to 60° tilt in freely moving animals, and because HD responses are largely attenuated in the rotator. Establishing whether tilt-tuned cells also form a 2D attractor will be challenging, especially since the gravitational input that anchors tilt tuning is not easily altered. Alternatively, there may be no gravity attractor at all, as there is no need for one: the vestibular system can compute gravity orientation directly, and no mathematical integration may be necessary 44,49.
Tilt and 3D orientation tuning had previously been identified only in aerial (bats) and tree-dwelling (macaques) species, raising the question of whether a 3D compass would be ethologically relevant to rodents. Although laboratory mice (Mus musculus) and rats (Rattus norvegicus) are primarily land-dwelling, they exhibit a rich 3D behavioral repertoire in the wild 50-52, easily learn 3D spatial orientation tasks 53 and are phylogenetically related to tree-dwelling rodents 54, including other muroids (e.g. harvest mice, Micromys minutus) and non-muroids (e.g. squirrels). It is therefore not surprising that rodents, like bats 9 and likely macaques 24 and humans 27, possess a three-dimensional compass, whose properties may be shared across mammals.

Methods

Animals. A total of 13 male adult mice (C57BL/6J), 3-6 months old, were used in this study (Supplementary Table 1). Animals were prepared for chronic recordings by implanting a head-restraint bar and a microdrive/tetrode assembly under general anesthesia (isoflurane) and stereotaxic guidance. Two skull screws were implanted in the vicinity of the target region, and a circular craniotomy (~1.5 mm diameter) was performed above the target region. Animals were single-housed on a reversed 12 h/12 h light/dark cycle. Experimental procedures were conducted in accordance with US National Institutes of Health guidelines and approved by the Animal Studies and Use Committee at Baylor College of Medicine.

Neuronal recordings. Neurons were recorded using 6 (mice AA1/AA2) or 4 (all other mice) tetrode bundles constructed with platinum-iridium wires (17 μm diameter, polyimide-insulated; California Fine Wire Co, USA) and platinum-plated to a target impedance of 200 kΩ using a Nano-Z electrode plater (Neuralynx, Inc.). Tetrodes were cemented to a guide tube (26-gauge stainless steel) and connected to a linear EIB (Neuralynx EIB/36/PTB). The tetrode and guide tube were attached to the shuttle of a screw microdrive (Axona Ltd, St Albans, UK) allowing a travel length of ~5 mm into the brain. The stereotaxic coordinates for each tetrode implant were based upon Bregma as a reference point. The coordinates used to target both the ADN and the CIN were 0.2 mm posterior and 0.7 mm lateral to Bregma. The granular/dysgranular RSC were targeted by implanting 2.0 mm posterior and 0.07/0.7 mm lateral to Bregma, respectively. Raw neuronal data were manually clustered based on spike waveform and amplitude using custom Matlab scripts. Spike clusters with similar spike waveforms and firing characteristics (inter-spike interval distribution and mean firing rate) were attributed to a single neuron. When neurons recorded over successive days on a single tetrode had similar firing characteristics and similar tuning to 3D head direction, we merged the data to avoid introducing duplicate data points into our analyses. At the end of tetrode recordings from each animal, the brain was removed for histological verification of electrode locations. The animals underwent transcardial perfusion with 4% paraformaldehyde (PFA). The brains were postfixed in 4% PFA and then transferred to 30% sucrose overnight. Brain sections (40 μm) were stained (Nissl or neutral red staining) and examined using bright-field microscopy to localize tetrode tracks (Supplementary Fig. 2). Photographs of histological slides were corrected for brightness, contrast, gamma and color balance.

Experimental apparatus.
In order to identify traditional HD cells, we first recorded as mice explored freely in a circular arena (50 cm diameter, 30 cm height; Fig. 1b). The walls of the arena were white, with a 45° black card providing a visual orientation cue. To record tilt tuning in freely moving mice, the arena was replaced by a movable platform constructed by mounting an oblong nylon mesh (20 × 30 cm, 1.5 cm mesh) onto a manually operated three-axis gimbal system (Supplementary Fig. 4a). The system was placed at the center of a large cylinder (130 cm diameter, 2 m height), whose door was left open during recording to provide a visual landmark and to allow the experimenter to monitor each mouse. In both systems, neuronal data were acquired at 22 kHz using a MAP system (Plexon Inc.). The microdrive's EIB was plugged into a tethered head stage that included two LEDs (one red and one infrared, 4 cm apart) for optical tracking (Cineplex, Plexon Inc.). In addition, each mouse's head was equipped with a digital 6-degree-of-freedom inertial measurement unit (IMU; SparkFun SEN-10121) for measuring head tilt relative to gravity. Perspective effects that could affect optical tracking when the head tilted away from horizontal were corrected based on the IMU data. To measure 3D orientation tuning with a uniform representation of tilt angles, we tested animals using a motorized rotator, which also allowed us to separate visual from gravity representations. We gently restrained each mouse's body, fixed its head rigidly, and placed it at the center of a rotation simulator (Supplementary Fig. 6a) composed of a motorized three-axis motion system (Axes I-III in Supplementary Fig. 6a) inside a visual surround sphere (1.8 m diameter; Acutronics Inc., Switzerland). The inside of the sphere was painted white, with three horizontal lines of dots (10° diameter, 30° spacing) to provide horizon and optokinetic cues. Three vertical LED stripes, affixed to black vertical bands, were placed 22.5° apart to provide a horizontal orientation cue. A fourth rotation axis (Axis IV) allowed tilting the rotator and the sphere together (sphere door closed). Neuronal data were acquired at 30 kHz using a neural data acquisition system (SpikeGadgets, San Francisco, California). The position of the rotator's axes (and therefore the 3D orientation of the head) was measured with potentiometers installed in each rotation axis and digitized at 833 Hz. All recorded data were organized in a custom-made database using DataJoint 55.

Experiment 1: Characterization of HD tuning in the arena. We recorded neuronal responses during five 8-min sessions. A first recording session was performed in light (Experiment 1-L0). We then performed the other protocols described below on the moving platform and rotator, before returning the mouse to the same arena and performing three separate 8-min sessions: first in light (Experiment 1-L1), then in darkness (Experiment 1-D), and then a repeated session in light (Experiment 1-L2).

Experiment 2: Tilt tuning in freely moving animals. We recorded neural responses while mice walked freely on a platform. Recordings were performed in 5-min blocks during which the setup's axes II and III were fixed. Within a single block with the platform tilted, active locomotion on the platform's surface changed the mouse's head azimuth (Az) and tilt orientation (angle γ) together, and these variables are therefore correlated (Supplementary Fig. 4b, yellow, magenta).
Rotating the base (rotation along the blue arrow) between blocks added an offset to azimuth while leaving the range of head tilt unchanged (e.g. Supplementary Fig. 4b, yellow vs. magenta). This manipulation allowed coverage of all possible head azimuth and tilt orientations (plane in Supplementary Fig. 4b), thereby covering a large portion of 3D space relatively uniformly (up to α = 60°; Supplementary Fig. 4c). We performed one additional manipulation of the space covered by the animal: half-way through each block, the platform was rotated using axis I (Supplementary Fig. 4a). This manipulation served as a control for the following potential confounding factor: as long as only Axis III is operated, local azimuth on the platform is anchored to gravity, e.g. the same side of the platform is always placed downward. Therefore, if a cell's firing were anchored to the azimuth on the platform itself, and not to tilt, its response could be misinterpreted as a tilt response. Changing Axis I multiple times within each block randomizes local azimuth relative to tilt, which prevents this potential confound. We performed 17 blocks (~68 min) with the following organization: (1) one block where the platform was horizontal (duration: 8 min), (2) eight blocks where the platform was tilted 45° and the base was rotated in steps of 45° (duration: 2.5 min each) and (3) eight blocks where the platform was tilted 65° and the base was rotated in steps of 45° (duration: 5 min each). Note that mice tended to upright their head; therefore, tilting the mesh 45° and 70° resulted in average head tilts of ~35° and 60°, respectively (e.g. Supplementary Fig. 4c). Together, these 17 blocks allowed a relatively uniform sampling of 3D head orientation (at tilt angles up to ~60°) while mice were unrestrained and locomoting freely. To ensure that tilt space was adequately sampled, we computed the occupancy distribution d (i.e. the time spent) across 73 tilt positions (uniformly distributed in tilt space up to 60°). Next, we computed the entropy of d, E(d) = −Σ p(d)·log2(p(d)), which ranges from log2(73) = 6.19 bits (uniform distribution) to 0 (if the mouse occupies a single point). We excluded cells where E(d) < 5.6, which corresponds to mice sampling less than 2/3 of the tilt space. 45% of recorded cells (not counted in Supplementary Table 1) were excluded based on this criterion.
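A minimal Python sketch of this entropy-based coverage criterion (illustrative only; in practice the bin occupancies come from the tracking data):

import numpy as np

def coverage_entropy(occupancy):
    # Entropy (bits) of the occupancy distribution over the 73 tilt bins.
    p = np.asarray(occupancy, float)
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log2(0) -> 0 by convention
    return float(-(p * np.log2(p)).sum())

# Uniform occupancy over 73 bins gives log2(73) = 6.19 bits; the 5.6-bit
# threshold corresponds to uniform coverage of ~2/3 of the bins, since
# log2(73 * 2/3) = 5.6. Cells below threshold were excluded.
occupancy = np.random.default_rng(1).integers(1, 100, size=73)
keep_cell = coverage_entropy(occupancy) >= 5.6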
Experiment 3: Three-dimensional tuning in the rotator. The rotator was programmed to scan 3D rotation space uniformly using preprogrammed trajectories that sampled 200 head tilt orientations uniformly (Supplementary Fig. 6b, red; Supplementary Movie 3), with a distance between adjacent points of ~15°. We computed four distinct trajectories (no overlap; Supplementary Fig. 6c, different colors), each of which visited all points once, in a different order. Trajectories traveled through each point in a straight line at a constant velocity (30°/s) and changed direction between points (Supplementary Fig. 6b, c). All trajectories were replayed forward and backward. This technique ensures that the 2D space of head tilt is covered uniformly. While the desired head tilt is achieved by controlling the two innermost axes (I and II), azimuth is varied by rotating axis III (outer) of the rotator at a constant velocity (±15°/s; Supplementary Fig. 6d, red; the velocity is reversed every four rotations). During the trajectory, mice always faced at least 90° away from the second axis (black in Supplementary Fig. 6a) to ensure that the visual field in front of the mouse was not obstructed. We performed the following variants of the protocol: (i) with the LED stripes (placed inside the visual enclosure) on (Experiment 3-L), (ii) with the LED stripes off (Experiment 3-D), and (iii) with the LED stripes on, after the rotator and the visual enclosure were tilted en bloc 60° relative to vertical by operating Axis IV (Supplementary Fig. 6) (Experiment 3-T).

Experiment 4: Yaw/pitch/roll rotations. The rotator was programmed to rotate each mouse back and forth in yaw, pitch or roll at a constant velocity of 30°/s. Starting from rest, each movement included an acceleration phase of 1 s up to 30°/s, then 380° of rotation at constant velocity, and finally a deceleration period of 1 s. To exclude any potential response to accelerations or decelerations, only data recorded during the central 360° of the constant-velocity rotation period were used in the analysis.

Data analysis. All well-isolated neurons recorded during at least one foraging session in light in the arena (Experiment 1-L0, L1 or L2) and during Experiment 2 or Experiment 3-L were included in the analyses, with the following exceptions:
• Recordings in animals H51M, H54M and H59M were performed in an early version of the rotator in which the vertical LED stripes and black bands were absent. Cells never exhibited azimuth tuning when recorded in this setup, but could otherwise be classified as azimuth-tuned based on Experiment 1. These animals were excluded from all analyses, except in Supplementary Fig. 18.
• We designed a coverage criterion for Experiment 2, as described above. Neurons that did not pass this criterion were considered only in the analysis of Supplementary Fig. 19.
The corresponding numbers of neurons are shown in Supplementary Table 1. We first classified neurons as azimuth-tuned or non-azimuth-tuned based on their responses in the freely moving arena. Neurons could also be classified as azimuth-tuned or azimuth-untuned based on their responses on the platform and in the rotator. However, because azimuth responses have lower amplitude in the rotator, they often did not reach the significance level. Therefore, throughout the study, azimuth-tuned refers by default to the classification based on freely moving data in the arena (Experiment 1). Similarly, neurons were classified twice as tilt-tuned or not, based on recordings on the orientable platform and in the rotator independently. As with azimuth tuning, neurons that exhibited significant tilt tuning when moving freely may not be significantly tuned in the rotator, because responses in the rotator had lower amplitude. Conversely, some neurons that exhibited significant tilt tuning in the rotator were not significantly tuned when moving freely, because this protocol sampled a limited range (~1/3) of head tilts. Nevertheless, differences were small, and the majority of neurons that were significantly tilt-tuned in one setup were also tuned in the other (Fig. 2). Importantly, we confirmed that azimuth and tilt tuning in the rotator and when moving freely were correlated in amplitude and consistent in spatial characteristics for neurons significantly tuned to azimuth or tilt in both experiments (Fig. 2c, d; Supplementary Fig. 8).
For each recorded neuron, we computed the following tuning curves: (1) to evaluate azimuth tuning, we computed 1D azimuth tuning curves in all conditions of Experiment 1, in Experiment 2 when the platform was horizontal, in Experiment 4-Yaw, and from data points where head tilt was less than 45° during Experiment 3-L; (2) to evaluate tilt tuning for Experiment 2 and Experiments 3-L, D and T, data were averaged across azimuth. We also computed pitch/roll tuning curves based on Experiment 4. Note that tilt tuning is different from the recent finding that ADN HD cells encode azimuth in a tilt-dependent manner 13,14, which has been explained by a framework called the dual-axis rule or tilted azimuth (TA); see Supplementary Fig. 9. Although tilt is used to compute TA, TA signals do not carry any information about head tilt since, given any value of TA, all head tilts are still possible. Reciprocally, the tilt tuning identified here does not carry any information about TA since all azimuths are still possible. Thus, 2D tilt tuning and 1D tilted azimuth encode different dimensions of 3D head orientation (see also Supplementary Movie 1). Neuronal responses were evaluated by computing tuning curves, which were smoothed using Gaussian kernels with a standard deviation of 15° in both azimuth and tilt (we used an equivalent standard deviation of sin(15°) when tilt was expressed in Cartesian coordinates). We computed 3D azimuth tuning curves in three different ways, by expressing azimuth in a yaw-only (YO), an earth-horizontal (EH) or a tilted (TA) frame, and found that the latter accounted for neuronal responses best (Fig. 3; Supplementary Fig. 9). Earth-horizontal azimuth is computed by defining a forward-pointing vector N, aligned with the head's naso-occipital axis, and encoding its orientation in an earth-fixed reference frame (i, j, k), i.e. N = (N_i, N_j, N_k). Earth-horizontal azimuth is defined as the orientation of N in the earth-horizontal (i, j) plane, i.e. EHAz = atan2(N_j, N_i). EH azimuth can be transformed into tilted azimuth by rotating N towards the earth-horizontal plane along the tilt rotation (i.e. the shortest rotation that would bring the head back to upright) before computing its bearing.

Tuning curve fitting. To quantify tuning curves, von Mises and/or Gaussian functions were fitted, and a standard shuffling analysis was used to evaluate the statistical significance of azimuth and tilt tuning. 2D tilt tuning curves were fitted with Gaussian distributions (Fig. 4; Supplementary Fig. 10), with tilt expressed in Cartesian coordinates: FR_Ti(α, γ) = FR_0 + A·N_{M,C}(G_X, G_Y, G_Z), where N_{M,C}(G_X, G_Y, G_Z) is a 3D Gaussian distribution centered on M and with covariance matrix C. Azimuth tuning curves were fitted with circular normal (von Mises) distributions. Preliminary analysis revealed that the PD of azimuth tuning is maintained when the head tilts (when azimuth is expressed in a tilted frame) but that its gain changes. To account for this, we defined a tilt-dependent gain g(α) and expressed azimuth tuning as FR_Az(Az, α) = 1 + g(α)·(exp(κ·cos(Az − PD))/l − 1), where κ is the concentration parameter of the von Mises distribution. For convenience, we normalized FR_Az(Az, α) such that its average value across all azimuths is 1 (by setting l to the average value of exp(κ·cos(Az − PD))). Finally, we evaluated the interaction between azimuth and tilt tuning by fitting 3D tuning curves defined as the product of the azimuth and tilt tuning curves defined above, i.e. FR(Az, α, γ) = FR_Ti(α, γ)·FR_Az(Az, α). Tilt tuning curves had 11 free parameters: FR_0, A, M (3-dimensional) and C (a covariance matrix, i.e. 6-dimensional). The normalized azimuth tuning curves had 2 free parameters (κ and PD).
The tilt-dependent gain g(α) was fitted independently at 13 tilt angles α, ranging from 0 to 180° in increments of 15°, resulting in 13 additional free parameters. The 3D tuning curves were computed from experimental data at 184 uniformly distributed tilt orientations and 24 azimuth orientations, i.e. 4416 points, and fitted to the 3D curve model by nonlinear least-squares optimization (Matlab function lsqnonlin). Note that, since the average value (across all azimuths) of FR_Az(Az, α) was 1, the average tilt tuning curve (across all azimuths) was FR_Ti(α, γ). We also tested an additional model in which azimuth and tilt tuning interact additively, i.e. as the sum, rather than the product, of the tilt and azimuth tuning components. We found that this model did not fit 3D tuning curves as well as the multiplicative model: its correlation coefficient was significantly lower in 33/53 conjunctive cells and better in only 1/53 conjunctive cells (Fisher r-to-z transform, at p < 0.01), and significantly lower at the population level (median ρ = 0.85, [0.80-0.87] CI vs. 0.88, [0.86-0.91] CI, p < 10−8, paired Wilcoxon test). Therefore, we used the multiplicative model to model 3D responses in this study. The length of the mean vector |R| (i.e. the normalized Rayleigh vector), ranging from 0 to 1, is commonly used to assess how strongly a cell is tuned (|R| = 0, untuned cell; |R| = 1, maximally tuned cell) independently of its average firing rate; it allows comparing cells with a large range of peak firing rates. Thus, azimuth tuning was quantified consistently with previous studies by computing the mean vector R = c·Σ FR(Az)·exp(i·Az)/Σ FR(Az), where FR(Az) was sampled at 100 positions separated by 3.6° and c = 3.6 × π/180/(2·sin(1.8°)) is a binning correction factor 56. Mean vectors can be generalized to a 2D distribution by expressing tilt in Cartesian coordinates G = (G_X, G_Y, G_Z) and computing a corresponding mean vector; the resulting 2D vector has a length of 1 if all spikes occur at the same tilt and 0 if spikes are distributed uniformly or symmetrically. However, because mean vectors computed in 1D and 2D cannot be compared directly, we developed an alternative measure called the normalized tuning amplitude (NTA; Supplementary Fig. 7). The normalized tuning amplitude of a 1D azimuth or 2D tilt tuning curve was defined based on the maximum (FR_max) and minimum (FR_0) firing rates as NTA = (FR_max − FR_0)/FR_max. Thus, the normalized tuning amplitude ranged from 1 (when a tuning curve ranged from 0 Hz to a peak value) to 0 (when a cell was unmodulated). Note that the normalized tuning amplitude measures the cell's modulation amplitude, but not the sharpness of the tuning curve. We defined a cell's tilt preferred direction (PD) as the orientation at which the fitted 2D tilt tuning curve is maximal. In cells with bimodal tilt tuning (Supplementary Fig. 10), the PD corresponded to the highest peak. We compared the differences between PDs recorded in light and darkness in Fig. 5d. If a bimodal cell has two peaks of approximately equal amplitude, the highest peak measured in light may become the smaller one in darkness, resulting in an apparent change in spatial tuning. To prevent this, we compared the PD in light with the directions of the two peaks measured in darkness, and the difference in PD was defined as the smaller of the two differences.
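The two tuning metrics defined above can be sketched in Python as follows. The 1D mean vector is the standard circular mean with the binning correction c quoted above (c ≈ 1.00016 for 3.6° bins); the toy tuning curve is illustrative.

import numpy as np

def mean_vector_length(FR, az):
    # |R| for a 1D azimuth tuning curve FR sampled at azimuths az (radians,
    # 100 bins of 3.6 deg), with the binning correction factor c.
    c = np.radians(3.6) / (2.0 * np.sin(np.radians(1.8)))
    R = np.sum(FR * np.exp(1j * az)) / np.sum(FR)
    return c * np.abs(R)

def normalized_tuning_amplitude(FR):
    # NTA = (FR_max - FR_0) / FR_max: 1 for a curve dipping to 0 Hz,
    # 0 for an unmodulated cell; insensitive to tuning sharpness.
    return (np.max(FR) - np.min(FR)) / np.max(FR)

az = np.radians(np.arange(100) * 3.6)
FR = 2.0 + 10.0 * np.exp(2.0 * np.cos(az - np.pi / 2))   # a sharply tuned cell
print(mean_vector_length(FR, az), normalized_tuning_amplitude(FR))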
Statistical procedures to determine significant tuning. We used a shuffling procedure 57,58 to assess the statistical significance of azimuth or tilt tuning. Each shuffled sample was generated by (1) shifting the entire spike train circularly by a random value of at least ±10 s, (2) recomputing the tuning curve, (3) performing the Gaussian fit and (4) computing the azimuth and/or tilt normalized tuning amplitude. We computed the mean value m and standard deviation σ of the normalized tuning amplitude across 100 shuffled samples. The statistical p-value of the normalized tuning amplitude NTA measured in the un-shuffled data was computed as 1 − F(NTA, m, σ), where F is the cumulative Gaussian distribution with mean m and standard deviation σ. The standard deviation of the model's correlation across all samples was also used as an estimate of the standard error of the model's correlation. We considered azimuth or tilt tuning to be significant if (1) the p-value computed as described above was less than 0.01 and (2) the normalized tuning amplitude NTA was equal to or higher than 0.25. The second criterion is equivalent to selecting cells where the modulation was at least one third of the baseline firing rate, and was used to eliminate cells with very small but significant modulation (how this threshold compares to criteria used in other studies is analyzed and discussed in Supplementary Fig. 7). We combined data from multiple repetitions of Experiment 1-L to assess whether cells were significantly tuned to azimuth when the animal was moving freely. We used two techniques to combine multiple repetitions: (1) we analyzed each repetition independently and computed the median p-value and amplitude across repetitions, and (2) we computed a p-value and amplitude based on data pooled across all repetitions. We tested whether the values obtained with technique (1) or technique (2) passed the criteria described above, and classified cells as azimuth-tuned if they passed either test. This two-technique approach was used because pooling data yields greater statistical power but fails if the cell's PD shifted between sessions, whereas the first approach is not affected by shifts in the PD. In total, 37% of cells passed both tests, 6.5% passed the first test only and 7.5% passed the second test only; therefore, 51% of cells passed one or the other and were classified as azimuth-tuned. We used data from Experiment 3-L to assess whether cells were significantly tuned to tilt, by testing whether the normalized tuning amplitude of the average tilt tuning curve (across all azimuths) passed the criteria described above. We also tested whether cells were significantly tuned to azimuth in the rotator by computing the azimuth tuning curve based on data for up to 45° tilt and testing whether its normalized tuning amplitude passed the criteria described above. In some cells, Experiment 2 or Experiment 3-L was repeated multiple times. We found that the preferred directions of tilt and azimuth tuning were stable across repetitions, and pooled data across all repetitions.
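A minimal Python sketch of this shuffling test, with the tuning-curve recomputation and fit abstracted into a caller-supplied function (a hypothetical hook, since steps (2)-(3) depend on the fitting pipeline):

import numpy as np
from math import erf, sqrt

def shuffle_significance(spike_times, t0, t1, nta_of, n_shuffles=100, min_shift=10.0):
    # Significance of the observed NTA against circularly time-shifted spike
    # trains. nta_of(spikes) recomputes the tuning curve, fits it and returns
    # its normalized tuning amplitude.
    duration = t1 - t0
    nta = nta_of(spike_times)
    rng = np.random.default_rng(0)
    shuffled = np.empty(n_shuffles)
    for i in range(n_shuffles):
        shift = rng.uniform(min_shift, duration - min_shift)  # at least +/-10 s
        shuffled[i] = nta_of(np.sort(t0 + (spike_times - t0 + shift) % duration))
    m, s = shuffled.mean(), shuffled.std()
    p = 1.0 - 0.5 * (1.0 + erf((nta - m) / (s * sqrt(2.0))))  # 1 - F(NTA; m, s)
    return p, (p < 0.01) and (nta >= 0.25)   # both criteria must hold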
Partial correlation of azimuth tuning. We compared models of azimuth tuning (e.g. in Fig. 6g; Supplementary Fig. 11) by fitting 3D neuronal responses with 3D models that included the respective azimuth models and comparing the models' coefficients of correlation. However, in most cells, this correlation is largely determined by the neuron's tilt tuning. To better reveal the specific contribution of azimuth responses, we computed the partial correlation specifically attributable to azimuth tuning by removing the contribution of gravity. We first fitted the 3D model normally and computed the sum of squared residuals ssr. Next, we removed azimuth tuning from the best-fitting model (by setting g(α) to zero) to create a tilt-only model, and computed the sum of squared residuals ssr_G. We defined the partial correlation coefficient of azimuth as $\rho = \sqrt{(ssr_G - ssr)/ssr_G}$. Intuitively, this coefficient measures how much the tilt-only model's error is reduced by the addition of azimuth tuning.

Tilt and azimuth velocity analysis. We performed another Gaussian curve fitting to test whether neurons carry a mixture of tilt and tilt derivative information. We expressed head tilt measured during Experiment 3-L in Cartesian coordinates (G_X, G_Y, G_Z) and then computed the time derivative of the gravity vector (dG_X/dt, dG_Y/dt, dG_Z/dt) as a measure of tilt velocity. Next, we fitted neuronal firing rate with $FR = FR_0 + A \cdot N_{M,C}(G_X, G_Y, G_Z) + A' \cdot N_{M',C'}(dG_X/dt, dG_Y/dt, dG_Z/dt)$. We computed the normalized tuning amplitudes of tilt and tilt velocity and used the same shuffling method and criteria (p < 0.01, NTA > 0.25) as in other Gaussian fits to assess whether cells were significantly tuned to the gravity derivative. Note that the tilt tuning curves obtained with this method were identical to those obtained when fitting the 3D model that includes tilt and azimuth tuning. We investigated whether cells encode azimuth velocity (dAz/dt) by computing azimuth velocity tuning curves for each cell (i.e. average firing rate as a function of dAz/dt) using all data from Experiment 1-L. The tuning curves were evaluated at all velocities ranging from −200 to 200°/s in increments of 20°/s and smoothed using an 8°/s Gaussian kernel. We computed the amplitude and normalized tuning amplitude of these curves, and used the same shuffling method and criteria (p < 0.01, NTA > 0.25) to assess whether cells were significantly tuned.

Experiment 3-T. To analyze the results of Experiment 3-T, we expressed 3D head motion (tilt and azimuth) in both gravity and visual reference frames. Motion in the gravity frame was computed based on the actual 3D position of the head, and decomposed into gravity-referenced tilt (G, in Cartesian coordinates) and tilted azimuth (TA_G). Motion in the visual frame was computed as if Axis IV of the system had not been tilted, and decomposed into visually referenced tilt (V, in Cartesian coordinates) and tilted azimuth (TA_V). Next, we assumed that HD cells encode a tilt signal computed as a weighted average of G and V (T(w) = w·G + (1 − w)·V, normalized to a length of 1) and an azimuth signal equal to TA_G or TA_V. We computed the cells' 3D tuning curves for each possible value of w, ranging from −1 to 2, and based on TA_G or TA_V. Each tuning curve was compared to the tuning curve fitted to the data measured with the rotator upright (Experiment 3-L) by computing their pixel-by-pixel correlation. We used the shuffling procedure described above to generate 60 shuffled samples of the tuning curves in Experiment 3-T. The standard deviation of the correlation between the curve fitted to Experiment 3-L and these samples was used as an estimate of the standard error of the correlation, and 99% confidence intervals were set to 2.56 times the standard error. To compare the two models of tilted azimuth (TA_G and TA_V), we computed the partial correlation of azimuth tuning as described above.
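Two of the quantities defined above, the partial correlation coefficient of azimuth and the weighted tilt signal T(w) used in Experiment 3-T, are short computations. A minimal sketch (variable and function names are illustrative, not from the paper):

```python
import numpy as np

def azimuth_partial_correlation(fr, pred_full, pred_tilt_only):
    """rho = sqrt((ssr_G - ssr) / ssr_G).

    fr             : observed firing rates over the 3D orientation bins
    pred_full      : prediction of the full (tilt x azimuth) model
    pred_tilt_only : same fit with the azimuth gain g(alpha) set to zero
    Measures how much the tilt-only model's error is reduced by azimuth tuning.
    """
    fr = np.asarray(fr, dtype=float)
    ssr = np.sum((fr - pred_full) ** 2)         # residuals of the full model
    ssr_g = np.sum((fr - pred_tilt_only) ** 2)  # residuals of the tilt-only model
    return np.sqrt((ssr_g - ssr) / ssr_g)

def weighted_tilt(G, V, w):
    """Candidate tilt signal T(w) = w*G + (1 - w)*V, renormalized to unit length.

    G, V : arrays of gravity- and visually referenced tilt vectors (..., 3).
    """
    T = w * np.asarray(G) + (1.0 - w) * np.asarray(V)
    return T / np.linalg.norm(T, axis=-1, keepdims=True)
```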
Experiment 4P/R. Using the full 3D tuning curve data from Experiment 3-L, we predicted the responses to pitch and roll rotations by sampling the 3D tuning curve at the head orientations visited during pitch and roll rotations. The pitch and roll tuning curves measured during Experiment 4 and predicted based on Experiment 3 were then fitted with 1D Gaussians, and the resulting modulation amplitudes and preferred directions were compared. Data from Experiments 3 and 4 in light and darkness were averaged.

Cross-correlation analysis. In Supplementary Fig. 19, we performed an analysis similar to that of Peyrache et al. 26. Neuronal firing rates were sampled in 33 ms time bins and smoothed using a Gaussian filter with a 300 ms standard deviation. Cross-correlograms between pairs of simultaneously recorded cells were computed using the Matlab function xcorr with the normalization option "coeff". Finally, the median value of the cross-correlogram at time lags >3 s was subtracted from each cross-correlogram.

Firing properties. We confirmed that the distributions of average firing rates and CV2 were similar for all cells in the CIN, ADN, and RSC (Supplementary Fig. 18a).

Reporting Summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability
Source data underlying Figs. 2a-d, 3d, g, 4c, 5a-e and 6e-g are provided as a Source Data file. Further data are available from the corresponding author upon reasonable request. A reporting summary for this Article is available as a Supplementary Information file.
Diagnosing mild traumatic brain injury using saliva RNA compared to cognitive and balance testing

Abstract Background Early, accurate diagnosis of mild traumatic brain injury (mTBI) can improve clinical outcomes for patients, but mTBI remains difficult to diagnose because of reliance on subjective symptom reports. An objective biomarker could increase diagnostic accuracy and improve clinical outcomes. The aim of this study was to assess the ability of salivary noncoding RNA (ncRNA) to serve as a diagnostic adjunct to current clinical tools. We hypothesized that saliva ncRNA levels would demonstrate comparable accuracy for identifying mTBI as measures of symptom burden, neurocognition, and balance. Methods This case-control study involved 538 individuals. Participants included 251 individuals with mTBI, enrolled ≤14 days postinjury, from 11 clinical sites. Saliva samples (n = 679) were collected at five time points (≤3, 4-7, 8-14, 15-30, and 31-60 days post-mTBI). Levels of ncRNAs (microRNAs, small nucleolar RNAs, and piwi-interacting RNAs) were quantified within each sample using RNA sequencing. The first sample from each mTBI participant was compared to saliva samples from 287 controls. Samples were divided into a training set (n = 430; mTBI = 201 and control = 229) and a naïve test set (n = 108; mTBI = 50 and control = 58). The training set was used to identify ncRNA diagnostic candidates and create a diagnostic model. Model accuracy was then assessed in the naïve test set. Results A model utilizing seven ncRNA ratios, along with participant age and chronic headache status, differentiated mTBI and control participants with a cross-validated area under the curve (AUC) of .857 in the training set (95% CI, .816-.903) and .823 in the naïve test set. In a subset of participants (n = 321; mTBI = 176 and control = 145) assessed for symptom burden (Post-Concussion Symptom Scale), as well as neurocognition and balance (ClearEdge System), these clinical measures yielded cross-validated AUC of .835 (95% CI, .782-.880) and .853 (95% CI, .803-.899), respectively. A model employing symptom burden and four neurocognitive measures identified mTBI participants with similar AUC (.888; CI, .845-.925) as symptom burden and four ncRNAs (.932; 95% CI, .890-.965). Conclusion Salivary ncRNA levels represent a noninvasive, biologic measure that can aid objective, accurate diagnosis of mTBI.

INTRODUCTION Mild traumatic brain injury (mTBI) is characterized by brief confusion, loss of consciousness, posttraumatic amnesia, and/or other transient neurological abnormalities (eg, seizure), with a Glasgow Coma Scale score of 13-15 after 30 min postinjury or later. 1 Nearly 3 million mTBIs occur in the United States each year, and the majority occur in adolescents and young adults. 2-4 The prevalence of mTBI in adolescents is on the rise, resulting in increasing economic and healthcare system burden. 5,6 mTBI is associated with significant morbidity, including headaches, fatigue, and difficulties with concentration. 2,7 mTBI is also associated with missed school or work, and increased healthcare utilization. 8 mTBI can have a wide range of effects on physical, cognitive, and psychological function, negatively impacting cognitive abilities, academic performance, behavior, social interaction, and employment.
9-11 It can be difficult to identify the physical and neurocognitive effects of mTBI, as some effects may be attributed to other causes such as anxiety, depression, attention deficit hyperactivity disorder (ADHD), exercise-related fatigue, or chronic headache disorder. Because of this, physical and neurocognitive measures have limited specificity and utility for diagnostic purposes when administered alone. 12 As such, identifying reliable objective biomarkers is necessary to effectively screen, diagnose, and treat mTBI. Another obstacle when screening for mTBI is that symptoms and deficits often have delayed emergence. 10 This temporal pattern can be attributed to the natural progression of pathophysiologic changes, underdiagnosis, and/or underreporting. 12,13 As early diagnosis and intervention can minimize latent adverse events (such as persistent symptoms) and improve one's quality of life, the ability to quickly and accurately diagnose mTBI is critical to improving outcomes. 5,13 However, studies show that mTBI is both underdiagnosed and underreported. 14,15 The 2018 guidelines on the diagnosis of mTBI from the Centers for Disease Control and Prevention (CDC) advise healthcare professionals to use age-appropriate, validated symptom rating scales as a component of mTBI evaluation (moderate, level B evidence), which can be accompanied by computerized cognitive testing (moderate, level C evidence). 16 The guidelines specifically state that healthcare professionals should not use biomarkers outside of a research setting (high, level R evidence), and the utility of balance testing for mTBI diagnosis is not discussed. Unfortunately, individuals seeking to expedite or delay return-to-activities may manipulate cognitive measures by "sandbagging" baseline tests, or exaggerate postinjury symptom reports. 17-19 In addition, the signs and symptoms of mTBI can be subtle and easily missed in acute care settings. 14 To add to that challenge, many acute care and emergency medicine providers report insufficient training to effectively diagnose and manage mTBI. 20 Thus, a reliable and objective mTBI diagnosis remains a critical unmet need. A number of alternative technologies for traumatic brain injury (TBI) diagnosis have been proposed, including neuroimaging, electrophysiology, and blood biomarkers. 21,22 Though these approaches have shown promise, they also carry limitations that may impede clinical adoption. 23 For example, neuroimaging and electrophysiology technologies require expensive, nonportable equipment and specialist interpretation. Changes in blood-based proteins and lipids following TBI demonstrate utility for determining risk of intracranial bleeding and the need for computed tomography in adults. 24 Recent advances in blood-based biomarkers suggest they may soon provide clinical utility for mTBI diagnosis. 25-27 Yet, most mTBIs do not result in intracranial bleeding. Further, protein biomarkers are typically present at low concentrations (fM to pM), are susceptible to degradation, and may have difficulty crossing the blood-brain barrier in cases of mild injury. 28-30 Moreover, athletic trainers who lack venipuncture training cannot employ blood-based biomarkers field-side, where over a third of mTBIs in adolescents and young adults occur. 3,10 Micro-ribonucleic acids (miRNAs) are a class of small, noncoding RNAs (ncRNAs) that regulate protein transcription through sequence-specific binding and degradation of messenger RNA.
Like other ncRNAs, including small nucleolar RNAs (snoRNAs) and piwi-interacting RNAs (piRNAs), miRNAs play an important role in brain development. 31 They have also been implicated in both severe and mild forms of TBI. 32,33 Neurons can package ncRNAs within protective vesicles, allowing them to traverse the extracellular space, dock at distant cells, and influence gene expression. 34 Thus, measurement of salivary ncRNAs arising from the five cranial nerves in the oropharynx may provide a noninvasive molecular window into the physiology of mTBI. 35,36 Previously, we showed that miRNA levels in cerebrospinal fluid are mirrored in the saliva of pediatric patients with mTBI. 37 In a follow-up study of 52 children with mTBI, saliva miRNA levels accurately predicted duration of symptoms. 38 Finally, we showed that miRNA levels change within hours of mTBI, and this response occurs in saliva before serum. 39 To date, the diagnostic potential of salivary ncRNAs has not been assessed in a large mTBI cohort with adequate controls (ie, individuals with orthopedic injury [OI], chronic headache, recent exercise, and neuropsychological comorbidities). Further, the diagnostic accuracy of ncRNAs relative to current standard-of-care assessments has not been tested. We hypothesized that measurement of salivary ncRNAs would demonstrate comparable accuracy for discerning mTBI in children and young adults, relative to symptom burden, neurocognitive assessment, or balance measures. We also posited that a composite assessment, combining saliva ncRNA measures with other objective tools and subjective symptom reports, would increase diagnostic accuracy. These hypotheses were tested in a multisite, case-control study, involving 538 children and young adults (251 with mTBI and 287 controls).

Participants. This prospective multicenter, case-control study included a convenience sample of 538 individuals, ages 5-66 years. There were 251 individuals with a clinical diagnosis of mTBI, defined by the 2016 Concussion in Sport Group criteria as rapid-onset, short-lived, spontaneously resolving impairment in neurologic function, typically reflected by functional disturbance rather than structural injury, and characterized by a range of clinical symptoms (eg, headache, dizziness, confusion, and amnesia) that may or may not involve loss of consciousness. 40 This broad definition of mTBI was chosen to include "mild" cases that did not result in loss of consciousness, and potentially improve the sensitivity of a resulting ncRNA diagnostic algorithm in future investigations. The mTBI group included individuals with sport-related and nonsport mechanisms of injury, enrolled from emergency departments (EDs), sports medicine clinics, urgent care centers, concussion specialty clinics, and outpatient primary care clinics at initial clinical presentation (within 14 days of injury). Following enrollment, saliva and survey data were collected from mTBI participants across five time points: <72 h (n = 129), 4-7 days (n = 120), 8-14 days (n = 190), 15-30 days (n = 105), and 31-60 days postinjury (n = 135). The first sample from each participant was used to generate the diagnostic algorithm, whereas longitudinal samples were used only to explore the physiologic characteristics of ncRNA candidates postinjury (see Section 2.8, below). The mTBI group was compared to a control group of 287 individuals with absence of mTBI in the previous 12 weeks and clinical resolution of any previous mTBI.
The control group was enrolled from outpatient primary care clinics, emergency departments, outpatient specialty care clinics, and sports medicine clinics. To provide comparable rates of OI, recent exercise (within 60 min of sample collection), and neuropsychological conditions (eg, depression and anxiety) between control and mTBI groups, 25 control participants were excluded from downstream analysis (Figure S1). OI was defined as an upper/lower extremity sprain, contusion, or fracture within 14 days of enrollment. Recent exercise was defined as ≥30 min of mild/moderate physical activity on the day of enrollment. A subset of control participants with recent exercise (25/38) included collegiate football athletes, for whom head impacts were recorded. Research staff directly observed and recorded helmet-to-ground, helmet-to-helmet, and helmet-to-body impacts for each player during a full-contact practice immediately prior to saliva collection. None of the football athletes were diagnosed with mTBI by athletic training staff in the course of the practice. All participants were enrolled from April 2017 through February 2020 at 11 institutions (including Adena Health System). Exclusion criteria for all participants were primary language other than English, pregnancy, active periodontal disease, neurologic disorder (eg, epilepsy, multiple sclerosis, and hydrocephalus), drug or alcohol dependency, current upper respiratory infection, legally appointed guardian, or inability to provide consent/assent due to intellectual disability. Participants were excluded from the mTBI group for Glasgow Coma Score (GCS) ≤12 at the time of initial injury, penetrating head injury, symptoms attributable to underlying psychological disorder (eg, depression, anxiety, and posttraumatic stress disorder), overnight hospitalization for current mTBI, presentation for clinical care >14 days after the initial injury, skull fracture, or findings of intracranial bleed on CT or MRI (if performed). The proportion of participants who underwent intracranial imaging was not recorded, but anecdotally, the majority of mTBI participants had no imaging performed. Participants were excluded from the control group for ongoing rheumatologic or neoplastic condition, mTBI in the previous 90 days, or persistence of symptoms from a previous mTBI. Participants were divided into a training set (80% of samples; n = 430 [mTBI = 201 and controls = 229]) used for ncRNA exploration and creation of predictive algorithms, and a naïve test set (20% of samples; n = 108 [mTBI = 50 and controls = 58]) used only to validate the accuracy of predictive algorithms. Samples were assigned randomly to training and test sets to ensure equal representation of age, sex, mTBI status, and symptom severity across cohorts. Only one sample from each participant was used. For mTBI participants, the first sample collected postinjury was employed.
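A stratified 80/20 assignment of this kind can be reproduced with standard tooling. The following Python sketch is illustrative only (the file name, column names, and binning choices are assumptions, not taken from the study); it builds a composite stratification label so that mTBI status, sex, and binned age and symptom severity stay balanced across the training and test sets:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical one-row-per-participant table with columns
# 'age', 'sex', 'mtbi' (0/1) and 'symptom_severity'.
df = pd.read_csv("participants.csv")

# Composite stratification label; bins are kept coarse so every
# stratum contains enough participants for the split to succeed.
strata = (
    df["mtbi"].astype(str) + "_" + df["sex"].astype(str) + "_"
    + pd.qcut(df["age"], 3, labels=False).astype(str) + "_"
    + pd.qcut(df["symptom_severity"], 2, labels=False,
              duplicates="drop").astype(str)
)
train_df, test_df = train_test_split(
    df, test_size=0.20, stratify=strata, random_state=0
)
print(len(train_df), len(test_df))  # ~80% / ~20% of participants
```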
Demographic and medical data collection. Participant characteristics were collected via survey, administered by research staff in the clinical care setting. For children ≤12 years of age, parents assisted with survey completion. When possible, survey responses were confirmed through review of the electronic medical record. For all participants, the following medical and demographic characteristics were collected: age (years), sex (male/female), ethnicity (White, Black or African American, Asian, Hispanic or Latino, American Indian or Alaskan Native, and Native Hawaiian or Pacific Islander), weight (kg), height (cm), body mass index (kg/m²), dietary restrictions (presence/absence), and chronic medical issues (presence/absence of ADHD, anxiety, depression, and chronic headache disorder). All participants reported presence/absence of any previous TBI, time since most recent TBI (days), and number of previous TBIs. Participants in the mTBI group reported presence/absence of loss of consciousness, and antero- or retro-grade memory loss associated with recent mTBI. All participants reported presence/absence of current OI.

Symptom assessment. For a subset of all participants (n = 387; mTBI = 208 and control = 179), 22 symptoms were self-reported on a 7-point Likert scale using the Post-Concussion Symptom Scale (PCSS). 41 Total symptom severity (sum of all Likert scores) and total symptom burden (sum of all symptoms >0) were calculated for each participant. Presence or absence of symptoms >3 weeks after initial injury was assessed for mTBI participants through self-report on the PCSS, or through electronic medical record review (where available).

Balance assessment. Balance was also assessed for a portion of participants (n = 321; mTBI = 176 and control = 145) using the validated ClearEdge system, 42 in eight stances including tandem stance eyes closed on a foam pad (TSECFP). The eight balance tests were scored in terms of power spectral density of acceleration, scaled from 0 (poor balance) to 100 (superior balance). There were 21 participants who were unable to complete balance testing due to OI, or extreme postural instability that posed an increased risk for falling.

Neurocognitive assessment. Computerized neurocognitive assessment was performed in the same subset of participants (n = 321) using the following tests: simple reaction time (SRT1), in which the participant recognizes the presence of an object on the screen and taps it; procedural reaction time (PRT), in which the participant recognizes one of four numbers and taps one of two buttons; go/no-go (GNG), in which the participant recognizes a green or gray object and only taps in response to a gray one; and a repeat of the simple reaction time test (SRT2). This battery of tests is part of the Defense Automated Neurobehavioral Assessment (DANA). 44,45 Though a more extensive neurocognitive battery may demonstrate improved detection of mTBI, these four tests were selected to reduce participant dropout rates, and promote generalizability within busy clinical settings. The four cognitive tests were scored for speed and accuracy using a mean throughput measure of mental efficiency, calculated as the mean number of correct responses per minute for each test. Mean throughput is not a scaled score and is test dependent, with higher scores reflecting better performance (eg, faster reaction time). All individual scores were objectively calculated with computerized software.
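As a concrete illustration of the throughput score defined above, the sketch below computes correct responses per minute exactly as the text describes it; the function name and example numbers are hypothetical, and the actual DANA scoring software may apply further per-test adjustments:

```python
def mean_throughput(correct, duration_s):
    """DANA-style mean throughput: correct responses per minute.

    correct    : number of correct responses on one test
    duration_s : total test duration in seconds
    Higher values reflect faster, more accurate performance; the score
    is test dependent and is not scaled.
    """
    return 60.0 * correct / duration_s

# e.g. 48 correct responses in a 90-second reaction-time test
print(mean_throughput(48, 90))  # 32.0 correct responses per minute
```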
Salivary ncRNA assessment. Saliva was collected from all participants (n = 538) in a nonfasting state, following a tap water rinse. Highly absorbent OraCollect swabs (DNA Genotek, Ottawa, Canada) were applied under the tongue and near the parotid glands bilaterally for 10-15 s. Swabs were submerged in nucleic acid stabilizing solution and stored at room temperature, prior to shipping via priority mail to the Molecular Analysis Core Facility at SUNY Upstate Medical University. RNA was isolated from each saliva sample using the miRNeasy Kit (Qiagen, Inc, Germantown, MD) according to the manufacturer's instructions, as we have previously described. 39 RNA quality was assessed using the Agilent Technologies Bioanalyzer on the RNA Nanochip. RNA-sequencing libraries were prepared using the TruSeq Stranded Small RNA Kit (Illumina) according to manufacturer instructions. Samples were indexed in batches of 48, with a targeted sequencing depth of 10 million reads per sample. Sequencing was performed using 50 base pair, single-end reads, on an Illumina NextSeq 500 instrument. Fastq files were trimmed to remove adapter sequences using Cutadapt version 1.2.1 46 and were aligned using Bowtie version 1.0.0 47 to the following transcriptome databases: miRBase22 (miRNAs), a subset of RefSeq v90 (snoRNAs), and a custom-modified piRBase v2 (piRNAs). Quantification was performed via the SAMtools 48 Python implementation, using a custom-built bioinformatics architecture (Human Alignment Toolchain, HATCH, Quadrant Biosciences). To allow for efficient and meaningful alignment and quantification of RNAs from the piRBase v2 database, highly similar piRNA sequences were reduced using a hierarchical clustering approach, and the resulting sequences were termed "wiRNAs." The aligned reads were quantile normalized and each ncRNA feature was scaled (mean-centered and divided by the feature standard deviation). The normalized ncRNA profiles for each sample were screened for sphericity using a principal component analysis prior to statistical analysis (Figure S2).

Statistical analysis. First, the training set (n = 430; mTBI = 201 and controls = 229) was used to identify ncRNA candidates for mTBI diagnosis. For each class of ncRNAs (ie, miRNAs, wiRNAs, and snoRNAs), a nonparametric Wilcoxon rank test with Bonferroni correction was used to identify features with significant differences between mTBI and control groups (false discovery rate [FDR] < .05). Next, a partial least squares discriminant analysis (PLSDA) was used to visualize the ability of each ncRNA class to separate mTBI and control samples in two dimensions. Variable importance in projection was calculated for each ncRNA feature using the weighted sum of absolute regression coefficients. Then, a random forest analysis was performed (1000 trees and 10 features) to estimate the out-of-bag error for each class of ncRNAs. Mean decrease in accuracy was estimated for each ncRNA feature. Finally, the top 10 features within each ncRNA class on the Wilcoxon (adjusted P-value), PLSDA (weighted coefficient sum), and random forest (mean decrease in accuracy) analyses were pooled into a panel of ncRNA candidates with mTBI diagnostic potential. Duplicate features were removed from the candidate list.
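The per-feature screening step can be illustrated with a short sketch. This is not the MetaboAnalyst workflow the authors used; it is an assumed equivalent using a rank-based two-group test with multiple-comparison correction:

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def screen_features(X_mtbi, X_ctrl, alpha=0.05):
    """Per-feature two-group screen of normalized ncRNA counts.

    X_mtbi, X_ctrl : arrays (samples x features), quantile-normalized
                     and scaled as described in the text.
    Returns indices of features passing the corrected threshold, plus
    the adjusted p-values.
    """
    pvals = np.array([
        mannwhitneyu(X_mtbi[:, j], X_ctrl[:, j],
                     alternative="two-sided").pvalue
        for j in range(X_mtbi.shape[1])
    ])
    # Benjamini-Hochberg FDR; swap method="bonferroni" for a stricter screen
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return np.flatnonzero(reject), p_adj
```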
In total, 65 ncRNA candidates were iteratively assessed for diagnostic accuracy in the training set. Accuracy, defined by the area under the receiver operating characteristic curve, was determined for both individual ncRNA features and ratios of features. To control for the potential effects of previous mTBI, sex, race, age, and underlying neuropsychological comorbidities (eg, anxiety, depression, chronic headache disorder, and ADHD) on ncRNA levels, these measures were scaled, log transformed, and incorporated as ratios with each ncRNA candidate. Missing values (1.8% of all measures) were imputed with the singular value decomposition imputation method. Random forest was used to generate mTBI-predictive models from sets of individual ncRNAs and ncRNA ratios. To avoid over-modeling, and to produce a panel that could be rapidly assessed with qPCR in downstream applications, no more than 10 ncRNA features were allowed in each model. Area under the curve (AUC) in the training set was assessed with a 100-fold Monte-Carlo cross-validation procedure: two thirds of the training set samples were used to evaluate feature importance and build a regression model for cross-validation in the remaining one third of training set samples. The model with the highest AUC was chosen for external validation. Coefficients and features in the model were held constant and applied to the naïve test set (n = 108; mTBI = 50 and controls = 58). Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and AUC were reported for both sets. In the subset of training set participants assessed for post-concussion symptoms, balance, neurocognition, and salivary ncRNA (n = 321; mTBI = 176 and control = 145), random forest was used to generate three models predicting mTBI status: (a) validated symptom scales (symptom severity and symptom burden on the PCSS), (b) performance on four neurocognitive tasks, and (c) balance performance in eight stances. Age was included in each predictive model to control for its potential impacts on neurocognitive and balance performance, as well as symptom reporting. Model accuracy was assessed across the entire subset (n = 321) using 100-fold cross-validated (CV) AUC, as above. To mirror current clinical guidelines, a predictive model combining standardized symptom scores on the PCSS with neurocognitive measures was generated, and compared to combined models of PCSS with ncRNA levels, and PCSS with balance scores. Finally, to assess the maximum diagnostic accuracy yielded by combining all four approaches, random forest was used to generate a model combining PCSS scores, neurocognitive performance, balance, and ncRNA levels. To explore potential biases within each predictive model, medical and demographic features were compared between misclassified and correctly classified participants with a two-tailed Student's t-test. Selection of ncRNA features and predictive model generation was performed with MetaboAnalyst v4.0 online software. 49 A post hoc analysis using Power Analysis and Sample Size software (PASS version 15.0.4; NCSS, LLC, Kaysville, UT) determined that the sample size used in the training set provided >99% power to detect a difference between the null AUC (AUC = .70, indicative of acceptable clinical utility) and the alternative hypothesis (AUC = .85, estimated from our previously published research). 37 A one-sided z-test was used with an alpha level set at .05 for continuous data with equal variances and binomial outcomes. The validation cohort achieved 87.6% power to differentiate the ncRNA model performance (AUC = .83) from the null hypothesis value (AUC = .70).
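The 100-fold Monte-Carlo cross-validation scheme (fit on two thirds of the training samples, evaluate AUC on the held-out third) maps naturally onto standard tooling. A hedged Python equivalent, not the authors' MetaboAnalyst implementation, could look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import ShuffleSplit

def monte_carlo_cv_auc(X, y, n_splits=100, test_size=1/3, seed=0):
    """100-fold Monte-Carlo cross-validation of a random forest model.

    X : feature matrix (ncRNA ratios plus covariates), y : mTBI labels (0/1).
    Each split fits on two thirds of the data and scores AUC on the
    held-out third; the mean AUC across splits is returned.
    """
    aucs = []
    splitter = ShuffleSplit(n_splits=n_splits, test_size=test_size,
                            random_state=seed)
    for fit_idx, val_idx in splitter.split(X):
        clf = RandomForestClassifier(n_estimators=1000, random_state=seed)
        clf.fit(X[fit_idx], y[fit_idx])
        scores = clf.predict_proba(X[val_idx])[:, 1]
        aucs.append(roc_auc_score(y[val_idx], scores))
    return float(np.mean(aucs))
```

After model selection on the training set, the frozen model would simply be applied once to the naïve test set to report the external AUC, sensitivity, specificity, PPV, and NPV.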
Physiologic relevance. To assess the physiologic relevance of the ncRNA features that were predictive of mTBI status, we used a three-step approach. First, we assessed whether ncRNA levels trended back toward control levels over time, using 679 longitudinal samples collected from the mTBI participants. Nonparametric ANOVA was used to assess levels of the nine ncRNAs used in the diagnostic algorithm across five time points: <72 h (n = 129), 4-7 days (n = 120), 8-14 days (n = 190), 15-30 days (n = 105), and 31-60 days postinjury (n = 135). One sample per participant was used in each time point. Second, relationships between the nine ncRNAs of interest and individual symptoms (reported subjectively on the PCSS and measured objectively through neurocognitive and balance testing) were assessed with a Pearson correlation analysis. Third, high-confidence gene targets (TargetScan context score < −0.5, P-value < .01) for the miRNAs of interest were identified in DIANA miR-PATH v3.0 online software. 50 Overrepresentation of Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways by these target genes was determined using a Fisher's exact test with Bonferroni correction. For snoRNAs of interest, chromosome location, base pair size, orthologues, and proximal protein-coding genes were provided using NCBI Entrez Gene. Diseases associated with the proximal gene were identified in GeneCards.

Participant characteristics. On average, participants were 18 (±6) years of age (Table 1). The majority of participants were male (331/538, 61%) and White (335/538, 61%). There was no significant difference (P > .05) between the mTBI group and the control group in age, sex, or race for either the training or test set. Self-reported rates of anxiety (24/538, 4%), depression (20/538, 4%), ADHD (36/538, 7%), and chronic headache (25/538, 5%) were low. There were no significant differences between groups in anxiety or ADHD in either the training or test set. The mTBI group had higher rates of chronic headache in the training set (P = .022) and higher rates of depression in the test set (P = .029). Factors that could potentially impact saliva ncRNA content, such as average time of saliva collection (14:00 ± 4:00 h), mean body mass index (24 ± 7 kg/m²), rates of dietary restrictions (38/538, 7%), and recent exercise (77/538, 14%), did not differ significantly among mTBI and control groups in either the training or test set. The two groups had similar rates of prior lifetime concussions (141/538, 26%). There were 45 participants in the control group with OI (15%), and 20 sustained head impacts during exercise (mean hits-to-head = 8; range = 1-50).

Balance and neurocognitive assessment. Balance and cognitive testing were completed in a subset of mTBI (n = 179) and control (n = 147) participants using the ClearEdge Toolkit (Table S1). The magnitude of the difference between the groups was dependent on which balance and cognitive test was administered. The mTBI group displayed significantly (P < .05) greater body sway during TLEO.

After filtering out ncRNA features with <10 raw read counts in >90% of samples, there were 264 miRNAs, 4603 wiRNAs, and 176 RefSeq RNAs (including lncRNAs and snoRNAs) remaining. The most common features in each ncRNA class were miR-27b-3p (present in 100% of samples, mean raw read count: 32 619), RNA5-8SN3 (present in 100% of samples, mean raw read count: 11 334), and wiRNA_2 (present in 100% of samples, mean raw read count: 538 703). There were 28 miRNAs, 21 RefSeq RNAs, and 1378 wiRNAs with significant (FDR < .05) differences between mTBI and control groups on Wilcoxon testing (Table S2). There were 16 of 28 (57%) miRNAs, 12 of 21 (57%) RefSeq RNAs, and 675 of 1378 (49%) wiRNAs upregulated in the mTBI group.
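The read-count filter applied above is simple to express in code. A minimal sketch, assuming a hypothetical counts table with ncRNA features as rows and saliva samples as columns:

```python
import pandas as pd

# Hypothetical table of raw read counts (rows = ncRNA features,
# columns = saliva samples).
counts = pd.read_csv("raw_counts.csv", index_col=0)

# Drop features with <10 raw reads in >90% of samples, i.e. keep
# features detected at >=10 reads in at least 10% of samples.
detected = (counts >= 10).mean(axis=1) >= 0.10
filtered = counts.loc[detected]
print(f"kept {int(detected.sum())} of {len(counts)} features")
```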
The 10 ncRNA features in each class with the most significant between-group differences are displayed in Figure 1A-C. Separate two-dimensional PLSDAs employing each class of ncRNA were used to visualize their ability to differentiate mTBI and control groups. No class of ncRNAs was able to achieve full separation of mTBI and control groups on PLSDA (Figure S3A-C). Coefficient scores estimating the importance of each individual ncRNA feature in sample projection were used to identify the top 10 RNA candidates within each category (Table S2). Random forest, employing 1000 trees and 10 features, was used to estimate the error rate for each class of ncRNAs for determining mTBI status (Figure S4A-C). The 10 features within each ncRNA category that provided the largest mean decrease in accuracy on random forest were identified (Table S2). Finally, the top 10 ncRNAs within each category (miRNA, RefSeq RNA, and wiRNA) from the Wilcoxon, PLSDA, and random forest analyses were concatenated into a list of candidate features for mTBI diagnosis (Table 3). After removing duplicate features, there were 65 ncRNA features remaining, including 22 miRNAs, 17 RefSeq RNAs, and 26 wiRNAs.

Diagnostic utility. A predictive model utilizing seven ratios, involving nine ncRNAs along with participant age and chronic headache status, differentiated mTBI and control participants with a CV AUC of .857 (95% CI, .816-.903) in the training set and an AUC of .823 in the test set (Figure 2A). The model correctly identified 190 of 251 (76%) mTBI participants and 232 of 287 (81%) control participants (PPV = 81%; NPV = 76%). In the subset of participants with symptom, neurocognitive, and balance data (n = 321), a model employing total number of symptoms and total symptom severity on the PCSS, along with participant age, was able to differentiate mTBI and control participants with a CV AUC of .885 (95% CI, .836-.918; Figure 2B). Components of each model can be found in Table S3.

Misclassification analysis. A two-tailed Student's t-test was used to compare characteristics of correctly and incorrectly classified participants and identify potential biases within each model (Table S4). The ncRNA model was more likely (P < .05) to correctly classify participants with a history of previous concussion (P = .045), anxiety (P = .041), White race (P = .017), or OI (controls only, P = .037). Incorrect classification was more likely for participants with higher symptom severity (P = .0093), particularly those with high levels of headache symptoms (P = .014). However, the ncRNA model displayed no difference (P > .05) in classification accuracy for participants with other commonly reported symptoms, including "balance problems," "fatigue," or "difficulty concentrating." The 37 mTBI participants who reported persistent symptoms 30 days postinjury were classified with similar accuracy as mTBI participants with symptom resolution by day 30 (P = .80). There was no difference between correctly and incorrectly classified participants in mTBI status, recent exercise, time since injury (mTBI group only), sex, age, body mass index, depression, or ADHD. For the model based on validated symptom scores, correct classification was more frequent than misclassification for participants with recent exercise (P = .015), history of previous concussion (P = .045), White race (P = .0025), and prolonged symptoms (P = .037).
Incorrect classification was more likely for participants with mTBI (P = .00020), younger participants (P = .0019), and participants with ADHD (P = .047). There were no significant differences (P > .05) between correctly and incorrectly classified participants with respect to time since injury (mTBI group), body mass index, anxiety, or depression. The model combining ncRNA and validated symptom scores displayed no difference between correctly and incorrectly classified participants in mTBI status, recent exercise, time since injury (mTBI group), sex, history of previous concussion, body mass index, anxiety, depression, or ADHD. Correct classification was more likely for participants with younger age (P = .0079), White race (P = .0004), and prolonged symptoms (P = .039). The model was more accurate for participants reporting high levels of headache (P = .0003), balance problems (P = .013), difficulty concentrating (P = .0003), or fatigue (P = .0007), as well as those with higher symptom burden (P = .0013). Misclassification characteristics for the balance and neurocognitive models are also displayed in Table S4.

Longitudinal trends for miRNAs that predict mTBI status. Nonparametric ANOVA was used to assess whether the ncRNAs used to predict mTBI status displayed longitudinal trends toward a "control baseline" level in the 60 days postinjury. There was a significant difference (P < .05) in levels of four of five miRNAs across the five time points (Figure 3A). Only one of five snoRNAs displayed significant differences across the five time points (Figure 3B).

Relationships between ncRNAs and mTBI symptoms. A Pearson correlation analysis was used to identify significant (FDR < .01) relationships between ncRNAs used in the predictive algorithm and mTBI symptoms (measured by PCSS, neurocognitive testing, and balance assessment) for 321 individuals. There were three ncRNAs associated with symptom severity (miR-4510: R = −.22 and FDR = 3.4 × 10⁻⁶; SNORD57: R = .19 and FDR = 5.3 × 10⁻⁴; SNORD104: R = .13 and FDR = .008). Levels of miR-4510 were most strongly associated with severity of "Headache," whereas levels of SNORD104 and SNORD57 were most strongly associated with "Dizziness" (Figure 4). There were no ncRNAs from the predictive algorithm that were associated with performance scores on neurocognitive or balance testing.

DISCUSSION This study identifies a set of ncRNA biomarkers in saliva that differentiate individuals with mTBI from peers without mTBI in both training (n = 430) and naïve test sets (n = 108). In a subset of participants assessed with computerized neurocognitive testing, objective balance measures, and standardized symptom scales (n = 321), the ncRNA model displays similar accuracy for identifying mTBI status as these more traditional approaches. 16 Importantly, the ncRNA model was not biased by recent exercise, time since injury (mTBI group), sex, history of previous concussion, body mass index, or underlying neuropsychologic conditions. A model combining levels of four ncRNAs with subjective symptom reports yields comparable accuracy (AUC = .932) to that achieved with symptom reports and four neurocognitive measures (AUC = .888), or symptom reports and eight balance measures (AUC = .912). We note that none of these algorithms received the benefit of clinical acumen and injury history, which may explain why their diagnostic accuracy lags published rates. 51 Unfortunately, the development of mTBI biomarkers depends upon comparison to subjective assessments on which the original mTBI diagnosis is based.
This "circular comparison" can make it difficult to determine the true accuracy of any emerging technology. Here, we use a control group consisting of individuals with anxiety, depression, ADHD, OI, and exercise-related fatigue to mimic many of the functional symptoms observed after mTBI. Although the majority of these control participants reported no mTBI symptoms, their mean symptom severity score (4) likely increased misclassification rates for mTBI participants with symptoms severity scores between one and four. Current clinical guidelines recommend that healthcare providers use age-appropriate symptom scales with or without computerized neurocognitive testing to diagnose mTBI, and specifically suggest that blood biomarkers only be used in research settings. 16 The present results indicate that saliva ncRNA biomarkers yield similar diagnostic accuracy to a limited neurocognitive battery and provide additive value when combined with an algorithm relying solely on symptom scores (unaided by injury history). However, adding neurocognitive measures to a symptom/ncRNA model had minimal impact on accuracy. Thus, the additional time required to administer and interpret neurocognitive tests may not provide added bene-fit when combined with saliva ncRNA testing. This may be because neurocognitive testing and subjective symptom reports capture parallel information (ie, fatigue, concentration, and memory), whereas ncRNA may represent a more direct measure of biologic changes with additive value. Because biological changes do not provide information about the clinical manifestation of mTBI, it is crucial that such measures be used only as an adjunct to symptom reports and functional assessments. In the present study, ncRNAs were measured with RNA sequencing technology, which requires >24 h to return results. However, we have previously shown that ncR-NAs measured with a multiplex assay (≤4 h) yield comparable results. 39 Emerging technology may soon provide the ability to measure ncRNAs field-side using a portable device within 1 h. [52][53][54] Until that time, balance and neurocognitive testing remain more expedient adjuncts. However, these assessments usually require an experienced healthcare provider, and many youth or high school competitions/practices do not have an experienced provider present. In comparison, minimal medical training is required to collect a saliva sample using the nucleic acid stabilizing kits employed here. Collection is completed in 10 s (without gagging the patient), samples are stabile at room temperature (for up to 3 months), and they can be transported to a lab without the biohazard regulations required for blood. Unlike neurocognitive testing, saliva ncRNA levels cannot be manipulated by individuals seeking to delay/expedite return to work or school. [17][18][19] The scientific principle underlying our approach is that ncRNAs are packaged into vesicles and released into saliva by cranial nerves. 34 The ncRNA content of salivary vesicles may change in order to regulate neuroinflammation and synaptic repair following mTBI. Indeed, functional analysis of the four miRNAs within the mTBI predictive model displays enrichment for genes involved in the synaptic vesicle cycle, SNARE interactions in vesicular transport, and GABAergic synapse. Numerous studies in animal models have identified alterations in GABA signaling networks following TBI, 55,56 and GABA imbalances have been implicated in specific symptomology following injury. 
57,58 Transforming growth factor-beta is also implicated as a physiologic mechanism underlying mTBI, 59,60 and the candidate ncRNAs target this pathway as well. To our knowledge, this study is approximately five times larger than any previous study of ncRNA expression in mTBI. 33 Additional strengths include the use of training and naïve test sets to validate the ncRNA diagnostic algorithm, and the recruitment of a control group with comparable age, sex, ethnicity, neuropsychologic history, and rates of previous concussion. Inclusion of individuals with recent exercise, OI, and subconcussive head impacts within the control group serves to increase the scientific rigor. Recruitment of participants from 11 different sites and multiple clinic settings (ie, ED, sports medicine clinics, urgent care, and outpatient primary care clinics) also promotes generalizability. Finally, direct comparison to current standard-of-care assessments provides initial evidence for the clinical utility of ncRNA biomarkers. There are, however, several limitations to this study. No baseline cognitive or symptom data were available for participants, despite the fact that baseline testing is known to improve the accuracy of these measures. 61,62 Similarly, baseline levels of ncRNAs were not obtained, though reverse baseline measurements suggest preinjury saliva testing might also improve the accuracy of this technique. Although we attempted to match numerous medical and demographic characteristics across mTBI and control participants, complete matching was not possible for all categories, and rates of chronic headache disorder differed across groups. In addition, rates of neuropsychologic comorbidities (ie, ADHD, anxiety, and depression) are slightly lower in this cohort than in the general population, 63 which may affect generalizability of the findings. Although the ncRNA predictive model was built and validated in a cohort of 538 individuals, only 321 of these individuals completed a full battery of neurocognitive, balance, and symptom assessments. Some participants (63/538; 12%) were unable or unwilling to complete the full battery of tests, some sites (3/11) could not administer the full battery within their clinic workflow, and individuals with a primary language other than English were excluded because the neurocognitive battery required English fluency. Thus, this subcohort may include an element of selection bias. This also highlights some of the shortcomings involved with time-consuming functional measures. We note that identification of mTBI participants in this study was reliant on physical and cognitive symptom assessments. Thus, the ncRNAs identified here may not improve sensitivity for mTBI identification. Future studies assessing ncRNA levels among asymptomatic individuals immediately following head impacts could determine if these molecules have the ability to increase diagnostic sensitivity. Of the four miRNAs used in our ncRNA predictive algorithm, three (miR-34a-5p, miR-192-5p, and miR-27a-5p) were identified in previous studies of miRNA expression in individuals with TBI. 38,62,64,65 One miRNA (miR-4510) has not been identified previously, and to our knowledge no previous mTBI study has interrogated snoRNAs or wiRNAs. Differences between our findings and the existing literature may arise because of differences in (a) severity of brain injury; (b) participant age; (c) RNA quantification methodology; (d) sample sizes; and (e) biofluids.
Even our own pilot studies of miRNA expression used an expectoration technique (rather than a swab) to collect saliva, and involved younger participants. 37 In conclusion, this study demonstrates that saliva ncRNA levels may be used as an objective measure to identify mTBI status in concert with currently available clinical tools. In the present cohort, this biologic measure has similar diagnostic accuracy to neurocognitive or balance testing, and displays additive value with standardized symptom assessment. External validation of this ncRNA model would provide additional evidence that biomarker testing deserves further consideration within clinical guidelines for mTBI diagnosis. Prognostic application of this technology may provide even greater clinical benefit: nearly 25% of individuals with mTBI suffer from prolonged symptoms, yet few accurate tools exist to predict the course of recovery. Given that saliva ncRNA measurement is objective, noninvasive, and does not require expert administration or interpretation, such a measure could represent a significant advance in the standard of care for individuals with mTBI.

ACKNOWLEDGMENTS The authors thank Alexandra Confair, Molly Carney, Jessica Beiler, Katsiah Cadet, Elizabeth Packard, Jennifer Stokes (Penn State), Kyle Kelleran (Bridgewater University), Haley Chizuk (SUNY Buffalo), Allison Iles, Arianna Montefusco, Rhianna Ericson, and Kayla Wagner (Quadrant Biosciences) for aiding with participant enrollment and sample collection. We thank Eric Schaefer (Penn State) for guidance on statistical modeling. This project was supported by a sponsored research agreement between Quadrant Biosciences and the Penn State College of Medicine to AL. SDH's time was supported by the National Center for Advancing Translational Sciences (Grants KL2 TR002015 and UL1 TR002014). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

CONFLICT OF INTERESTS SDH is a paid consultant for Quadrant Biosciences. SDH and FAM are scientific advisory board members for Quadrant Biosciences and are named as co-inventors on intellectual property related to saliva RNA biomarkers in concussion that is patented by The Penn State College of Medicine and The SUNY Upstate Research Foundation and licensed to Quadrant Biosciences. SDV, GF, and AR are paid employees of Quadrant Biosciences. CN is a member of the scientific advisory board and has an equity interest in Quadrant Bioscience Inc. Material has been reviewed by the Walter Reed Army Institute of Research. There is no objection to its presentation and/or publication. The opinions or assertions contained herein are the private views of the author, and are not to be construed as official, or as reflecting the views of the Department of the Army or the Department of Defense. The investigators have adhered to the policies for protection of human subjects as prescribed in AR 70-25. The other authors have no conflicts of interest to declare.

DATA AVAILABILITY STATEMENT All FASTQ files used to generate the RNA sequencing dataset are the property of Quadrant Biosciences. Requests for data sharing will be reviewed and considered on a case-by-case basis.
Biomechanical evaluation of temporary epiphysiodesis at the femoral epiphysis using established devices from clinical practice

The aim of this study is to compare the biomechanical features of different devices used in clinical routine for temporary epiphysiodesis (eight-Plate® and FlexTack™). The tested implants were divided into four groups (eight-Plate® vs. FlexTack™ for lateral and anterior implantation) of 10 samples each, and eight-Plate® vs. FlexTack™ implanted in fresh frozen pig femora were tested for maximum load forces (F_max) and axial physis distance until implant failure (l_max). A servo-hydraulic testing machine (858 Mini Bionix 2) was used to exert and measure reproducible forces. Statistical analyses tested for normal distribution and significant (p < 0.05) differences in the primary outcome parameters. There were no significant differences between the eight-Plate® lateral group and the FlexTack™ lateral group for either F_max (p = 0.46) or l_max (p = 0.65). There was a significantly higher F_max (p < 0.001) and l_max (p = 0.001) measured in the eight-Plate® group compared to the FlexTack™ group when implanted anteriorly. In anterior temporary epiphysiodesis, eight-Plate® demonstrated superior biomechanical stability. At this stage of research, there is no clear advantage of either implant and the choice remains with the individual preference of the surgeon.

One essential disadvantage of corrective osteotomies is the necessity of internal or external fixation and thus limited weight bearing and mobilisation for at least 6 weeks after the operation [3]. In case of open growth plates (in children and adolescents), corrective osteotomies of the distal femur or the proximal tibia can be avoided by growth-guiding techniques such as (hemi)epiphysiodeses. There are two categories of epiphysiodesis. The permanent and irreversible epiphysiodesis, originally described by Phemister, requires the destruction of the epiphyseal plate and is therefore indicated only when the remaining growth is precisely calculated and the ideal age for surgery determined [3]. Canale and Christian [4] and Ogilvie and King [5] have focused on permanent percutaneous methods of minimally invasive epiphysiodesis using image intensification. Using these techniques, the physis is ablated or destroyed by drilling or curetting through small medial and lateral incisions. In 1998, Metaizeau et al. [6] described a further method of epiphysiodesis by placing two screws obliquely across the physis. The principle of this technique is based on applying compressive forces onto the physis [7]. The first temporary and possibly reversible hemiepiphysiodesis was described by Blount. His technique avoids physeal damage by bridging the physis. This technique was first described as early as 1949 and is performed by implanting 2 or 3 staples [8]. Increasing the number of staples correlates with a higher risk of damaging the physis. Therefore, Blount's technique requires an accurate calculation of the remaining growth as well [7]. Due to the risk of damaging the physis, operations using this technique are consequently performed towards the end of growth and not during childhood [3]. Stevens et al. described a "tension band principle" technique for temporary hemiepiphysiodesis by implanting an eight-Plate® (Orthofix, Lewisville, USA), which can be applied in younger patients as well [9]. With this implant, a plate bridging the physis is fixed by two screws that are not fixed-angle.
These plates lead to a temporary hemiepiphysiodesis with a good pressure distribution around the physis [3]. Different retrospective studies showed that eight-Plate® are as effective as staple hemiepiphysiodesis for guided growth in cases of angular deformity, even in younger patients [3,10,11]. Another implant, with a similar functional principle for guided growth, is the "FlexTack TM " (Merete, Berlin, Germany), which in contrast to the eight-Plate® is a one-piece implant without mechanical slackness. This biomechanical in-vitro study was initiated to compare the biomechanical characteristics of this new Flex-Tack TM implant with the well-known eight-Plate® implant. Due to differences in design and implantation technique, it was hypothesised: (1) that the FlexTack TM can bear larger forces than the eight-Plate® before implant failure is observed and (2) that the FlexTack TM has a higher corrective potential as it develops momentum right after implantation, while the eight-Plate® has to experience angulation first to overcome initial mechanical slackness [11,12]. Materials and methods In this study, eight-Plate® (plate and screws) and "Flex-Tack TM " (flexible staples; Fig. 1) were tested for biomechanical features. For implant testing 40 fresh frozen pig femora, harvested at animal's age of 9-12 months were used. The existence of an open physis in the femora was verified by x-ray and subsequent macroscopic examination when removing attached soft tissue from the bones. The femora were stored in a vacuum plastic bag and frozen at −24°C before being used for testing. 24 h before implantation of eight-Plate® or FlexTack TM , the femora samples were defrosted and kept at 6°C. Before implantation, the cortical bone was cut at the height of the distal femoral physis. The opposite side of the femoral bone was opened as well to confirm the anatomical structure of the physis. The femoral shaft and the distal femoral condyle were fixed with polymethylmethacrylat (PMMA cement) (Technovit 3040, Heraeus Kulzer, Wehrheim, Germany). Samples were subsequently randomised into four groups á 10 samples each. Group 1 tested laterally implanted eight-Plate®, and Group 3 anteriorly implanted eight-Plate®. In analogy, groups 2 and 4 represented laterally and anteriorly implanted FlexTack TM . To test for the influence of implant positioning with regard to the femoral axis, groups 1 and 2 were divided in two subgroups each, including rectangular (>70°) and angulated (<70°) implants (Figs. 2 and 3). In the FlexTack TM group samples were distributed even. In the eight-Plate® group there was a slight mismatch with less rectangular (4) and more angulated (6) samples. This difference in distribution is explained by an angel of less than 70°in one case that was initially assigned to the rectangular group. For biomechanical evaluation, prepared femora were fixed in a servo hydraulic material testing machine (858 Mini Bionix 2, MTS, MN USA). The experimental set up was constructed as an axial pull out test. Strain was carried out with a preload pressure of 20 N and a constant speed of 10 mm/min. The fixation in the material testing machine was performed by a flange coupling and a cardan joint. The forces exerted (N) and the axial opening of the physis (mm) were recorded with a frequency of 100 Hz. A broken implant or an emigration of the implant were considered as fixation failure. 
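To make the outcome definitions concrete, the sketch below shows how F_max, l_max, and the 2 mm/4 mm distraction loads could be extracted from one recorded force-displacement trace, followed by the normality check and t-test described in the text. This is an illustrative Python reconstruction, not the authors' SPSS workflow, and it simplifies failure detection to the point of maximum load:

```python
import numpy as np
from scipy.stats import kstest, ttest_ind

def pullout_metrics(force, displacement):
    """Outcome parameters from one axial pull-out recording (100 Hz).

    force        : measured load in N
    displacement : axial opening of the physis in mm (assumed monotonically
                   increasing, as in a constant-speed pull-out)
    Returns (F_max, l_max, F_2mm, F_4mm); failure is approximated here as
    the point of maximum load, a simplification of the breakage/migration
    criterion used in the study.
    """
    i_fail = int(np.argmax(force))
    f_max = float(force[i_fail])
    l_max = float(displacement[i_fail])
    f_2mm = float(np.interp(2.0, displacement, force))  # load at 2 mm opening
    f_4mm = float(np.interp(4.0, displacement, force))  # load at 4 mm opening
    return f_max, l_max, f_2mm, f_4mm

def compare_groups(a, b):
    """Normality check against a fitted normal, then a two-sample t-test."""
    for x in (a, b):
        z = (np.asarray(x) - np.mean(x)) / np.std(x, ddof=1)
        print("KS p =", kstest(z, "norm").pvalue)
    return ttest_ind(a, b).pvalue
```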
Final outcome parameters were the axial distance until fixation failure (lmax in mm) and the maximum load (Fmax in N) applicable to the implant before failure. Forces required for axial distraction of the physis by 2 mm and 4 mm were also measured (F2mm and F4mm; Fig. 4). The statistical analysis was carried out with SPSS 21.0 (IBM Corporation, Armonk, NY). The data were tested for normal distribution using the Kolmogorov-Smirnov test, followed by simple t-testing; p < 0.05 was considered significant.

Results

A. eight-Plate® lateral (Group 1) and FlexTack™ lateral (Group 2)

There was no significant difference (p = 0.46) in the average Fmax between the eight-Plate® lateral group (Group 1) and the FlexTack™ lateral group (Group 2) (Fig. 5). In Group 1, four plates were implanted rectangular and six angulated. There was no statistically significant difference between the two types of angulation concerning Fmax (p = 0.19) and lmax (p = 0.18). In Group 2, five of the FlexTack™ were implanted angulated and the remaining five rectangular; no significant differences concerning Fmax (p = 0.49) or lmax (p = 0.09) were found. In five cases of Group 1 the reason for implant failure was a screw breakout with adherent bone tissue, and in the other five cases a cut-out of the screws without bone tissue. In Group 2, only two implant failures were associated with breakout of the original bone tissue. In nine of ten cases, the reason for implant failure in Group 3 was a screw breakout with a fracture apart from the screw (screw within bony tissue), and in one case a solitary breakout of the screw without adherent bone tissue. In Group 4, three of ten cases showed a cut-out of the FlexTack™ implant with a bone flake and seven cases a cut-out without an adherent osseous flake.

Discussion

Angular osseous deformities of the lower extremity during the growth period can be treated by different surgical techniques. There are permanent epiphysiodeses using the Phemister technique and temporary epiphysiodeses using Blount staples as traditional options to correct angular deformities. Both of these techniques require accurate calculation of the remaining growth, as the former requires destruction of the epiphyseal plate and the temporary epiphysiodesis according to Blount carries an increased risk of epiphyseal plate injury. An alternative surgical solution was introduced with eight-Plate® implants. The principle of this technique is tethering of the physeal periphery while enabling growth in the rest of the physis [13]. It is a well-described and established method to correct coronal plane deformities of the knee with minimal complications and has been tested in various studies [14,15]. An implant introduced more recently is the FlexTack™, which follows a similar functional principle. The advantage of both techniques compared to definitive epiphysiodesis is that the implants can be removed once the angular deformity has been corrected. So far, however, there are no biomechanical or clinical studies comparing both solutions. This is the first study to compare biomechanical features of eight-Plate® and FlexTack™ in a porcine in-vitro model. Although both implants differ in design, no difference was detected between the implant types regarding maximum loads (Fmax) and axial physis distance until fixation failure (lmax) when implanted laterally.
When implanted anteriorly, the FlexTack™ failed at significantly lower maximum loads and showed significantly shorter axial physis distances until failure compared with the eight-Plate®. It remains questionable whether this difference between the implants affects clinical routine, as most implants are inserted laterally/medially to correct varus/valgus deformities and only a minor percentage are implanted anteriorly for correction of flexion deformities of the knee (e.g. in neuromuscular disorders). This study was performed on porcine distal lateral and anterior femora and tested for osseous stability only; it did not consider soft tissue-implant interaction. In clinical routine, we have observed that patients tend to complain more often about soft tissue irritation when treated with the eight-Plate® than with the FlexTack™. It is not exactly clear which forces act on the implants in vivo, as tendons, muscles and connective tissue may reduce distractive forces. It also remains debatable whether the implant behaviour of the eight-Plate® changes in vivo after some weeks, as its initial mechanical slackness between plate and screws turns into rigidity once a certain degree of angulation has been reached. In contrast, FlexTack™ implants are primarily fixed-angle without mechanical slackness, which has the advantage of immediate correction potential. This fact may contribute to the earlier implant failure observed in the present study, as the eight-Plate® yields a greater initial distraction stability due to its mechanical slackness [16]. In the present study, two implants were implanted anteriorly and one only laterally, based on our clinical standard and experience. Although there is initial mechanical slackness between screws and plate in eight-Plate® implants, the screw threads yield a high initial stability within bone. Considering the implantation mechanics of FlexTack™ staples, it may be assumed, although the staples' legs are barbed, that they are initially less strongly fixed within osseous tissue. However, the microstructured titanium alloy surface of the latter allows for rapid osteointegration, which results in a strong bone purchase after some weeks. In clinical practice, this fact requires special instruments (U-shaped chisels) for removal of deeply integrated FlexTack™ staple legs, while eight-Plate® screws can be removed more easily. Thus, the present study cannot replicate implant-dependent osteointegrative features. The present study demonstrated that there are no biomechanical deficits when implanting the mentioned devices in either a rectangular or an angulated fashion. Clinical studies therefore need to evaluate and directly compare eight-Plate® and FlexTack™ with regard to clinical and radiological correction potential, adverse events, patient convenience and cost-benefit.

Conclusion

For lateral temporary epiphysiodesis there is no biomechanical difference between eight-Plate® and FlexTack™ regarding the maximum loads bearable by the implants or the maximal distraction distance until failure. In anterior temporary epiphysiodesis, the eight-Plate® demonstrated superior biomechanical features regarding the above-mentioned parameters. At this stage of research, there is no clear preference towards one or the other implant and the choice remains within the individual preferences of the surgeon.

Funding Open Access funding enabled and organized by Projekt DEAL.

Compliance with ethical standards

Conflict of interest The authors declare no competing interests.
2021-04-02T06:17:56.893Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "8f68b0a191a9b71900b642e6e663eed9a3b5652e", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10856-021-06515-9.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d6af999df5348543b61e0e9445d6031e492db83f", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219062595
pes2o/s2orc
v3-fos-license
Fault Diagnosis of Planetary Roller Screw Mechanism Based on Bird Swarm Algorithm and Support Vector Machine

Intelligent fault diagnosis of rotating machinery has been widely developed in recent years due to the improvement of computing power, but identifying the fault states of the planetary roller screw mechanism remains a difficult problem in practical industrial applications. A fault diagnosis method for the planetary roller screw mechanism is proposed by combining the bird swarm algorithm (BSA), which offers high optimization accuracy and good robustness, with the support vector machine (SVM), which shows strong advantages in solving small-sample, nonlinear and high-dimensional identification problems. In this paper, vibration data of the planetary roller screw mechanism in two states, with and without grease, are collected, and features are extracted from the time domain, frequency domain and time-frequency domain, respectively. The prediction accuracy of SVM and BSA-SVM is compared, and the feasibility of the proposed method is verified.

Introduction

The planetary roller screw mechanism (PRSM) is a very important transmission element in precision drive systems. It can realize the mutual transformation between rotary motion and linear motion, with the characteristics of large thrust [1], high precision [2] and high speed [3], and is widely used in precision machine tools [4], robots [5], medical equipment [6] and other fields. Figure 1 shows the structure of the PRSM. Fault diagnosis, which can detect anomalies and identify failure modes in a timely manner, is a very important part of ensuring the safety and reliability of machinery equipment in a condition-based maintenance (CBM) system [7]. In recent years, due to the significant growth of monitoring data and the rapid development of machine-learning technology, data-driven diagnosis methods have been widely developed for bearings [8], gears [9] and the ball screw mechanism [10]. For the PRSM, fault diagnosis is the main means of monitoring its safe operation. In particular, when lubrication problems occur and are not found in time, they will lead to serious failure of the mechanism, or even damage the whole system. Moreover, up to now, research on the PRSM has focused on topics such as [11] the meshing principle [12], thermal characteristic analysis [6], kinematic analysis [13] and so on; there are few literature studies on the fault diagnosis of the PRSM. Therefore, using machine learning technology to recognize the fault states of the PRSM is particularly urgent. This work proposes a feasible way to identify the different states of the PRSM. A fault diagnosis method based on BSA-SVM for the PRSM is developed; the framework of this paper is arranged as follows. After introducing the characteristics of the PRSM and the importance of fault diagnosis in Section 1, Section 2 briefly describes the process of optimizing the parameters of the SVM by BSA. Section 3 analyses the predicted results of BSA-SVM and SVM. In Section 4, the conclusions are presented.

BSA-SVM

The bird swarm algorithm (BSA) is a new biologically inspired heuristic algorithm proposed by Meng et al. [14] in recent years, based on the biological behaviours of birds in nature. The support vector machine (SVM) is a machine learning algorithm widely used in classification and regression, but its penalty parameter c and kernel parameter g have a great effect on the predicted results. Therefore, BSA-SVM is a fault diagnosis method that optimizes the parameters of the SVM by BSA. The BSA-SVM model can be described in detail as follows. 1) Initialize the BSA parameters.
When the iteration number t equals 0, the initial position of each individual i in the population must satisfy cmin ≤ c_i(0) ≤ cmax and gmin ≤ g_i(0) ≤ gmax, where cmin and cmax are the minimum and maximum of the range of parameter c, and gmin and gmax represent the minimum and maximum of the range of parameter g. At the same time, the size of the bird population N, the number of iterations M, the flight frequency FQ, and the foraging probability P are set (Step 1). 2) Calculate fitness values. The fitness value of each bird is calculated according to the fitness function, and the optimal positions of the individual and the colony are determined by the fitness values (Step 2). 3) Update positions. At the beginning of each iteration, it is determined whether t/FQ has a remainder. If there is a remainder, the individual birds only carry out foraging behaviour or vigilance behaviour; otherwise, the birds are divided into producers and scroungers, and their positions are updated accordingly (Step 3). 4) Update the best positions of the individual and the group. If the current position of an individual is better than its previous best position, the current position becomes its own best position, and the current best position of the flock is also updated (Step 4). 5) Judge whether the maximum number of iterations has been reached. If the iteration ends, go to Step 6; otherwise, go back to Step 3 (Step 5). 6) Output the best parameters. The optimal parameters are used to train the SVM, and the test set is used for prediction. The specific flow of the BSA-SVM model can be summarized as in Figure 2.

Experiment and analysis

During the running of the PRSM, grease failure is one of the most common faults, and grease has a significant impact on the performance of the PRSM. Grease failure will aggravate wear, increase heat generation during operation, cause a decline in PRSM precision, produce deformation, and eventually lead to breakage of the entire PRSM. In a word, grease failure usually appears before complete fracture of the PRSM. Therefore, reliably judging whether grease is present in the PRSM during operation effectively guarantees its running safety, reminds the operator in time to add grease, and provides a basis for such judgments. Figure 3 indicates the working state of the PRSM with and without grease. Figure 4 shows the PRSM failure test bench, which consists of four parts: motor, reducer, PRSM and hydraulic load system. The motor drives the reducer, which drives the PRSM to run, and the PRSM pushes the hydraulic load to reciprocate. The vibration acceleration sensor is located on the upper side of the nut and the sampling frequency is 20,480 Hz. Three working conditions are shown in Table 1. There are 48 samples of training data and 12 samples of test data for each combination of grease state and working condition (2 states × 3 conditions = 6 combinations), giving in total 288 samples in the training set and 72 samples in the test set. The kernel function of the SVM is selected as the RBF function, the ranges of the kernel parameter g and the penalty parameter c are set as [0, 100], the population number is 30, the number of iterations is 50, and the flight frequency FQ is 10. When the parameters of the SVM are not optimized, the penalty parameter c and kernel parameter g are set to 1 and 0.5, respectively. Comparing c and g after optimization with their values before optimization, the difference is relatively large, so these parameters have a great impact on the accuracy of prediction. As shown in Figure 7, the predicted accuracy of the test data before and after optimization is compared.
Figure 7 shows that, after optimization, the prediction accuracy in the time domain, frequency domain and time-frequency domain has all been greatly improved. In the three domains, the predicted accuracy is 83.33%, 83.33% and 72.22%, respectively. This indicates that BSA-SVM can improve classification accuracy and is a feasible fault diagnosis method for the PRSM.

Conclusion

In this paper, a fault diagnosis method for the PRSM is proposed. Vibration acceleration data with grease and without grease are collected. Then, the fault features, which are extracted from the time domain, frequency domain and time-frequency domain respectively, are input to BSA-SVM and SVM. The results show that the accuracy on the test data using the BSA-SVM model is better than that of the SVM. The results also illustrate that different kernel and penalty parameters of the SVM have a great effect on the prediction results. Moreover, the effectiveness of the proposed PRSM fault diagnosis method is verified.
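To make the workflow of Section 2 concrete, the sketch below implements a simplified version of the optimization loop described above: only the foraging update of BSA (with cognitive and social coefficients set to 1) is shown, while the vigilance and producer/scrounger flight behaviours of the full algorithm are omitted. The dataset is synthetic, standing in for the extracted vibration features, and the scikit-learn based fitness function and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# synthetic stand-in for the 288 training samples of extracted features
X, y = make_classification(n_samples=288, n_features=16, n_classes=2, random_state=0)

N, M = 30, 50                                                  # population size, iterations
low, high = np.array([1e-3, 1e-3]), np.array([100.0, 100.0])   # ranges for (c, g)

def fitness(pos: np.ndarray) -> float:
    c, g = pos
    return cross_val_score(SVC(C=c, gamma=g), X, y, cv=5).mean()

# Step 1: initialise positions uniformly within [cmin, cmax] x [gmin, gmax]
pos = rng.uniform(low, high, size=(N, 2))
fit = np.array([fitness(p) for p in pos])                      # Step 2: fitness values
pbest, pbest_fit = pos.copy(), fit.copy()

for t in range(M):                                             # Steps 3-5: iterate
    gbest = pbest[pbest_fit.argmax()]
    # simplified foraging update: move towards personal and global best
    pos = pos + rng.random((N, 2)) * (pbest - pos) + rng.random((N, 2)) * (gbest - pos)
    pos = np.clip(pos, low, high)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit                                 # Step 4: update best positions
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]

c_opt, g_opt = pbest[pbest_fit.argmax()]                       # Step 6: output best parameters
print(f"optimized c = {c_opt:.3f}, g = {g_opt:.3f}, CV accuracy = {pbest_fit.max():.3f}")
```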
2020-04-30T09:06:49.891Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "c0521a3ec444b216ba22045c94aa583a41a57f9a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1742-6596/1519/1/012007", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "1bb3f7497a1977a9f1852626c7df09a73de977ab", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Computer Science" ] }
237214500
pes2o/s2orc
v3-fos-license
Key features of the environment promoting liver cancer in the absence of cirrhosis

The prevalence of obesity and non-alcoholic fatty liver disease (NAFLD) associated hepatocellular carcinoma (HCC) is rising, even in the absence of cirrhosis. We aimed to develop a murine model that would facilitate further understanding of NAFLD-HCC pathogenesis. A total of 144 C3H/He mice were fed either control or American lifestyle (ALIOS) diet, with or without interventions, for up to 48 weeks of age. Gross pathology, liver histology, immunohistochemistry (IHC) and RNA-sequencing data were interpreted alongside human datasets. The ALIOS diet promoted obesity, elevated liver weight, impaired glucose tolerance, NAFLD and spontaneous HCC. Liver weight, fasting blood glucose, steatosis, lobular inflammation and lipogranulomas were associated with development of HCC, as were markers of hepatocyte proliferation and DNA damage. An antioxidant diminished cellular injury, fibrosis and DNA damage, but not lobular inflammation, lipogranulomas, proliferation or HCC development. An acquired CD44 phenotype in macrophages was associated with type 2 diabetes and NAFLD-HCC. In this diet induced NASH and HCC (DINAH) model, key features of obesity associated NAFLD-HCC have been reproduced, highlighting roles for hepatic steatosis and proliferation, with the acquisition of lobular inflammation and CD44 positive macrophages in the development of HCC, even in the absence of progressive injury and fibrosis.

In several existing models, a high-fat diet alone is insufficient to cause cancers and is given in combination with additional means of promoting oxidative stress/fibrosis or carcinogens 9,10. These approaches all have value, with the use of C57Bl/6 mice a significant asset lending itself to subsequent genetic manipulation. They may also have limitations. In Wolfe et al., mice developed features of the metabolic syndrome and NAFLD, but tumours were restricted to those receiving a choline-deficient diet, in association with raised aminotransferases and fibrosis 11. In Wei et al., over 80% of C57Bl/6 mice fed a choline-deficient L-amino acid defined (CDAA) diet developed tumours with some features of steatosis and liver injury, but not in association with features of the metabolic syndrome 12. In Asgharpour et al., C57Bl/6 mice crossed with 129S1 (B6/129) fed a western diet did develop obesity, fatty liver injury and severe fibrosis, with the majority developing cancers, but in the absence of impaired glucose tolerance 13. In Dowman et al., 60% of C57Bl/6 mice fed the American Lifestyle (ALIOS) diet developed small 1-2 mm tumours, associated with features of NASH and fibrosis, although in the absence of obesity or impaired glucose tolerance 14. In summary, while there are major advantages to using C57Bl/6 mice, they do not develop the metabolic syndrome with age, even if they are fed an obesogenic diet. While these models are undoubtedly valuable for studying hepatocarcinogenesis in the presence of the injuries created, they do not necessarily capture the multifactorial processes underlying obesity and NAFLD associated HCC, which in up to 50% of humans arises in the absence of advanced fibrosis or cirrhosis. Thus, the focus on C57Bl/6 mice may have limited the opportunities to identify all mechanisms relevant to the human metabolic syndrome and NAFLD-HCC. Consequently, we chose the ALIOS diet, but used the C3H/He strain of mouse 15.
C3H/He mice are susceptible to both obesity 16 and hepatoma development 17 with age.

Results

Exacerbation of the metabolic phenotype of C3H/He mice promotes NAFLD and HCC. The ALIOS diet, representing popular 'fast food' meals, is associated with the development of NAFLD and 1-2 mm liver tumours in C57Bl/6 mice at 1 year 14,18. C3H/He mice are more susceptible to obesity and HCC. Activation of the Ras/Raf/MEK/ERK pathway is common in murine hepatocarcinogenesis regardless of the strain, with the liability of C3H/He attributed to an enhanced acquisition of spontaneous or induced Ha-ras oncogene mutations 19. C3H/He mice were fed either ALIOS or control diet, with data summarised in Fig. 1 (F1) and Supplementary Fig. 1 (SF1). ALIOS fed mice had significantly higher body weight (SF1A, F1B), liver weight (F1B, SF1B) and liver/body weight ratio (F1A) from as early as 12 weeks of age (4 weeks of diet), increasing stepwise at 24, 36 and 48 weeks of age. Visceral adipose tissue weight was also elevated (F1B). Age-associated increases in serum lipids were similar in both groups of mice at 48 weeks relative to pre-diet levels at 8 weeks of age, although serum LDL was higher in ALIOS fed mice (F1C). Serum alanine transaminase (ALT) was also elevated in ALIOS fed mice, in keeping with a greater degree of liver injury (SF1C). Neither dietary group developed diabetes, as defined by a GTT performed at 48 weeks compared to 8 weeks, but both dietary groups had elevated glucose levels at 2 hours, in keeping with age-related impaired glucose tolerance (F1D). In addition, the ALIOS fed group developed impaired fasting blood glucose (FBG) (8.05 ± 0.32 versus 6.72 ± 0.19; p = 0.001) (F1D), in keeping with ALIOS diet associated insulin resistance. The expression of the inflammatory markers tumour necrosis factor alpha (Tnfα) and inducible nitric oxide synthase (iNos) was up-regulated in ALIOS diet liver tissues compared to control diet mice (SF1D), in keeping with a higher level of inflammation. A heatmap including a 182-gene inflammatory response signature 20 is shown in F1E, showing that the majority of ALIOS fed mice had a higher level of inflammation, but also a degree of heterogeneity within the dietary groups.

Tumour data are summarised in Fig. 2 (F2). Livers of ALIOS fed C3H/He mice appeared pale and macroscopically fatty, with spontaneous visible tumours being common at 48 weeks (F2A). Tumours were present in 15/33 control mice and in 30/35 ALIOS fed mice (p < 0.0001; Pearson chi-square), with the number and size also being greater in ALIOS fed mice (F2B, SF1E). In mice culled at 36 weeks, macroscopic HCC were present at a lesser frequency (0/8 control diet, 2/8 ALIOS diet). Notably, body weight, liver weight and the liver/body weight ratio (F2C) were strongly associated with tumour development, tumour number and the size of the largest tumour (Table 1). FBG at 48 weeks (GTT2, 0 minutes) was also highly significantly associated with tumour numbers and size (Table 1, F2D). There was a significant but weaker association with serum LDL, although no association with elevated serum ALT (Table 1). Expression of the tumour markers Afp, Gpc3 and Nope was significantly higher in the tumours versus the adjacent non-tumour tissues, in keeping with the tumours being HCC (SF1F). In fourteen tumours there was sufficient tissue for RNA extraction and RNA-seq analysis; these were from one control diet and 13 ALIOS fed mice. Unsupervised clustering analysis distinguished the tumour tissues from the non-tumour tissues (SF2).
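For concreteness, the tumour-incidence comparison reported above can be checked from the counts given in the text. The sketch below is a hypothetical reconstruction of the 2 × 2 contingency table; the exact variant of the chi-square test used by the authors (for example, with or without continuity correction) is not stated, so the printed p-value may differ from the reported threshold.

```python
from scipy.stats import chi2_contingency

#                 tumour  no tumour
table = [[15, 33 - 15],   # control diet (15/33 mice with tumours)
         [30, 35 - 30]]   # ALIOS diet   (30/35 mice with tumours)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"Pearson chi-square = {chi2:.2f} (df = {dof}), p = {p:.2g}")
```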
Mutations in the Ras oncogene were highly prevalent in tumour tissues only, including tumours from the control diet animal and 9/13 ALIOS fed animals. Data can be accessed in GSE137407. To explore the overlap between the tumours in our model and human HCC, the mouse transcriptomic data were compared to publicly available human data from The Cancer Genome Atlas Liver Hepatocellular Carcinoma (TCGA-LIHC) project. Unsupervised class discovery of 374 liver cancer and 50 normal tissue samples in the dataset by NMF-metagene and K-means clustering identified a single non-tumour (V6) and 6 tumour metagenes (V1-5, V7), similar to the main biological subtypes previously reported by clustering of this dataset 21. By projecting metagenes onto the mouse RNA-seq dataset, non-tumour mouse gene expression was enriched in the human non-tumour V6 metagene, indicative of a comparable liver transcriptomic signature across the two species. The mouse tumour gene expression was enriched with the human V3 HCC metagene (F2E). GSEA analysis of the human metagene V3 DE gene list revealed a signature characterised by enhanced proliferation, high serum AFP and chromosomal aberrations (F2F). This gene signature was also enriched in the C3H/He mouse tumour transcriptomic data (F2G).

Figure 1. Aged C3H/He mice fed the ALIOS diet develop the metabolic syndrome. Liver/body weight ratio increased with age in ALIOS fed mice (A), with elevations in body, liver and visceral adipose tissue (VAT) weights at 48 weeks (B). Lipid profiles of 48-week mice differed regardless of diet compared to pre-diet levels (8 weeks), with low density lipoprotein (LDL) levels being higher in ALIOS fed mice at 48 weeks (C). Significant age-associated changes occurred during a glucose tolerance test (GTT) in both 48-week dietary groups relative to the pre-dietary intervention (D). The fasting blood glucose was significantly elevated in ALIOS fed mice at 48 weeks (D). Data presented as mean ± S.E.M.; statistics using Kruskal-Wallis or Mann-Whitney tests; ns, not significant; *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001. A heatmap created using heatmapper.ca 42 and an inflammatory response signature including 182 genes shows enrichment in the non-tumour livers of ALIOS fed mice (N13-N23) compared to controls (N1-N12), as well as heterogeneity within the dietary groups. GSEA, gene set enrichment analysis.

Histological characterisation of NAFLD and HCC. NAFLD histological features in control versus ALIOS diet fed mice are summarised in Fig. 3 (F3). NAFLD was defined histologically by the presence of steatosis in >5% of hepatocytes. Steatosis was evident in the majority of the mice, regardless of control or ALIOS diet, by 24 weeks of age. The severity of steatosis increased with age and at 48 weeks was significantly more pronounced in ALIOS diet fed mice compared to control mice, as was lobular inflammation (F3A). As a marker of a chronic injury wound-healing response, fibrosis was common in both dietary groups of mice, with elevated fibrosis stage 1-2 in ALIOS fed mice, versus 0-1 in control mice (F3A). Peri-sinusoidal fibrosis was more common at 48 weeks in ALIOS (100%, 24/24) compared to control diet mice (69.5%, 16/23) (F3A), confirmed by semi-quantitative image analysis of Sirius Red staining (F3D). Ballooning was present in 95% of ALIOS versus 65% of control diet mice, with the combined NAFLD activity score (NAS) 22 significantly higher (F3A). Representative histopathology features are shown in F3C.
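For readers unfamiliar with the composite score used above, a minimal sketch of the NAS calculation is given below. It follows the commonly used Kleiner component ranges (steatosis 0-3, lobular inflammation 0-3, ballooning 0-2); the example grades are made up for illustration and are not data from this study.

```python
def nas(steatosis: int, lobular_inflammation: int, ballooning: int) -> int:
    """NAFLD activity score: unweighted sum of the three component grades."""
    assert 0 <= steatosis <= 3, "steatosis graded 0-3"
    assert 0 <= lobular_inflammation <= 3, "lobular inflammation graded 0-3"
    assert 0 <= ballooning <= 2, "ballooning graded 0-2"
    return steatosis + lobular_inflammation + ballooning

print(nas(steatosis=2, lobular_inflammation=1, ballooning=1))  # -> 4
```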
IHC characterisation of the immune infiltrate confirmed increases in CD8 (cytotoxic), CD4 (helper) and FOXP3 (regulatory) T cells, as well as neutrophils (F3E). CD4 and CD8 T cells were the most prevalent, but the elevation induced by diet was most notable for FOXP3 T cells (4-fold), which were scant in control diet livers. Ancillary features of NASH, Mallory-Denk bodies and megamitochondria, were also present. NAFLD histological features such as microvesicular steatosis, pigmented Kupffer cells and lipogranulomas 22 were rare at 12-36 weeks, but were much more common at 48 weeks and in ALIOS diet fed mice (F3B). This was most striking for lipogranulomas, detected in just 1/23 control and 21/24 ALIOS diet fed mice (p < 0.001). A lipogranuloma consists of a steatotic hepatocyte or fat droplet surrounded by mononuclear cells and macrophages and an occasional eosinophil (F3C). Counts of CD68+ macrophages confirmed these were elevated in both dietary groups of animals, more so in those fed the ALIOS diet and at an earlier age (36 weeks) compared to control diet mice (F3F). As reported above, 45/68 mice had tumours that were classed as HCC. Histological characterisation of the tumours, presented in Fig. 4 (F4), revealed common steatohepatitic features. Nuclear atypia, eosinophilic inclusions, frequent mitoses and thickened (>2) tumour cell plates, confirmed by reticulin staining, distinguished steatohepatitic hepatocellular carcinomas (SH-HCC) from benign adenomas (F4A, F4B).

Histological features associated with NAFLD-HCC development. Histological associations with NAFLD-HCC development in C3H/He mice at 48 weeks (15/33 control diet; 30/35 ALIOS diet) are summarised in Table 2. Considering NASH features, there was no association with the presence of hepatocyte ballooning, although correlations with the severity of steatosis, lobular inflammation and the NAS score were highly significant. There were weaker associations with the presence of fibrosis. Ancillary features of NAFLD were also considered. Of these, the presence of lipogranulomas was most strikingly and significantly associated with NAFLD-HCC development. The presence of Mallory-Denk bodies was the feature most significantly associated with tumour size. Of the T cell subsets, the CD4 T cells were the most strongly and significantly associated with the development of tumours, the number of tumours and tumour size.

C3H/He NAFLD-HCC arises in a proliferative steatotic environment with persistent inflammation. Parallel groups of mice on ALIOS and control diets received bucillamine, a cysteine derivative used first-line in Japan as a disease-modifying drug to treat rheumatoid arthritis 23,24. Bucillamine behaves as a thiol antioxidant. There was a dramatic reduction in the presence and severity of hepatocyte ballooning, and consequently the NAS score, in ALIOS diet fed mice (F5A). Encouragingly, dietary bucillamine also reduced the ALIOS diet associated hepatic fibrosis stage at 48 weeks (F5A), as well as pericellular fibrosis (F5A-C). Despite the positive impact on the severity of NAFLD, bucillamine treatment reduced neither the number nor the size of HCC at 48 weeks (F5D). We explored the impact of bucillamine on other features, including those associated with elevated tumour incidence. Bucillamine had minimal impact on body weight, liver weight or blood glucose (SF3A).
Regarding liver histopathology, there was also little impact on the steatosis grade or lobular inflammation score in the ALIOS fed mice, and no impact on the presence of lipogranulomas (F5C). Of the other ancillary features of NAFLD, there was a highly significant reduction in Mallory-Denk bodies, indicative of reduced hepatocyte injury. Strikingly, we noted a substantial reduction in the percentage of γ-H2AX+ nuclei, indicating a degree of bucillamine-associated protection from DNA damage (F5E, SF3B). In contrast, Ki67-assessed hepatocyte proliferation was not impacted by bucillamine (F5F, SF3B). While γ-H2AX foci were dramatically elevated in ALIOS fed mice, this was most notable in aged mice at 48 weeks (F5F). Ki67, on the other hand, was elevated in younger ALIOS fed mice, at 24 weeks, with a further increase at 48 weeks (F5F). Of note, Ki67 was significantly associated with FBG, liver weight and HCC development at 48 weeks. In NAFLD, fibrosis is a critical prognostic factor in patients 5, although patients without it can develop HCC. These data reveal that in C3H/He mice, a chronic proliferative environment is associated with steatosis and an elevated FBG, with the acquisition of lipogranulomas and mild lobular inflammation, and is sufficient to promote HCC development, even when levels of cellular and DNA damage were low and cirrhosis was absent.

Transcriptomic studies: immune signatures associated with NAFLD-HCC development. To investigate the candidate contributory pathways and upstream regulators leading to the development of HCC in C3H/He mice, RNA-sequencing (RNA-seq) was performed on 23 non-tumour liver tissues with a range of histological severities of NAFLD, with data summarised in Fig. 6 (F6). Unsupervised clustering analysis of the RNA-seq data identified two distinct groups (G1 and G2). Differences between dietary, gross and histological features of G1 and G2 are summarised in SF4, ranked according to significance. The G2 cluster mice were more commonly on the ALIOS diet, with higher liver and body weight, together with higher steatosis grade, fibrosis stage, higher NAS score and more frequent lipogranulomas compared to the mice in the G1 cluster. Notably, G2 mice developed bigger tumours and had a higher tumour burden compared to mice in the G1 subgroup. 3,900 genes were differentially expressed (DE) between the G2 and G1 groups with an adjusted p-value of less than 0.05. The top 100 DE genes are shown in Supplementary Results Table 3 and the top 50 in Fig. 6A. Ingenuity pathway analysis (IPA) of the DE gene list identified activation of Th1 and Th2 pathways ranking amongst the top enriched canonical pathways. IPA also identified cell survival and viability, together with cellular movement and infiltration, as the top activated bio-functions (F6B), in keeping with the elevation of Ki67 as a biomarker of the 'proliferative' histological phenotype observed in C3H/He mice developing HCC. Gene set enrichment analysis (GSEA) of the same list recognised an inflammatory response signature (F6C), with a macrophage signature as the top enriched canonical pathway (F6C). Activation of the NF-κB and STAT3 inflammatory pathways was also evident (F6C); activation of these two pathways in the non-tumour liver tissues of HCC patients has previously been reported to be associated with higher tumour recurrence 26.

Enrichment of CD44+ macrophages in association with T2DM and HCC.
IPA analysis for the top upstream regulators of gene expression in the G2 cluster of non-tumour tissues (F6) identified CD44 as the most differentially expressed (F6D). CD44 was also present in the gene lists from both the IPA-defined Top Diseases Pathway and the GSEA-defined Macrophage Enriched Metabolic Network, with a role in the recruitment of T lymphocytes implicated. CD44 transcript expression in whole liver was explored in the mouse RNA-seq dataset (Fig. 7; F7), alongside the classical macrophage marker CD68 and two recently identified markers, CD9 and Trem2, associated with NAFLD progression 27-29. Messenger RNA transcript levels of both CD68 and CD9 were relatively high in mice at 48 weeks, with CD68 further increased in mice fed the ALIOS diet (F7A). Levels of CD44 were lower in control mice, but rose quite dramatically in ALIOS fed mice (F7A). Trem2 levels also increased in ALIOS fed mice, but transcript levels in whole liver were low (F7A). These findings were supported at the protein level by IHC in the mouse tissues (control diet n = 12, ALIOS diet n = 11), as summarised in F7B-D. CD44 expression was predominantly in macrophages, characterised by their location within sinusoids and lipogranulomas, but also by association with CD68 and F4/80 expressing cells, with dual immunofluorescent labelling confirming co-expression of CD44 in CD68 expressing macrophages (F7C-D, SF5A-B). Comparing macrophage numbers, resident CD68 and F4/80 positive cells were relatively common in control mice at 48 weeks of age, rising further in ALIOS fed mice. The numbers of CD44+ cells were scant in control diet mice, with a dramatic increase in ALIOS fed mice (88.79 ± 9.1 versus 14.32 ± 2.3 per high-power field, respectively; p < 0.0001), reaching similar levels to CD68 and F4/80 positive cells (F7B). Despite high transcript levels in whole liver, CD9+ macrophages were relatively scant in C3H/He livers at 48 weeks, with no significant change induced by the ALIOS diet (F7A-B). Trem2 was not discerned by IHC in our FFPE liver tissues from C3H/He mice (data not shown). The number of CD44+ macrophages was strongly associated with the development of tumours and tumour number (Table 2). CD44+ cells, rather than CD68+ or F4/80+ cells, also correlated strongly and significantly with CD4, FOXP3 and CD8 T cells (Spearman correlations 0.636, p = 0.001; 0.548, p = 0.008; 0.636, p = 0.001, respectively). Dietary supplementation with bucillamine had no impact on the presence or absence of lipogranulomas in ALIOS fed mice at 48 weeks (F5C), nor on the numbers of CD44+ macrophages (SF5C-D).

Data from human FFPE non-tumour liver biopsy tissues (NAFLD without HCC, n = 15; NAFLD-HCC, n = 27) are summarised in Fig. 8 (F8). The distribution of CD44+ cells in NAFLD cases with and without HCC is represented in F8A, showing a distribution in sinusoids and lipogranulomas similar to CD68+ and CD163+ cells, with co-expression with CD68 confirmed by dual labelling immunofluorescence. In human NAFLD cases, although there were strong correlations between CD44+ and CD68+ (Spearman 0.735, p < 0.001) and CD163+ (Spearman 0.695, p < 0.001) cells, the level of CD44 was significantly lower in control NAFLD cases without HCC (F8B) (p < 0.001 relative to CD163 and p < 0.05 relative to CD68). Consequently, the increase seen in CD44+ macrophages in human cases with NAFLD-HCC was much more dramatic (F8B).
The increases in each of the macrophage subtypes seen in non-tumour liver in association with NAFLD-HCC were independent of the presence or absence of cirrhosis (F8D), although the increase in CD44+ macrophages was significantly associated with the presence of T2DM (F8C). There was no significant T2DM association with CD68+ or CD163+ macrophages (F8C).

Discussion

NAFLD-HCC patients are often older, with metabolic syndrome associated comorbidities, and are diagnosed at a more advanced stage of disease owing to a lack of, or ineffective, HCC surveillance 30,31. To develop preventive and therapeutic approaches for this target population, there is a need to better understand the pathogenesis of HCC in these contexts. A number of animal models have been used to explore hepatocarcinogenesis in NAFLD 5, with some of those involving dietary manipulations as described in the introduction. While caution is advised when interpreting data from any model in isolation, particularly given that the transcription profiles of tumours tend not to recapitulate human disease 32, significant advances in our understanding of tumour development in fatty livers have been made. It is known that increases in circulating fatty acids lead to steatosis, with intrahepatic lymphocytes and inflammatory macrophages, along with inflammatory cytokines, leading to chronic cellular injury, DNA damage and hepatocyte death 5. NAFLD progression to NASH and fibrosis contributes to the increased risk of HCC 5. Our aim was to complement these studies by exploring features not necessarily dependent on NASH progression to fibrosis, given that up to 50% of humans with NAFLD-HCC develop it in the absence of advanced fibrosis or cirrhosis. As the majority of dietary studies thus far have used C57Bl/6 mice, which do not develop the metabolic syndrome with age and require additional insults to promote non-obesogenic liver injuries and HCC, we considered alternative strains of mice. Attempting to recapitulate the human condition without the need for an additional liver insult, we used dietary manipulation with the western 'ALIOS' diet 18, previously shown to induce NAFLD and small liver lesions in 60% of C57BL/6 mice at 1 year 14, but in C3H/He mice. C3H/He mice do develop obesity and hepatomas with age 16,17. ALIOS fed C3H/He mice developed exaggerated obesity and an elevation in FBG, reflective of insulin resistance. This was associated with human-reminiscent NAFLD, including a number of the ancillary features not previously assessed in association with HCC risk. In this model, 96% of ALIOS fed mice developed macroscopic tumours characterised as HCC at 48 weeks of age. Liver weight was markedly higher in ALIOS fed mice, contributed to by more hepatic fat, but also by increased hepatocyte proliferation. Each of these features (FBG, liver weight, steatosis and Ki67 proliferation) was significantly associated with tumour development. Notably, the ALIOS diet induced hepatocyte proliferation as early as 24 weeks in the mice, when lesser levels of steatosis were present and NASH was absent. The later acquisition of inflammation, DNA damage and fibrosis was also associated with tumour development. The transcriptional landscape of liver tumours in C3H/He mice resembled a proliferative human HCC subclass. Bucillamine reduced DNA damage and fibrosis but had little impact on HCC development.
Accepting that there may be uncharacterised mechanistic attributes of bucillamine, it also had little impact on some of the other features associated with ALIOS diet induced HCC development, namely FBG, liver weight, steatosis and inflammation. While DNA damage is mandatory in the development of cancer, these data in combination suggest that beyond DNA damage, regardless of whether there is a little or a lot and whether or not there is fibrosis, the proliferative environment may be key. In the presence of cirrhosis, proliferation occurs as a compensatory regenerative response to progressive hepatocyte injury 5. Here we have demonstrated the presence of proliferation, even with low levels of DNA/cellular damage and fibrosis, again associated with cancer risk. This is reminiscent of human NAFLD-HCC arising in elderly patients with the metabolic syndrome, in the absence of significant injury and cirrhosis. The cause of the proliferative environment is pertinent and, while association studies have their limitations, even at 24 weeks the ALIOS diet induced elevations in Ki67+ hepatocytes, correlating closely with liver weight (Spearman correlation 0.663, p = 0.008, n = 16) and FBG (0.533, p = 0.033), rather than body weight. At the level of histology, mild to moderate steatosis was present at 24 weeks, without evidence of lobular inflammation or NASH or elevated macrophages on light microscopy. Lipid metabolic pathways are altered when hepatocytes switch to proliferation 33, although the stimulus that switches hepatocytes to proliferation in the context of steatosis is not well characterised. Of note, HCC did not develop in the DINAH model until, at the earliest, 36 weeks, after the development of additional features including inflammation. In particular, we have highlighted the potential role of lipogranulomas, well recognised as ancillary features in NAFLD, as promoters of an HCC-permissive environment. Macrophages are the cardinal immune cell type associated with lipogranulomas, with an elevation in CD68+ macrophages evident from 36 weeks in ALIOS fed mice. In addition, led by our transcriptome analysis of non-tumour fatty liver in C3H/He mice, which also highlighted a role for macrophages, we explored a specific candidate, namely CD44, with a suggested role as an upstream regulator of T cell homing. We confirmed that CD44 expression was acquired in the livers of mice with HCC, that it was predominantly expressed in macrophages, and that its expression was closely associated with expansion of CD4 T cells, which characterise a cancer-permissive environment 34-36. In human NAFLD, CD44 expression in immune cells has been reported to play a key role in NASH 37, and here we have shown that CD44+ macrophages are acquired in NAFLD patients developing HCC. Furthermore, the elevation in CD44+ macrophages in these patients was associated with the presence of T2DM, whereas the more modest elevations in CD68 and CD163 positive cells were not. In the contexts of cirrhosis and viral hepatitis, crosstalk between macrophages and T cells in the creation of an immunosuppressive environment in which cancers develop and progress is increasingly recognised 34,35,38. The DINAH study is an observational one, but highlights these features as also being key in the development of NAFLD-HCC, even in the absence of cirrhosis, where a T2DM-associated CD44+ macrophage phenotype is acquired.
The metabolic and inflammatory relationships between macrophages and their states of activation have been recently reviewed 39, with the field and the identification of disease-specific subtypes rapidly advancing 27-29. Although we did not identify the recently reported NAFLD-associated subtypes of CD9+ or Trem2+ macrophages as elevated in ALIOS fed mouse livers, CD9 was present at transcript and protein levels in both groups at 48 weeks and may have been acquired with age. Trem2 was elevated, but at low levels in whole liver, and more sensitive methods might have detected it. However, CD9 and Trem2 have been identified as associated with progressive fibrosis in NAFLD, which is not the most striking phenotype of ALIOS fed C3H/He mice. Going forward, single-cell studies in models focused on non-cirrhotic HCC may identify and characterise subtypes in addition to the CD44+ one reported here. Our aim in conducting this comprehensive observational study was to gain further insights into NAFLD-HCC pathogenesis. In conclusion, the DINAH model recapitulates many of the pathological and histological features of human NASH and HCC. We highlight the importance of elevations in blood glucose and insulin resistance, a proliferative environment and elevated liver weight, alongside a CD44+ macrophage phenotype, in fatty livers in which HCC develop. Further evaluation of the metabolic regulation of macrophage phenotype in NAFLD may shed light on preventive or therapeutic strategies for HCC.

Methods

Mouse studies. C3H/He mice (12 per group) were fed an American lifestyle diet (ALIOS: 45% fat calories, high trans-fat, with high-fructose drinking water) or a control diet (15% fat calories, low trans-fat, without sugar) ad libitum. The diets (ALIOS: Teklad TD.110201; control: Teklad TD.110196) were obtained from Harlan Laboratories (Madison, Wisconsin, USA). C3H/He mice were C3H/HeNHsd (Harlan, UK) or C3H/HeNCrl (Charles River, UK), not C3H/HeJ; C3H/HeJ mice, which we did not use, are known to be hypo-responsive to lipopolysaccharide consequent to a point mutation in the TLR4 gene 40. A glucose tolerance test (GTT) was performed in fasted mice at 48 weeks, following which the mice were humanely killed. This diet induced NASH and HCC (DINAH) model was taken forward in a comprehensive follow-up study, with killing at 12, 24, 36 (n = 8 per group) and 48 weeks of age (n = 24 per group). In a parallel intervention study, the diets were supplemented with bucillamine 20 mg/kg/day from 24 weeks of age until killing at 48 weeks of age. A more detailed overview of the study groups and justification of numbers is provided in the Supplementary methods and Supplementary Table 1. DINAH pilot and comprehensive studies were combined for presentation and statistical analyses.
2021-08-20T06:17:26.143Z
2021-08-18T00:00:00.000
{ "year": 2021, "sha1": "b294da507a4bad56a882dc5430e05ce21c4dbd0d", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-96076-2.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6ecc5e4f4c6aa9a1ee0e4728d3f54424a84df25e", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
269187490
pes2o/s2orc
v3-fos-license
Distinct Hippocampal Oscillation Dynamics in Trace Eyeblink Conditioning Task for Retrieval and Consolidation of Associations

Trace eyeblink conditioning (TEBC) has been widely used to study associative learning in both animals and humans. In this paradigm, conditioned responses (CRs) to conditioned stimuli (CS) serve as a measure for retrieving learned associations between the CS and the unconditioned stimuli (US) within a trial. Memory consolidation, that is, learning over time, can be quantified as an increase in the proportion of CRs across training sessions. However, how hippocampal oscillations differentiate between successful memory retrieval within a session and consolidation across TEBC training sessions remains unknown. To address this question, we recorded local field potentials (LFPs) from the rat dorsal hippocampus during TEBC and investigated hippocampal oscillation dynamics associated with these two functions. We show that transient broadband responses to the CS were correlated with memory consolidation, as indexed by an increase in CRs across TEBC sessions. In contrast, induced alpha (8-10 Hz) and beta (16-20 Hz) band responses were correlated with the successful retrieval of the CS-US association within a session, as indexed by the difference between trials with and without a CR.

Introduction

Memories are represented by the distributed activity of neurons organized into neuronal assemblies in hippocampal-neocortical circuits (Squire, 1992; Buzsáki and Moser, 2013). Trace eyeblink conditioning (TEBC) has been widely used as a model system for studying the neural basis of associative learning and memory in a wide range of species from mice to humans. In TEBC, a conditioned stimulus (CS, usually an auditory stimulus) is repeatedly paired with an eyeblink-evoking unconditioned stimulus (US, airpuff or electric shock to the eyelid). A successful conditioned response (CR) to the CS depends on learning an association between the CS and the US and, upon later encounter with the CS, retrieving it from memory, that is, contingency detection (Prokasy, 1984; Clark and Squire, 1998; Thompson, 2005; Cheng et al., 2008).

Forming an association between two events separated in time is a critical brain function (Pilkiw and Takehara-Nishiuchi, 2018) that depends on the hippocampus (Solomon et al., 1986; Moyer et al., 1990; Weiss et al., 1996; Tseng et al., 2004). Intriguingly, recent work utilizing human intracranial and scalp electroencephalogram (EEG) suggested that the hippocampus could represent both sensory and mnemonic information and act as a switchboard between internal and external representations (Treder et al., 2021). In line with this, patients with medial temporal lobe (MTL) lesions and memory impairment also exhibited diminished markers of conscious perception (Urgolites et al., 2018) and showed attenuated and temporally more dispersed responses to visual stimuli (Reber et al., 2017), suggesting that hippocampal circuits also represent sensory stimulus properties (Kreiman et al., 2002).
Despite extensive research on the neural basis of learning using TEBC, the relationship between mapping stimulus contingencies within a trial (encoding and retrieval of the CS-US association) and learning, that is, reinforcing these associations across training sessions (memory consolidation), remains unclear. While the role of oscillation dynamics in memory functions is well established, the differences in these dynamics between the formation of mnemonic associations between the CS and US within a trial and the memory consolidation of stimulus contingencies across training sessions remain unexplored. With training on the CS-US contingency, the probability of a CR to each CS increases gradually over time. However, even with extensive training, some CS presentations fail to elicit a CR. In this study, we hypothesized that distinct spatiotemporal signatures would characterize mapping CS-US contingencies within sessions (indexed as a CR for the trial) and memory consolidation across training sessions (indexed as an increasing proportion of CRs over sessions). To this end, we recorded local field potentials (LFPs) from the dorsal hippocampus in freely moving adult healthy Sprague Dawley male rats during classical TEBC and estimated oscillation dynamics for hippocampal subfields using state-of-the-art analysis approaches.

Materials and Methods

Animals. Eight adult healthy male Sprague Dawley rats (Harlan Laboratories/Envigo, weighing ∼300 g, ∼10 weeks) were used as subjects. Food and water were available ad libitum, with room temperature and humidity maintained at 21 ± 2°C and 50 ± 10%, respectively. The rats were kept under a 12 h light/dark cycle. All procedures and experiments were conducted during the light cycle. The study was conducted in accordance with Directive 2010/63/EU of the European Parliament and of the Council on the care and use of animals for research. The experiments were approved by the Animal Experiment Board of the Regional State Administrative Agency of Southern Finland. The ARRIVE guidelines (https://arriveguidelines.org) were followed.

TEBC and analysis of the eyeblink response. Data were collected during a TEBC task (Fig. 1A) using LabVIEW (National Instruments). The animals were conditioned using white noise (75 dB, 200 ms) as a CS and a 100 Hz burst of 0.5 ms bipolar pulses of periorbital shocks (100 ms) as a US. The amplitude of the US was adjusted individually for each animal to elicit a blink response, that is, an unconditioned response, in 100% of the trials. Each trial started with the 200 ms CS presentation, followed by a 500 ms stimulus-free trace period, and then the 100 ms shock US. Each animal performed eight sessions, each consisting of 60 trials. Over time, the animals started to blink in response to the CS; that is, they acquired the CR.

Eyeblinks were detected from the EMG signals offline to determine the percentage of CRs (here defined as the hit rate, HR), as in Nokia et al. (2017). In brief, for each trial, the response was defined as a CR if the signal exceeded a threshold of mean + 3 standard deviations (SD) within the last 200 ms of the trace period (Fig. 1A); that is, the eyelid started to close immediately before the onset of the US. HR was defined as the proportion of trials with a CR (Fig. 1D). We then used repeated measures ANOVA (rm ANOVA) to estimate the change in HR across the eight sessions as well as across the animals. Reaction times (RTs) were computed for the trials with CRs.
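A minimal sketch of this CR-detection rule is given below. It assumes a rectified single-trial EMG trace that includes a pre-CS baseline segment from which the mean + 3 SD threshold is computed; the baseline duration and all names are illustrative assumptions, as the text does not specify the reference window used for the threshold.

```python
import numpy as np

def is_cr(emg: np.ndarray, fs: float, pre_s: float = 0.3) -> bool:
    """emg: rectified single-trial EMG with pre_s seconds of pre-CS baseline,
    followed by the trial (200 ms CS + 500 ms trace period + US)."""
    n0 = int(pre_s * fs)                         # sample index of CS onset
    baseline = emg[:n0]                          # assumed baseline segment
    threshold = baseline.mean() + 3 * baseline.std()   # mean + 3 SD rule
    # last 200 ms of the trace period: 500-700 ms after CS onset
    window = emg[n0 + int(0.5 * fs): n0 + int(0.7 * fs)]
    return bool((window > threshold).any())

def hit_rate(trials: np.ndarray, fs: float) -> float:
    """Proportion of trials with a CR; trials has shape (n_trials, n_samples)."""
    return float(np.mean([is_cr(trial, fs) for trial in trials]))
```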
Surgery. Rats were anesthetized with an intraperitoneal injection of pentobarbital (60 mg/kg) and treated for pain with carprofen (5 mg/kg, s.c.) and buprenorphine (0.03 mg/kg, s.c.). Using a stereotactic frame, we implanted two four-wire electrodes (Formvar-insulated nichrome, bare diameter 50 µm, no. 762000, A-M Systems) chronically to record LFPs from the dorsal hippocampus. The wires were glued together with a tip separation of 200-250 µm. The bundles were implanted with the lowest electrode tip at the dentate gyrus (DG; 3.6-4.5 mm posterior, 1.5-2.2 mm lateral, and 3.6-4.0 mm below bregma; Fig. 1C). Skull screws served as the reference (11 mm posterior and 2 mm lateral to bregma) and ground (4 mm anterior and 2 mm lateral to bregma). To stimulate the eyelid and record electromyography (EMG) during TEBC, two bipolar electrodes made of Teflon-insulated stainless steel wire (bare diameter 127 µm) were implanted through the upper right eyelid. Finally, the entire construction was secured in place using dental acrylic cement. After the surgery, each rat was allowed to recover for at least 1 week and was medicated for pain with buprenorphine.

Recordings. A low-noise wired preamplifier (10×) was directly attached to the electrode connector on the rat's head. The LFP signals were filtered from 1 to 5,000 Hz, amplified 50×, digitized at 20 kHz, and then low-pass filtered at 500 Hz. Finally, all signals were stored at a sampling rate of 2 kHz (USB-ME-64, Multi Channel Systems).

Histology. Rats were killed by exposure to a rising concentration of CO2 and then decapitated. The locations of the electrode tips in the brain were marked by passing a DC anodal current (200 mA, 5 s) through them. The brain was then removed, fixed in a 4% paraformaldehyde solution, and coronally sectioned with a vibratome (Leica VT1000). The slices were stained with Prussian blue and cresyl violet. The electrode tip locations were determined with the help of a conventional light microscope and a brain atlas (Paxinos and Watson, 1998).

Signal preprocessing, re-referencing, and filtering. Trials with large LFP fluctuations due to movement or device-related artifacts were excluded if they exceeded a 1,500 µV cutoff, resulting in a rejection rate of 0.6%. To remove the stimulus-induced spike artifact, raw signals between −2.5 and 1.5 ms from CS onset and offset were interpolated with an alpha blend fraction of 0.45. Current source density (CSD) profiles were then calculated for re-referencing using the Laplacian re-referencing method with adjacent channels (Mitzdorf, 1985). The signal was then filtered between 3 and 480 Hz using a finite impulse response (FIR) filter with a logarithmically scaled increment of frequency and utilizing a combination of high-pass and low-pass filter pairs (high-pass: 0.6 stop band and low-pass: 1.4 stop band, 60 dB attenuation). The filtered signals were further Hilbert transformed to obtain phase and amplitude time series (Fig. 1E).

Data analysis of oscillation dynamics. We computed local oscillation dynamics using measures of oscillation amplitudes and intertrial coherence (ITC) with a phase-locking factor (J. M.
Data analysis of oscillation dynamics. We computed local oscillation dynamics using measures of oscillation amplitudes and intertrial coherence (ITC) with a phase-locking factor (J. M. Palva et al., 2005), separately for the hilus and fissure. The amplitude and ITC time series were averaged across trials for each condition from −600 to 600 ms relative to CS onset for each filtered frequency. The average baseline values from −600 to −100 ms relative to CS onset were then subtracted from the poststimulus values. The phase time series were further used to compute interareal synchronization between the fissure and hilus using the phase-locking value (PLV; Fig. 1F). The PLV was normalized by dividing each value by the average PLV obtained by shuffling trials within a condition and measurement site 100 times, to control for the contributions of stimulus-driven artificial synchronization (Hirvonen et al., 2018).

To estimate coupling across frequencies, PAC was computed with the PLV between the phase of the slow oscillation (low frequency, LF) and the phase of the amplitude envelope of the high-frequency (HF) oscillation for n:m ratios between 2 and 9 (Fig. 1G; J. M. Palva et al., 2005; Siebenhühner et al., 2016). Phase transfer entropy (phase TE; Lobier et al., 2014) was computed to identify directionality in the narrow-band oscillatory signals. Phase TE was derived from the instantaneous phase time series of the signals X_hilus(t) and Y_fissure(t), expressed as θ_hilus(t) and θ_fissure(t) (Fig. 1E). To create a bias-free measure, we computed the differential TE, dTE = pTE_hil→fis − pTE_fis→hil, and used this measure for further analysis (Fig. 1H).
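The PLV and its surrogate normalization can be written compactly. Below is a minimal numpy sketch with hypothetical argument names; the paper's implementation details (for example, the exact shuffling scheme) may differ.

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase-locking value between two phase series.
    phase_a, phase_b: (n_trials, n_samples) instantaneous phase.
    Returns the PLV per sample across trials."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b)), axis=0))

def plv_normalized(phase_a, phase_b, n_shuffles=100, seed=0):
    """Empirical PLV divided by the mean trial-shuffled PLV, to control
    for stimulus-locked (artificial) synchronization, as in the text."""
    rng = np.random.default_rng(seed)
    surrogate = np.zeros(phase_a.shape[1])
    for _ in range(n_shuffles):
        perm = rng.permutation(phase_b.shape[0])  # shuffle trials of one site
        surrogate += plv(phase_a, phase_b[perm])
    return plv(phase_a, phase_b) / (surrogate / n_shuffles)
```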
Statistical analysis. To avoid different signal-to-noise ratios (SNRs) from influencing the results, the number of trials between conditions (CR vs no-CR; highest HR vs lowest HR) was equalized before the statistical analyses by randomly selecting trials to match the minimum number of trials within a session. Statistically significant changes in oscillation amplitudes, ITC, and interareal synchrony were assessed by deriving null distributions (n = 20,000) using random flips with a probability of 0.5 (Monte Carlo p < 0.025, two-tailed). To correct for multiple comparisons, we used the Benjamini-Hochberg procedure for each analysis time window.

To assess the significant differences between CR and no-CR trials, we first estimated the differences in neuronal dynamics within each animal. We then used a paired t test to identify the t-threshold (p < 0.025, two-tailed) for each time t and frequency f and obtained the observed t-sum based on temporal adjacency. The maximum observed t-sum was tested against the t-sum null distribution from 1,000 surrogates (Monte Carlo p < 0.05) using cluster-based permutation statistics, which account for multiple comparisons (Maris and Oostenveld, 2007). The same cluster-based permutation statistics were used to compare the sessions with the lowest and highest HR. The null distributions were derived by randomizing trials for the comparison between CR and no-CR trials and by randomizing the sessions for the comparison of learning.

Individual-level statistical analysis for oscillation amplitude differences was computed using a cluster-based permutation method by randomizing the CR and no-CR trials and the sessions with the lowest %HR and highest %HR, respectively, to obtain surrogate mean values. Individual-level statistical analysis for ITC values was obtained by comparing each ITC value with a surrogate null distribution [Monte Carlo p < 0.05, false discovery rate (FDR) corrected].

To directly study the main effects and the interaction effect of successful memory retrieval within a session (CR vs no-CR) and consolidation across TEBC training sessions, we conducted two-way rm ANOVA on both the oscillation amplitudes and the ITC within the time-frequency regions of interest (p < 0.025). The statistical analysis for 1:1 interareal synchronization between conditions was performed using a two-sample t test for the time-frequency regions of interest.

The presence of significant PAC in the poststimulus periods was assessed post hoc using a one-sample t test across all subjects and sessions for each pair of low and high frequencies, analysis time window, and laminar pair (FDR corrected). For the statistical analysis of phase TE, surrogate data were obtained by shuffling trials 1,000 times and comparing the surrogate distributions with the empirical data to test statistical significance against a null hypothesis of no information transfer between the hilus and fissure (Monte Carlo p < 0.05, FDR corrected).

Behavioral data

Associative learning was assessed using a TEBC task (Fig. 1A; see Materials and Methods for more details). HR (the percentage of CRs) significantly increased as a function of the session (Pearson's r = 0.49; p = 4.51 × 10⁻⁵; Fig. 1B), but the sessions with the best performance varied across animals. The mean RT for CRs, averaged across all sessions and animals, was 558.72 ms with an SEM of 5.7 ms. To control for the confounding effects of individual differences, we also conducted rm ANOVA, confirming a significant change in HR across sessions (F(7,49) = 5.72; p = 7.17 × 10⁻⁵) and indicating that the rats learned to anticipate the shock US and shield the eye with a CR as training progressed.

Figure 1. Overview of the approach. A, Task schematics and an example EMG trace. In the TEBC task, a tone was used as the CS, followed by the US (periorbital shock). Eyeblinks were recorded with EMG. The gray area indicates the time window in which eyeblinks were counted as a CR. B, HR for the CRs as a function of session, with mean and SEM. The circles represent data from individual rats (n = 8). C, A graphical illustration of recording electrode placement in the rat hippocampus, targeting the fissure (f, upper) and hilus (h, lower). D, E, An example of broadband raw signals recorded from the fissure (blue) and hilus (orange) for a single trial, and of their narrow-band filtering and transformation to complex values to obtain amplitude and phase time series. F, Interareal synchronization between fissure and hilus was estimated by computing the PLV between signals and comparing the empirical values to a surrogate distribution. G, The n:m PAC was computed between the phase of a lower frequency (n) and the amplitude of a higher frequency (m). The traces show the θ phase in the hilus and the γ amplitude envelope in the fissure (black). H, Differential phase transfer entropy (phase dTE) was estimated from the phase-lag information from each region using an analysis window of 0.5 s during the post-CS period. (See Materials and Methods for more details.)
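Returning to the behavioral analysis, the reported statistics (Pearson correlation of HR with session and the one-way rm ANOVA across sessions) can be reproduced with standard tools. A sketch with hypothetical column names:

```python
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.stats.anova import AnovaRM

def behavioral_stats(df: pd.DataFrame):
    """df: one row per rat x session with columns 'rat', 'session', 'hr'
    (8 rats x 8 sessions in the experiment described above)."""
    r, p = pearsonr(df["session"], df["hr"])          # HR vs session number
    aov = AnovaRM(df, depvar="hr", subject="rat",
                  within=["session"]).fit()           # rm ANOVA on HR
    return (r, p), aov.anova_table
```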
Spatiotemporal dynamics of oscillations in response to the CS

Figure 2A shows group-averaged CSD re-referenced LFP traces from the hippocampal subfields (fissure, hilus). Consistent with the fissure encompassing the dendrites of the DG granule cells receiving the main input from the EC, CS-evoked responses were robust in the fissure but not in the hilus. In addition, the evoked responses were reflected in both oscillation amplitudes and ITC, showing transient broadband responses in the fissure but also in the hilus (Fig. 2B-E; Monte Carlo p < 0.025, two-tailed, FDR corrected). This time-locked activity was followed by induced alpha [α, 8-12 Hz; note that this band is usually defined as theta (θ) in the rodent literature] and high-gamma (high-γ, 50-100 Hz) band responses in the fissure (Fig. 2B,C, gray, light gray; Monte Carlo p < 0.025, two-tailed, FDR corrected). In addition, concurrent suppression of θ (here 4-6 Hz) and β/γ band (23-40 Hz) amplitudes was observed in both the fissure and hilus (Fig. 2B,C, light gray trace; Monte Carlo p < 0.025, two-tailed, FDR corrected). Despite similar spatiotemporal oscillatory profiles in the hilus and fissure, induced amplitudes were stronger in the fissure, which is the primary site of EC inputs via the perforant pathway (Scharfman, 2016).

Distinct spatiotemporal patterns for the retention of stimulus contingencies between CS and US and for consolidation across training sessions

We next explored whether distinct spatiotemporal patterns could be identified for the retrieval of stimulus contingencies between the CS and the US and for memory consolidation across the daily training sessions (Fig. 3A-D). To examine this, we used the eyeblink CR as an approximation of contingency detection and retrieval of the CS-US association, comparing trials with a CR to those without a CR (no-CR) within the session. After the initial trials, during which representations of the CS and US are formed, the CR is thought to depend on retrieving the association between the CS and the US as well as on planning and executing the subsequent motor response (Prokasy, 1984). To assess the spatiotemporal patterns of oscillations related to successful contingency detection, we computed oscillation amplitudes and ITC separately for trials with and without a CR for each animal and condition. Sustained amplitudes in the α and β bands at ∼300 ms from CS onset were stronger for the CR trials than for the no-CR trials in the fissure and, to a lesser extent, in the hilus (Fig. 3A; Monte Carlo p < 0.050). In contrast, there was no noticeable difference between the CR and no-CR trials in the ITC (Fig. 3A, bottom). In addition to group-level significance, the results were reproduced by performing the statistical analysis at the individual level (Fig. 3C; Monte Carlo p < 0.050).

We then tested whether we could dissociate a separate spatiotemporal pattern predicting learning across the daily training sessions by comparing the data from the sessions with the highest and lowest %HR for each animal. Transient broadband amplitudes and ITC were stronger for the session with the highest %HR than for that with the lowest %HR (Fig. 3B; Monte Carlo p < 0.050). These results were also reproduced by statistical analysis at the individual level (Fig. 3D; Monte Carlo p < 0.050).
Figure 4. Interareal synchronization and directed interactions between hilus and fissure. A, Group-averaged interareal synchronization between hilus and fissure estimated with the PLV. Data are averaged across trials, sessions, and then across all rats (left). B, Same as in A, but presenting the PLV for the contrast between CR and no-CR trials (left) and the contrast between the highest and the lowest HR (right). The transparency highlights the times and frequencies with a significant difference. The scatter plots below show individual rat data, with red horizontal lines indicating the median (paired t test, p < 0.05). C, Group-averaged differential phase transfer entropy (dTE) across sessions and rats during a 0.5 s post-CS period. Green marks indicate significant dTE from hilus to fissure; the blue mark indicates significant dTE from fissure to hilus. D, Distribution of mean dTE derived from surrogate data by trial shuffling. The green or blue arrows in each subplot indicate the mean dTE of the empirical data.

Phase synchronization and directional interactions among the hippocampal subfields

In the hippocampus, information flows from the EC to the dendrites of the DG granule cells and CA1 pyramidal cells lining the fissure (perforant path) and from the DG to CA3 pyramidal cells via the hilus (mossy fibers). From the hilus, information feeds back to the CA1 pyramidal cell dendrites lining the fissure (Fig. 1C). To study the information flow between the hippocampal subfields, we computed the phase synchronization of oscillations between the hilus and fissure. Robust phase synchrony between the hilus and fissure was found in the α band and in the high-γ band (Fig. 4A, right; Monte Carlo p < 0.010, FDR corrected), despite the lack of concurrent oscillation amplitude increases. Interestingly, as for the local oscillations, the sustained α-β band (9-16 Hz) phase synchronization between the hilus and fissure was significantly higher for the highest %HR versus the lowest %HR (Fig. 4B, right; p < 0.05), whereas the early transient α and β band phase synchronization was stronger for CR versus no-CR trials (Fig. 4B, left; p < 0.05).

To further establish whether the oscillatory interactions are directional, as predicted by feed-forward information flow from the fissure to the hilus or by feedback processes from the hilus to the fissure, we estimated the directionality of coupling using phase TE (Lobier et al., 2014), focusing on the frequencies showing significant interareal synchrony. Differential TE (dTE) was derived as the difference between pTE_hil→fis and pTE_fis→hil across all sessions and rats. In the α and β bands, information flow was significant from the hilus to the fissure (Fig. 4C,D, green marks; Monte Carlo p < 0.050), whereas oscillations in the high-γ (80 Hz) band showed the opposite direction, from the fissure to the hilus (Fig. 4C,D, blue marks; Monte Carlo p < 0.050).

Nested α-γ oscillations associated with learning across sessions

We then examined the PAC between oscillations across frequencies and laminar pairs. Specifically, we focused on the coupling of the α phase (9 Hz) with higher frequencies. Robust α:γ PAC characterized the LFP signal within the hilus and between the hilus and fissure across all windows (Fig. 5A). PAC did not differ between CR and no-CR trials (Fig. 5B); however, there was a notable increase in α:γ PAC during the sessions with the highest %HR compared with the lowest %HR sessions between 100 and 500 ms from CS onset (Fig. 5C; paired t test; p < 0.05, corrected), showing that α:γ PAC strengthens as a function of learning.
Discussion

TEBC has been widely used to study the neuronal mechanisms of learning in humans (Cason, 1922), rabbits (Schneiderman et al., 1962), and rodents (Weiss et al., 1996; McEchron and Disterhoft, 1999; Tseng et al., 2004). In this study, we used a classical TEBC task in rats to dissociate hippocampal oscillation dynamics linked to the retrieval of stimulus contingencies within a trial (approximated by the CR) from those dynamics associated with memory consolidation, that is, the improvement in performance across the training sessions. Our results demonstrate that different spatiotemporal patterns of hippocampal oscillations reflect these functions. Transient CS time-locked responses were correlated with memory consolidation (learning across sessions), but not with the within-session retrieval of stimulus contingencies. In contrast, induced θ/α and γ band oscillations in later time windows were correlated with the successful retrieval of the CS-US contingency within sessions, but not with memory consolidation across the training sessions. These results demonstrate that the subprocesses of learning are associated with distinct oscillation dynamics.

Transient early broadband activity predicts learning across sessions

The early transient broadband response was similar to that found in human sensory cortices for perception (S. Palva et al., 2005; Palva et al., 2011; Hirvonen and Palva, 2016; Julku et al., 2021) and short-term memory encoding (Palva et al., 2011), as well as in the primary auditory cortex of guinea pigs (Voigt et al., 2018). The transient γ band amplitude response, and the θ-α ITC, predicted memory consolidation (improved performance) across the training sessions. This result agrees with previous findings of the θ band phase at CS onset predicting hippocampal responses and learning in TEBC in rabbits (Seager et al., 2002; Nokia et al., 2015) and in human episodic memory tasks (Griffiths et al., 2021). The early latency of this transient response suggests that it might reflect the encoding of CS features and the feed-forward processing of sensory information (Lamme and Roelfsema, 2000), rather than short-term memory of the CS, retrieval of the CS-US association from memory, or motor response planning. This supports the idea that improved behavioral performance across the training sessions is due to strengthened stimulus representations of the CS, which lead to memory consolidation over time. This is also supported by the interaction effect between memory consolidation and contingency detection in the hilus.
Within-trial retrieval of stimulus contingencies is associated with sustained oscillations

Beyond the encoding of the physical environment, such as spatial mapping during navigation or exploration (O'Keefe and Dostrovsky, 1971; Morgan et al., 2011), the hippocampus is also implicated in representing and encoding visual information in primates (Lee et al., 2012; Jutras et al., 2013; Zeidman et al., 2015). In this study, sustained α and γ band responses were correlated with the successful retrieval of CS-US contingencies, but not with learning across the daily training sessions. This indicates that the processes related to retrieving the CS-US association are distinct from those mediating improvements in learned behavior over extended periods (hours, days). Our study, dissociating these two processes in a classical TEBC task, provides a novel view of the role of concurrent θ and γ oscillations: the traditional view has been that θ band amplitudes (Berry and Thompson, 1978; Winson, 1978) and θ band phase (Nokia et al., 2015) are related to learning, whereas our study demonstrates that these oscillations are, in fact, associated with the successful retrieval of stimulus contingencies. Taken together, we propose that the switch from transient broadband oscillations to sustained θ, α, and γ oscillations reflects a transition from unconscious, learning-induced feed-forward processing to short-term mnemonic associations between the CS and US, enabling the retrieval of stimulus contingencies and appropriate action selection.

Interareal coupling between the hippocampal subregions

Interareal synchronization between the fissure and hilus in the α, β, and γ bands is consistent with the well-established anatomical connectivity between hippocampal subregions (Andersen et al., 2007) and with narrow-band synchronization among hippocampal areas (Gloveli et al., 1997; Akam et al., 2012). Similar to the temporal pattern observed in the local oscillation amplitudes, the transient early α-β band synchronization was correlated with memory consolidation across the training sessions, whereas the later sustained α-β band synchronization was correlated with the retrieval of CS-US contingencies. In these frequency bands, the direction of information flow was significant from the hilus to the fissure, while the opposite was observed for high-γ activity. Considering that the neocortex sends projections to the DG granule cells, whose dendrites occupy the molecular layer along the fissure, and that the DG granule cells convey the signal to CA3 pyramidal cells via their axons passing through the hilus, these results suggest that, in accordance with previous work (Jensen et al., 2015; Michalareas et al., 2016; Richter et al., 2017), oscillation-based interactions in the γ band could mediate the feed-forward processing of information (along the granule cells), while the slower frequencies could exert feedback from the EC to the DG granule cell and CA1 pyramidal cell dendrites.
Nested alpha-gamma coupling predicts memory consolidation

In accordance with θ oscillations providing a clocking mechanism for the temporal coding of information in the hippocampal-subicular network (Buzsáki and Chrobak, 1995; Chrobak and Buzsáki, 1996; Lisman and Jensen, 2013), robust PAC between the phase of the α oscillation and the amplitude envelope of the γ oscillation characterized local and interareal hippocampal oscillation dynamics. Intriguingly, similar to the interareal synchronization, the strength of this α:γ PAC predicted memory consolidation across training sessions, suggesting that learning over time depends on a complex pattern of both local and interareal interactions between hippocampal subregions. These data agree with prior studies that have observed θ-γ PAC in the rat hippocampus during a T-maze task (Tort et al., 2008) and associative learning (Tort et al., 2009), as well as in the neocortex during a recognition memory task (Siebenhühner et al., 2016; Bahramisharif et al., 2018).

Figure 2. Spatiotemporal dynamics of hippocampal oscillations during TEBC. A, Group-averaged broadband evoked response to the CS, averaged across all trials, sessions, and animals, from the fissure (left) and hilus (right). Shading indicates SEM. Dashed lines indicate CS onset and the mean (±SEM) latency of the CR (558.7 ± 5.7 ms), respectively. B, Baseline-corrected time-frequency representation (TFR) of oscillation amplitudes in response to the CS, separately for the fissure (left) and hilus (right). Oscillation amplitudes were averaged across trials, sessions, and then across all rats (n = 8). The horizontal bar below the panel indicates analysis periods relative to CS onset: 0-0.1 s (black), 0.1-0.3 s (gray), 0.3-0.5 s (light gray). C, Oscillation amplitudes averaged across the three time windows. Horizontal colored bars denote statistical significance at p < 0.025 (see Materials and Methods for details). D, E, Intertrial coherence (ITC) estimated with the PLF, as in B and C. Horizontal bars denote statistical significance at p < 0.025.

Figure 3. Distinct spatiotemporal profiles of hippocampal oscillations. A, Oscillation amplitudes (top panel) and ITC (bottom panel) for the contrast between CR and no-CR trials. The color scale indicates the amplitude (top) and ITC (bottom) for each time-frequency element. The significant clusters of times and frequencies (Monte Carlo p < 0.05) are highlighted with transparency, while nonsignificant observations are nontransparent. Dashed lines indicate the mean RT. Solid lines indicate CS onset. B, The same as in A, but for the contrast between the sessions with the lowest and the highest HR. C, Individual-level statistics for the contrast between CR and no-CR, and D, for the lowest versus highest HR. The color scale indicates the fraction of individuals showing significance in each time-frequency element (Monte Carlo p < 0.050). E, F, Two-way ANOVA. E, Representation of the significant times and frequencies denoting the main effects of CR versus no-CR (red) and highest %HR versus lowest %HR (blue), along with the interaction of the two factors (pink), on amplitude (ANOVA p < 0.025). Fissure (left), hilus (right). Black lines denote CS onset. F, The same convention as in E, but illustrating the main effects of the ANOVA factors on ITC.
Figure 5. PAC for the CS. PAC for each laminar pair (H, hilus; F, fissure) is represented on the x-axis, and the high frequency is plotted on the y-axis. Each panel shows one analysis window of interest (from left to right: leftmost, 0-0.1 s; middle, 0.1-0.3 s; rightmost, 0.3-0.5 s after stimulus onset). A, PAC using all trials: the PAC difference between poststimulus and prestimulus time windows. B, PAC difference between CR and no-CR trials. C, PAC difference between the highest %HR and lowest %HR. Black stars indicate a statistically significant difference between the contrasting conditions (p < 0.05, FDR corrected).
The Landau-Pomeranchuk-Migdal effect and transition radiation in structured targets

The radiation from high-energy electrons is investigated for the case when a target consists of several separated plates. The spectrum of radiation is considered in the region in which the bremsstrahlung is under the influence of the multiple scattering of the projectile (the LPM effect) and of the polarization of the medium, and in which the hard part of the boundary radiation contributes. In this region the general expression for the radiation spectrum is obtained for the N-plate target. A qualitative description of the arising interference pattern is given.

Introduction

The process of bremsstrahlung from a high-energy electron occurs over a rather long distance, known as the formation length. If the formation length of the bremsstrahlung becomes comparable to the distance over which the mean angle of multiple scattering becomes comparable with the characteristic angle of radiation, the bremsstrahlung will be suppressed (the Landau-Pomeranchuk-Migdal (LPM) effect [1], [2]). The influence of the polarization of a medium on the radiation process also leads to suppression of soft photon emission (the Ter-Mikaelian effect, see [3]).

A very successful series of experiments [4]-[6] was performed at SLAC in recent years. In these experiments the cross section of the bremsstrahlung of soft photons with energies from 200 keV to 500 MeV from electrons with energies of 8 GeV and 25 GeV was measured with an accuracy of the order of a few percent. Both the LPM effect and dielectric suppression (the effect of the polarization of the medium) were observed and investigated. These experiments posed a challenge for theory, since in all previous papers the calculations (cited in [7]) were performed only to logarithmic accuracy, which is not enough for a description of the new experiments.

Very recently the authors developed a new approach to the theory of the LPM effect [7], in which the cross section of the bremsstrahlung process in the photon energy region where the influence of the LPM effect is very strong was calculated including a term ∝ 1/L, where L is the characteristic logarithm of the problem, and with the Coulomb corrections taken into account. In the photon energy region where the LPM effect is "turned off", the obtained cross section reproduces the exact Bethe-Heitler cross section (within power accuracy) with the Coulomb corrections. This important feature was absent in the previous calculations. The polarization of the medium is incorporated into this approach.

A considerable contribution to the soft part of the measured radiation spectrum comes from photon emission at the boundaries of the target. In [7] we investigated the case when the target is much thicker or much thinner than the formation length of the radiation. A target of intermediate thickness was studied in paper [8]. In the latter paper we derived a general expression for the spectral probability of radiation in a thin target and in a target of intermediate thickness in which the multiple scattering, the polarization of the medium, and the radiation at the boundaries of the target are taken into account. In [9] the effect of multiphoton emission from a single electron was studied; this effect is very essential for understanding the data [4]-[6]. The LPM effect has recently been under active investigation, see e.g. [10], [11] and the review [12]. In papers [7], [8] the target was considered as a homogeneous plate. A radiator consisting of a set of thin plates is of great interest.
The radiation from several plates in the relatively hard part of the spectrum, in which the bremsstrahlung under the condition of the strong LPM effect dominates, was investigated recently in [13]. A rather curious interference pattern in the spectrum of the radiation was found, which depends on the number (and thickness) of plates and on the distance between plates. In this part of the spectrum one can neglect the effects of the polarization of the medium.

In the present paper the probability of radiation in a radiator consisting of N plates is calculated. The transition radiation dominates in the soft part of the considered spectrum, while the bremsstrahlung under the influence of the strong LPM effect dominates in the hard part. The intermediate region of photon energies, where the contributions of both mentioned mechanisms are of the same order, is of evident interest, and we consider this region in detail. In this region the effects of the polarization of the medium are essential. A numerical calculation was performed for a radiator of two gold plates with thickness l_1 = 0.35% L_rad, where L_rad is the radiation length (the same object was considered in [13]). The interference pattern depending on the distance between the plates was analyzed in the intermediate region. Another interference pattern was found in the soft part of the spectrum, where only the transition radiation contributes.

Radiation from structured target

With allowance for the multiple scattering and the polarization of a medium, we have Eq. (2.1) for the spectral distribution of the probability of radiation (see Eq. (4.4) of [7] and Eq. (2.1) of [8]). Here ε is the energy of the initial electron, ω is the energy of the radiated photon, ε′ = ε − ω, n is the density of electrons in the medium, l is the length of the trajectory of the particle, and the function g(t) describes the change of the density of the medium along the trajectory. The mean value in Eq. (2.1) is taken over states with a definite value of the two-dimensional operator ϱ (see [7], Section 2). The propagator of the electron is expressed through the Hamiltonian H(t); in it, C = 0.577216... is Euler's constant. The contribution of the scattering of the projectile on the atomic electrons may be incorporated into the effective potential V(ϱ). The summary potential, including both elastic and inelastic scattering, is given by Eq. (2.4). In Eq. (2.1) it is implied that the subtraction is made at V = 0, κ = 1.

In [8] the target was one plate of arbitrary thickness l_1. Here we consider the case when the target consists of N identical plates of thickness l_1 with equal gaps l_2 between them. The case l_1 ≪ l_c will be analyzed, where l_c is the characteristic formation length of radiation in the absence of matter (κ_0 = 0) and ϑ_c is the characteristic angle of radiation. In [7] (Sect. 5) we obtained the expression (2.7) for the operator S(t_2, t_1), where H_0 = p², V(ϱ, t) = V(ϱ)g(t), and T means the chronological product.

Let us introduce the new variables (2.8). Substituting these variables into Eq. (2.7), and using the corresponding operator equality and the condition (2.6), we obtain the transformed expression. Here we neglected the term −iH_0 T_1 in the exponent (exp(−iH_0(τ_2 + T_1)) → exp(−iH_0 τ_2)), since H_0 T_1 ∼ p_c² T_1 ≪ 1; we also neglected the term 2pτ in the argument of the function V(ϱ + 2pτ) (the term of the order 1/p_c is conserved in comparison with the term of the order p_c T_1, see [7], Sect. 5). Below we will use the matrix element with a(ϱ) ≡ V(ϱ)T_1, calculated between states with definite momentum.
The potential V(ϱ) in Eq. (2.4) is written in the form (2.13), where the parameter ϱ_b is defined by a set of equations (we rearranged the terms in Eq. (5.9) of [7]) in which L_1 is defined in Eq. (2.4). The parameter ϱ_b ≃ 1/p_c is determined by the characteristic angle of radiation (momentum transfer). We calculate the matrix element g_n(p′, p) in the first approximation, neglecting the correction terms containing v(ϱ); this gives the matrix element (2.15).

In the calculation of the expression (2.12) we insert the combinations of the state vectors |p_1⟩⟨p_1| ... |p_{n−1}⟩⟨p_{n−1}| between the operators exp(−a(ϱ)) and use the matrix element calculated in (2.15); we then have Eqs. (2.16) and (2.17). The obtained function d_n(α) is a polynomial in α of degree (n − 1). After the substitution α = cosh η we find the corresponding result from Eqs. (2.17) and (2.16); for the case η ≪ 1, and in the limiting case nη ≪ 1, simplified forms follow.

Using the results obtained, we can now calculate the mean value entering Eq. (2.1) (Eq. (2.22)). If we use the same procedure for the second mean value in (2.1), we obtain Eq. (2.23). We now split the spectral distribution of the probability of radiation into two parts (Eq. (2.24)), where dw_br/dω is the spectral distribution of the probability of bremsstrahlung with allowance for the multiple scattering and the polarization of the medium, and dw_tr/dω is the probability of the transition radiation obtained in the framework of quantum electrodynamics. Note that the subtraction in Eq. (2.24) has to follow the procedure (2.25). The integral in the exponential in (2.24) is evaluated using the notations introduced in (2.2) and (2.8).

Substituting (2.22), (2.23), (2.26), and (2.27) into Eq. (2.24), we obtain the expression (2.28) for the spectral distribution of the probability of the bremsstrahlung with allowance for the multiple scattering and the polarization of the medium for the N-plate target; here we used (2.22), (2.23), and (2.8). Note that in the subtraction procedure, when q → 0, the functions b, β_n → ∞. The formula (2.28) can be rewritten in the form (2.30), which is more convenient for applications. For one plate (N = 1) we have n_1 = n_2 = 0 and n = 1.

In the integral (2.28) we rotate the integration contours over τ_1, τ_2 by the angle −π/2 and substitute the variables τ_{1,2} → −ix_{1,2}; then we carry out a change of variables. Here the integration by parts is carried out in the term ∝ r_2, first over x and then over z in the term containing ln(1 + xz(1 − z)). The expression (2.33) was derived in [7], Sect. 5 (see also the references therein) with allowance for the correction term v(ϱ) (see Eq. (2.13)).

In the case of weak multiple scattering (b ≫ 1), neglecting the effect of the polarization of the medium (κ = 1) and expanding the integrand in (2.34) and (2.36) over 1/b, we have the probability of radiation (2.37). If the distance between the plates is small (T ≪ 1), a simple limiting form results; in the opposite case (T ≫ 1) one can obtain the asymptotic expansion of G(T) using the method of stationary phase (a similar method is used in the derivation of the Stirling formula). Note that the main term of the decomposition in Eq. (2.37) is the Bethe-Heitler probability of radiation from two plates, which is independent of the distance between the plates. This means that in the case considered we have independent radiation from each plate, without interference, in the main order over 1/b. The interference effects appear only in the next orders over 1/b.

In the case of strong multiple scattering (b ≪ 1, p_c² ≫ 1) we consider first the transition region from T ≪ b (the formation length is much longer than the distance between the plates) to T ≫ b (at small T, T ≪ 1).
We introduce the parameter δ = iT/b; then, assuming b ≪ 1, T ≪ 1, we obtain an expression in terms of the functions f_1(δ) and f_2(δ). When the formation length is much larger than the thickness of the target as a whole (T ≪ b, |δ| ≪ 1), one can decompose the functions f_1(δ) and f_2(δ) into Taylor series over δ. Retaining the main terms of the expansion, we find the probability of radiation (2.42). In the opposite case b ≪ T ≪ 1 (|δ| ≫ 1), neglecting the effect of the polarization of the medium (κT_1 ≪ 1), we have Eq. (2.43). Note that at T = 1 the probability (2.43) is, within logarithmic accuracy, the doubled probability of radiation from one plate with thickness l_1, Eq. (2.44).

The case of strong multiple scattering (b ≪ 1) for photon energies where T ≥ 1 is of special interest. In this case we can neglect the polarization of the medium (κ = 1) and disregard the terms ∝ κT_1 in the exponents of the expressions (2.34) and (2.36), since T_1 ≪ 1. In the integral over τ_1 in (2.34) we add and subtract the contribution of the interval T ≤ τ_1 < ∞. The sum gives Eq. (2.33), i.e. the radiation from one plate, while in the difference the main contribution comes from the region τ_2 ∼ 1, so that one can disregard the second terms in the square brackets in Eqs. (2.34) and (2.36). As a result we obtain Eq. (2.45), where si(z) is the integral sine and ci(z) is the integral cosine. At T ≫ 1 we have the asymptotics (2.46).

We carried out the analysis of the cases N = 1, 2 using Eq. (2.28). We illustrate the application of Eq. (2.30) for the case N = 4.

We consider now the case of large N (N ≥ 3). If the formation length of the bremsstrahlung is shorter than the distance between the plates (T > 1), interference of the radiation from neighboring plates takes place. Using the probability of radiation from two plates (2.37), we obtain in the case of weak multiple scattering Eq. (2.49). In the opposite limiting case |η| ≪ 1 (η² ≃ δ), N|η| ≪ 1 (see Eqs. (2.20) and (2.21)), i.e. when the formation length of the bremsstrahlung is longer than the radiator thickness, the radiation act takes place on the target as a whole. In this case, as follows from Eq. (2.21), the parameter b diminishes N times (the value p_c² increases N times). The detailed analysis of the case of two plates conducted above supports this result. In the limiting case of strong multiple scattering (b ≪ 1) one can see this from (2.42).

In the case |η| ≪ 1 (the formation length of the bremsstrahlung is longer than the distance between the plates, as before) and large N (including the case N|η| ≫ 1), one can substitute the summation over n by an integration in the expression for the probability of radiation (2.28). Using Eqs. (2.19)-(2.22), we arrive at an expression in which ν = 2√(iq) (see [7], Sect. 2). Four regions contribute to the sum and to the integrals over τ_1, τ_2 in Eq. (2.28), see also Eq. (2.30). In the first region we take into account that νT = η ≪ 1. The third region gives the same contribution as the second after the substitution τ_1 ↔ τ_2, t_1 ↔ t_2.

A qualitative analysis of the radiation in the structured target

We investigate the behavior of the spectral distribution ω dw/dω using as an example the case of two plates with thickness l_1 and the distance between the plates l_2 ≥ l_1, which was analyzed in detail in the previous Section. For plates with thickness l_1 ≥ 0.2% L_rad and in the energy interval ω > ω_p, in which the effects of the polarization of the medium can be discarded, the condition (2.7) is fulfilled only for a sufficiently high energy ε, when the characteristic energy ω_c (defined through the number density n_a of atoms in the medium) is such that ω_p ≪ ω_c.
We study the situation when the LPM suppression of the intensity of radiation takes place for relatively soft photon energies: ω ≤ ω_c ≪ ε. We consider first the hard photons, ω_c ≪ ω < ε. In this interval of ω the formation length l_0 (2.2) is much shorter than the plate thickness l_1 (T_1 ≫ 1); the radiation intensity is the incoherent sum of the radiation from the two plates and is independent of the distance between them. In this interval the Bethe-Heitler formula is valid. For ω ≤ ω_c the LPM effect turns on, but at ω = ω_c the thickness of the plate is still larger than the formation length l_0 (the opposite case will be considered at the end of the Section), so that the formation of radiation takes place mainly inside each of the plates.

As ω decreases we pass to the region where the formation length l_c > l_1, but the effects of the polarization of the medium are still weak (ω > ω_p). Within this interval (for ω < ω_th) the main condition (2.6) is fulfilled. To estimate the value ω_th one has to take into account the characteristic radiation angles (p_c² in Eq. (2.6)) connected with the mean square angle of the multiple scattering. Using Eq. (2.14) and the definition of the parameter b = 1/(4qT_1) in Eq. (2.15), we find the estimate for ω_th (Eq. (3.3)). It is shown in [8] (Sec. 3, see the discussion after Eq. (3.6)) that ω_th ≃ 4ω_b (in [8] the notation ω_2 was used instead of ω_b). Naturally, ω_th < ω_c/T_c, and at ω = ω_c/T_c one has T_1 = 1, l_1 = l_0. It is seen from Eq. (3.3) that when the value of l_1 decreases, the region of applicability of the results of this paper grows.

So, for ω < ω_th the formation length is longer than the thickness of the plate l_1, and the coherent effects depending on the distance between the plates l_2 turn on. For the description of these effects for T = (1 + l_2/l_1) T_1 ≥ 1 one can use Eq. (2.45). For T ≥ π ≫ 1 one can use the asymptotic expansion (2.46), and it is seen that at T = π the spectral curve has a minimum. Let us note that the accuracy of the formulas improves as ω decreases, and the description is more accurate for T ≫ T_1 (l_2 ≫ l_1). With a further decrease of the photon energy ω, the value T diminishes and the spectral curve grows until T ∼ 1. For T < 1 the spectral curve decreases ∝ ln T in accordance with the second formula of Eq. (2.44). So, the spectral curve has a maximum at T ∼ 1. The mentioned decrease continues until the photon energy ω for which (1 + 2/b)T ∼ 1. For smaller ω the thickness of the target is shorter than the formation length. In this case the first formula of Eq. (2.44) is valid, which is independent of the value T.

The next characteristic region of photon energies is ω ≤ ω_p, where the polarization of the medium manifests itself. For ω ∼ ω_0² l_1 ≪ ω_p one has κT_1 ∼ 1, and for the bremsstrahlung contribution instead of Eq. (2.44) we have to use Eqs. (2.42)-(2.43), which include the interference of the bremsstrahlung at the plate boundaries. However, in this region the transition radiation gives the main contribution. The spectral probability of the transition radiation in a radiator consisting of N thin plates of thickness l_1 separated by equal distances l_2 was discussed in many papers, see e.g. [3].
It has the form (3.4), with an interference factor Φ_N(y) built from sin²(ϕ_1/2); here y = ϑ²γ², where ϑ is the angle of emission with respect to the velocity of the incident electron (we assume normal incidence). The formula (3.4) can be derived directly from Eq. (2.25) if one passes to the p-representation and then substitutes the real part of the double integral over time by one-half of the modulus squared of the single integral. After the substitution p² = y, d²p = π dp² = π dy we pass to Eq. (3.4). In the case N = 2 one has

Φ_2(y) = 4 sin²(ϕ_1/2) cos²(ϕ/2). (3.7)

In the integral in (3.4), for y < 2/T the function Φ_2 ≃ sin²(κT_1), and in the interval κ ≥ y > 2/T we can substitute cos²(ϕ/2) by its mean value 1/2. As a result we have, within logarithmic accuracy, the function F_2. The function F_2 vanishes at the points κT_1 ≃ κ_0² T_1 = 2πn; the corresponding photon energies ω^(2n) follow. At the points κT_1 ≃ π(2n + 1) the function F_2 has minimums which depend on the distance l_2 between the plates:

ω^(2n+1) ≃ 2ω_1/(2n + 1), F_2 ≃ 2 ln(κT/2) = 2 ln((κT_1/2)(l/l_1)) ≃ 2 ln((n + 1/2)π l/l_1), l = l_1 + l_2. (3.10)

The function F_2 has maximums at the points where sin κT_1 ≃ 1 (κT_1 ≃ π(m + 1/2)), and at these points the value (3.11) is attained; i.e. both the values of the function F_2 and the positions of its maximums are independent of the distance between the plates in the wide interval l_2 ≥ l_1.

Now we perform a similar analysis for arbitrary N. In the region y < 2/(NT) one has Φ_N ≃ sin²(NκT_1/2), and in the interval κ > y > 2/T the phase ϕ varies fast, so that the corresponding factor can be substituted by its mean value N; in this interval Φ_N ≃ N sin²(κT_1/2). In the intermediate region 2/T > y > 2/(NT) the phases ϕ_1 and ϕ are approximately equal, and the function Φ_N ≃ sin²(Nϕ/2) oscillates fast and can be substituted by its mean value 1/2. Taking these results into account and performing the integration over y, we find the function F_N. It follows from this formula that the positions of the minimums at the points ω = ω^(2n) are independent of N, while the minimums at the points ω = ω^(2n+1) disappear as N increases, and for large enough N the function F_N has maximums at these points. The case N ≫ 1 was considered in detail in our recent paper [14].

We will now discuss the results of the numerical calculations given in Figs. 1 and 2; the formulas (2.34), (2.36) and (3.4), (3.5) were used, respectively. The spectral curves of the energy loss were obtained for the case of two gold plates with thickness l_1 = 11.5 µm and different gaps l_2 between the plates. The initial energy of the electrons is 25 GeV. The characteristic parameters for this case are given in (3.13); the curves 1-5 correspond to k = 1 + l_2/l_1 = 3, 5, 7, 9, 11.

At ω > 80 MeV the radiation process occurs independently in each plate, in accordance with the theory of the LPM effect [8]. The interference pattern appears at ω < 80 MeV, where the formation length is longer than the thickness of one plate and the radiation process depends on the distance between the plates T. According to Eqs. (2.46), (2.48) the curves 1-5 have minimums at ω ≃ πω_th/k (T = π), which lie outside of Fig. 1 and will be discussed below. In accordance with the above analysis, the spectral curves in Fig. 1 have maximums at the photon energies ω ≃ ω_th/k (T = 1). These values (in MeV) are ω ≃ 27, 16, 11, 9, 7 for the curves 1, 2, 3, 4, 5, respectively. At a further decrease of ω (and T) the spectral curves diminish according to Eq. (2.44) and attain a minimum at ω_min = ω_th/(k(1 + 2/b)) (T(1 + 2/b) = 1). The corresponding values (in MeV) are ω ≃ 3.5, 2, 1.4, 1.2 for the curves 1, 2, 3, 4; the curve 5 has the smallest value, ω_min ≃ 1 MeV.
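The characteristic points just listed follow from the scaling ω = ω_th T/k (stated explicitly near the end of this Section). A small numerical sketch, with ω_th ≈ 80 MeV for this gold example and b treated as a free parameter (in reality b depends on ω):

```python
import math

def characteristic_points(omega_th=80.0, ks=(3, 5, 7, 9, 11), b=0.3):
    """Characteristic photon energies (MeV) of the two-plate
    bremsstrahlung interference pattern: a maximum at T = 1,
    an interference minimum at T = pi, and the low-energy minimum
    at T(1 + 2/b) = 1, using omega = omega_th * T / k."""
    out = {}
    for k in ks:
        out[k] = {
            "maximum (T=1)": omega_th / k,
            "minimum (T=pi)": math.pi * omega_th / k,
            "low minimum": omega_th / (k * (1.0 + 2.0 / b)),
        }
    return out

for k, pts in characteristic_points().items():
    print(k, {name: round(v, 1) for name, v in pts.items()})
```

With b ≈ 0.3 this reproduces the maxima 27, 16, 11, 9, 7 MeV and the low-energy minima 3.5, 2, 1.4, 1.2, 1 MeV quoted above.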
However, one has to take into account that at ω ≤ 1.5 MeV (κ_0² = p_c² ≃ 2/b) the contribution of the transition radiation becomes significant; starting from ω ≤ 0.6 MeV the contribution of the transition radiation dominates. The spectral curves for the transition radiation in Fig. 2 increase for ω < 0.2 MeV as (κT_1)² ln(2/T) (κT_1 ≪ 1), according to Eq. (3.8), and attain the maximal value at ω^(0) = 4ω_1 = 0.12 MeV (see Eq. (3.11)). The height of the spectral curves at this point is, within logarithmic accuracy, independent of T and roughly the same for all the curves. The minimums of the spectral curves are located at ω^(1) = 2ω_1 ≃ 0.06 MeV according to Eq. (3.10), from which it follows that at this point the spectral curve is higher for larger T. The next maximum is situated at ω = 4ω_1/3 = 0.04 MeV. At ω^(2) = ω_1 = 0.03 MeV the spectral curves have the absolute minimum according to Eq. (3.8). At a further decrease of ω the higher harmonics appear: a maximum at the point ω = 4ω_1/5 = 0.024 MeV, a minimum at ω^(3) = 2ω_1/3 = 0.02 MeV, etc.

The approach developed in this paper is applicable in the interval of photon energies where the effects of the polarization of the medium are essential; it also includes the soft part of the LPM-effect region. For the case given in Fig. 1 our results are given up to ω_max ∼ 20 MeV. On the other hand, in [13] the hard part of the LPM-effect spectrum was analyzed, where one can neglect the effects of the polarization of the medium (ω > 5 MeV for the mentioned case). Although different methods are used in our paper and in [13], the results obtained in the overlapping regions are in quite reasonable agreement with each other.

It is interesting to discuss the behavior of the spectral curves obtained in [13] from the point of view of our results. We consider small values of l_1 (T_c not very large) and a situation when the corrections to the value b in (3.3), which were neglected in [13], are less than 20%. This leads to a difference in the results of less than 10%. We concentrate on the case of a gold target with the total thickness Nl_1 = 0.7% L_rad. The case N = 1, where T_c = 5.8, b⁻¹ = 7.3, ω_c ≃ 240 MeV, ω_th = 4ω_c/(T_c(1 + T_c)) ≃ 24 MeV, is considered in detail in our paper [8]. The curves in the figures of [13] are normalized to the Bethe-Heitler probability of radiation, i.e. they are measured in units αr_2 T_c/(3π). In the region where our results are applicable, ω < ω_th ≃ 24 MeV, one can see in Fig. 2 (G = 0) of [13] a plateau whose ordinate is 10% less than that calculated according to (2.33).

The case of two plates (T_c ≃ b⁻¹ ≃ 3, ω_c ≃ 240 MeV, ω_th ≃ ω_c/T_c = 80 MeV) is given in Fig. 3 of [13]. The lengths of the gaps are the same as in our Fig. 1, except k = 9. The positions and ordinates of the minimums and the maximums at the characteristic points (ω ≃ πω_th/k for the minimums and ω ≃ ω_th/k (T = 1) for the maximums, see above), as well as the behavior of the spectral curves, are described quite satisfactorily by our formulas (see e.g. the asymptotic Eqs. (2.42)-(2.46)). In the case of four plates (T_c ≃ b⁻¹ ≃ 1.5, ω_th ≃ ω_c/T_c = 160 MeV, Fig. 4 of [13]) the value b is not small enough, and we can give a qualitative analysis only. The curves in this figure correspond to the values k = 3, 5, 9, 13 according to Eqs. (2.42)-(2.46), (2.49).
For k = 13 we have from (2.46): the first maximum is at ω ∼ ω_th/k ≃ 12 MeV, the first minimum is at ω ∼ πω_th/k ≃ 40 MeV, the next maximum is at ω ∼ 2πω_th/k ≃ 80 MeV, and the next minimum is at ω ∼ 3πω_th/k ≃ 115 MeV, but the latter is rather obscure because of the large value T = 3π. The agreement of these estimates of the characteristic points with the curve in the figure is quite satisfactory.

Let us note in conclusion that for the observation of the interference pattern at large values of b, which is described by Eq. (2.37) (see Fig. 3), one needs to use very thin plates (l_1 = αL_rad/(2πb)). Since the interference manifests itself only in the terms ∝ 1/b, the value b should not be very large. It seems at first sight that in this situation one can use light elements, e.g. lithium, for which the radiation length L_rad is large. However, these elements have a low nuclear charge Z and a low density, and because of this the LPM effect begins at ω < ω_c which is close to ω_p = ω_0 γ, where the effects of the polarization of the medium are essential. So, one can expect that the optimal situation is attained for elements in the middle of the periodic table. For example, for Ge one has L_rad = 2.30 cm, ω_0 = 44 eV, and at the energy ε = 25 GeV, ω_c = 34 MeV, ω_th = bω_c = 100 MeV, ω_p = 2.2 MeV. Taking b = 3 we have l_1 = 8.9 µm, ω_1 = 6.95 keV. In this situation there is a rather wide interval of photon energies ω_p < ω < ω_th where the interference can be observed.

If one wants to observe the interference pattern G(T) given in Fig. 3 in a wide interval of T: T_min ≤ T ≤ T_max, T_min ≪ 1, T_max ≫ 1, then one has to take into account that ω = ω_th T_1 = ω_th T/k (k = 1 + l_2/l_1). In this situation one has

ω_max = T_max ω_th/k < ω_th ⟹ k > T_max,  ω_min = T_min ω_th/k > ω_p ⟹ k < ω_th T_min/ω_p. (3.14)

Acceptable parameters are T_max ∼ 10, T_min ∼ 1/5. For T > 10 the amplitude of the oscillations is very small, while for T ≥ 0.2 we have a very distinct first maximum. So we have k ≃ l_2/l_1 ≃ 10 for the case considered.

• Fig. 2. The same as in Fig. 1 for E(ω) = dε/dω in the soft part of the spectrum, where only the transition radiation contributes.
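As a numerical check of the transition-radiation harmonics shown in Fig. 2 and discussed above, one can use the scale ω_1. The relation ω_1 = ω_0² l_1/(4π) (in units ħ = c = 1) is our own inference, consistent with both quoted values (ω_1 = 0.03 MeV for the gold plates, assuming ω_0 ≈ 80 eV for gold, and ω_1 = 6.95 keV for Ge); the maxima and minima then sit at 4ω_1/(2m + 1) and 2ω_1/m:

```python
import math

HBARC_EV_CM = 1.973e-5        # hbar*c in eV*cm, to restore ordinary units

def omega_1(omega0_ev, l1_cm):
    """omega_1 = omega_0^2 * l_1 / (4*pi): an inferred relation that
    reproduces the quoted 0.03 MeV (Au) and 6.95 keV (Ge) values."""
    return omega0_ev ** 2 * l1_cm / (4.0 * math.pi * HBARC_EV_CM)

def tr_extrema(w1, n=3):
    """Maxima at kappa*T1 = pi*(m + 1/2) -> omega = 4*omega_1/(2m + 1);
    minima at kappa*T1 = pi*m -> omega = 2*omega_1/m (Eqs. (3.8)-(3.11))."""
    maxima = [4.0 * w1 / (2 * m + 1) for m in range(n)]
    minima = [2.0 * w1 / m for m in range(1, n + 1)]
    return maxima, minima

w1_au = omega_1(80.0, 11.5e-4)            # gold plates: l_1 = 11.5 um
print(w1_au / 1e6)                        # ~0.03 MeV
print([round(v / 1e6, 3) for v in tr_extrema(w1_au)[0]])
# maxima ~0.12, 0.04, 0.024 MeV, as quoted in the text
print(omega_1(44.0, 8.9e-4) / 1e3)        # Ge: ~6.95 keV
```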
Study on Nowcasting Method of Severe Convective Weather Based on SA-PredRNN++

Severe convective weather, characterized by short-term intense precipitation, thunderstorms, and strong winds, poses significant threats to human life and property. Therefore, accurate and efficient prediction of severe convective weather is crucial for disaster prevention. Currently, deep learning for radar echo extrapolation stands as the primary method for forecasting severe convective weather. We propose a predictive recurrent neural network model that integrates a self-attention mechanism, specifically designed for radar echo extrapolation in severe convective weather forecasting. The self-attention mechanism offers the advantage of being lightweight, as it does not substantially increase the model parameters. Additionally, it facilitates global attention extraction, thereby enhancing the model's accuracy to some extent. Using radar echo images from the previous hour as input, the model learns to produce the best forecast for radar echo extrapolation over the subsequent two hours. Our research findings demonstrate that the proposed model outperforms other models in accurately predicting severe convective weather within this two-hour timeframe.

Introduction

Intense convective weather events can have catastrophic impacts. For example, from July 18th to August 1st, 2021, Zhengzhou, China, experienced heavy rainfall, leading to flash floods and significant disruptions to transportation systems. As of 6:00 PM on August 1st, 2021, this extreme weather event had affected a staggering 1.8849 million people and resulted in 292 fatalities [1]. In China, existing weather forecasting methods lack adequate prediction techniques for medium- and small-scale intense convective weather events [2]. Additionally, there is an urgent demand within the meteorological industry for accurate and timely forecasting of intense convective weather events. Therefore, finding effective and accurate methods to forecast intense convective weather has become a challenging task and a key focus area within meteorological research.

Nowcasting is a technique used in weather forecasting aimed at providing real-time updates on weather conditions. Zhang et al. [3] and Yu and Zheng [4] highlighted the advancements made in both research and operations related to severe convective weather. Specifically, there are two primary approaches for severe convective nowcasting: one is extrapolation technology based on radar echoes, and the other is the use of numerical weather forecast models [5]. In recent years, deep learning has emerged as a significant approach for the prediction of severe convective weather [6], and significant progress has been made in the study of deep learning-based severe convective weather forecasting, both domestically and internationally [7]. Hence, we aim to employ deep learning techniques for extrapolating radar echoes to enhance the accuracy of severe convection nowcasting. Lu et al. [8] investigated a model for recognizing heavy precipitation in severe convective weather conditions, utilizing physical parameters and the deep learning model DBNs. Zhou et al. [9] conducted short-term lightning forecasting by employing a deep learning fusion of a semantic segmentation model and multisource observation data. Lee et al. [10] devised a predictive model based on deep learning methods to anticipate the potential impact of a rainstorm on an area before it transpires.
Currently, widely used deep learning models in severe convective nowcasting include Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), and their variants. Sherstinsky et al. [11] provided an overview of RNNs and Long Short-Term Memory networks (LSTM). Han Feng et al. [12] effectively employed RNNs for nowcasting, while Shi et al. [13] introduced the Convolutional Long Short-Term Memory network (ConvLSTM) for precipitation nowcasting. Additionally, Shi et al. [13] enhanced the model by incorporating a Gated Recurrent Unit (GRU) to create the Convolutional Gated Recurrent Unit (ConvGRU), which has fewer parameters. Guo et al. [5] applied the ConvGRU model to extrapolation experiments and achieved significant progress in forecast accuracy. Wang et al. [14] proposed the Spatiotemporal Long Short-Term Memory (ST-LSTM) and later constructed the Memory in Memory (MIM) network for precipitation nowcasting, both of which outperform ConvLSTM. Tian et al. [15] introduced the Generative Adversarial Convolutional Gated Recurrent Unit (GA-ConvGRU) model, effectively addressing the limitations associated with blurry extrapolated images. Lin et al. [16] enhanced the ConvLSTM model by combining it with a self-attention mechanism to capture extended spatiotemporal relationships. Adewoyin et al. [17] developed the Temporal Recurrent U-Net (TRU-NET), which has more parameters, longer training times, and lower nowcasting capability for severe convection.

Based on the previous research, CNN models focus more on spatial information in severe convective forecasting, while traditional RNN or LSTM models face challenges in capturing long-term dependencies and are prone to the issue of vanishing gradients. To improve the accuracy and efficiency of model training, we chose the PredRNN++ model as our network architecture. This model accurately predicts regions with detectable echo signatures. To further enhance performance, we introduced the ProbSparse self-attention mechanism and utilized a weighted loss function during training.

Our contributions can be summarized as follows: (1) Addressing the issue of inaccurate forecasts caused by the aging of high-intensity echoes, we developed a predictive recurrent neural network model with a self-attention mechanism. (2) We applied the model to the timely forecasting of severe convective weather using weather radar data. (3) We validated our approach on existing datasets, demonstrating superior performance compared to existing deep learning methods.

Related Work

PredRNN++ [18] is a recurrent network designed for spatiotemporal predictive learning. It differs from its predecessor, PredRNN, in several respects. Firstly, it replaces the Spatiotemporal Long Short-Term Memory (ST-LSTM) unit with the Causal Long Short-Term Memory (Causal LSTM) unit. Additionally, it introduces the Gradient Highway Unit (GHU) between the first and second layers of the stack. The Causal LSTM unit is a novel recurrent unit that increases the transition depth between adjacent states. The GHU, in turn, facilitates the connection between input information, allowing for improved information flow within the model. This architecture allows for high-speed propagation of gradients between the first and second layers, effectively alleviating the issue of gradient vanishing. It also enables better learning of feature values from previous frames and more efficient utilization of long-term information. Figure 1 shows the structure of PredRNN++.
The Causal LSTM introduces additional nonlinear layers into the recurrent transitions, amplifying features so as to better capture abrupt situations caused by short-term changes. It comprises two memory components, a temporal memory and a spatial memory, as illustrated in Figure 2. The first stage resembles a generic LSTM and updates the temporal state C_t^k. The second stage mirrors the first, using the updated temporal state C_t^k to update the spatial state M_t^k received from the previous layer. The final stage computes the output gate and hidden state H_t^k from X_t, C_t^k, and M_t^k. A Causal LSTM unit thus takes X_t, H_{t-1}^k, C_{t-1}^k, and M_t^{k-1} as inputs and produces updated H_t^k, C_t^k, and M_t^k as outputs; the hidden state from the last time step is used to predict the generated target sequence.

Here C_t^k denotes the temporal memory and M_t^k the spatial memory, where the subscript t is the time step and the superscript k indexes the k-th hidden layer in the stacked Causal LSTM network. The temporal memory depends on the previous state C_{t-1}^k and is controlled by a forget gate f_t, an input gate i_t, and an input modulation gate g_t. The spatial memory M_t^k depends on M_t^{k-1} from the layer below (for the bottom layer, on the top layer's spatial memory at the previous time step). The Causal LSTM uses a cascaded mechanism: the spatial memory is a function of the freshly updated temporal memory, passed through another set of gated structures. The update equations of the k-th layer Causal LSTM, following [18], are:

[g_t, i_t, f_t] = [tanh, σ, σ]( W_1 * [X_t, H_{t-1}^k, C_{t-1}^k] )
C_t^k = f_t ⊙ C_{t-1}^k + i_t ⊙ g_t
[g'_t, i'_t, f'_t] = [tanh, σ, σ]( W_2 * [X_t, C_t^k, M_t^{k-1}] )
M_t^k = f'_t ⊙ tanh( W_3 * M_t^{k-1} ) + i'_t ⊙ g'_t
o_t = tanh( W_4 * [X_t, C_t^k, M_t^k] )
H_t^k = o_t ⊙ tanh( W_5 * [C_t^k, M_t^k] )

In these equations, '*' denotes the convolution operation, '⊙' denotes the Hadamard product, 'σ' is the sigmoid activation function, and the W_i are convolution filters.
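To make the cascaded update concrete, the following is a minimal PyTorch sketch of a single Causal LSTM cell implementing the equations above. It is an illustrative reconstruction rather than the authors' code: the class and argument names (CausalLSTMCell, in_ch, hid_ch) are ours, and details found in public PredRNN++ implementations, such as layer normalization, are omitted.

```python
import torch
import torch.nn as nn

class CausalLSTMCell(nn.Module):
    """One Causal LSTM cell: cascaded temporal (C) and spatial (M) memories."""
    def __init__(self, in_ch, hid_ch, kernel=5):
        super().__init__()
        pad = kernel // 2
        # Stage 1: gates for the temporal memory C, from [X, H, C].
        self.conv_c = nn.Conv2d(in_ch + 2 * hid_ch, 3 * hid_ch, kernel, padding=pad)
        # Stage 2: gates for the spatial memory M, from [X, C, M].
        self.conv_m = nn.Conv2d(in_ch + 2 * hid_ch, 3 * hid_ch, kernel, padding=pad)
        self.conv_m_prev = nn.Conv2d(hid_ch, hid_ch, kernel, padding=pad)   # W_3
        # Stage 3: output gate from [X, C, M], and 1x1 fusion of [C, M].
        self.conv_o = nn.Conv2d(in_ch + 2 * hid_ch, hid_ch, kernel, padding=pad)
        self.conv_h = nn.Conv2d(2 * hid_ch, hid_ch, kernel_size=1)          # W_5

    def forward(self, x, h, c, m):
        g, i, f = torch.chunk(self.conv_c(torch.cat([x, h, c], dim=1)), 3, dim=1)
        c_new = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        g2, i2, f2 = torch.chunk(self.conv_m(torch.cat([x, c_new, m], dim=1)), 3, dim=1)
        m_new = torch.sigmoid(f2) * torch.tanh(self.conv_m_prev(m)) \
                + torch.sigmoid(i2) * torch.tanh(g2)
        o = torch.tanh(self.conv_o(torch.cat([x, c_new, m_new], dim=1)))
        h_new = o * torch.tanh(self.conv_h(torch.cat([c_new, m_new], dim=1)))
        return h_new, c_new, m_new
```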
SA-PredRNN++

Due to the abundance of data and the limited availability of target data, predicting severe convective weather has always been a challenge. Moreover, PredRNN++ has difficulty extracting deep network features while identifying local dependencies and global features. To address this, we introduce the ProbSparse self-attention mechanism [19] between the second layer's Gradient Highway Unit (GHU) and the Causal LSTM layer. This enhances the model's capability to extract global features and local dependencies while keeping the model lightweight, without increasing its complexity, which aids training and raises prediction accuracy. We name the new model Self-Attention PredRNN++ (SA-PredRNN++); it addresses gradient vanishing while increasing memory utilization and strengthening the model's long-term modeling capacity, and it excels at capturing both short-term and long-term features. Figure 3 illustrates the structure of SA-PredRNN++.

The cascaded Causal LSTM structure is highly effective; however, it suffers from vanishing gradients during backpropagation, especially in scenarios involving periodic motion or frequent occlusion, and prolonged transmission may blur features. To alleviate these challenges and ensure the model can continuously and accurately learn frame features, we introduce a Gradient Highway Unit (GHU), as depicted in Figure 4. The GHU is designed to learn relationships across skipped frames, enabling better feature extraction. Its update, following [18], is

P_t = tanh( W_px * X_t + W_pz * Z_{t-1} )
S_t = σ( W_sx * X_t + W_sz * Z_{t-1} )
Z_t = S_t ⊙ P_t + (1 - S_t) ⊙ Z_{t-1}

where P_t signifies the transformed input, S_t the switch gate, and Z_t the hidden state: the switch gate adaptively mixes the transformed input with the previous hidden state.

The attention mechanism assigns different weights to different samples based on their features, extracting the information relevant for analysis and prediction, which improves evaluation results and accelerates training. For instance, the ConvSeq2Seq model proposed by Gehring et al. [20] integrates an independent attention module into each decoder layer, combined with convolutional neural networks.

In recent years, self-attention modules have attracted interest in sequence prediction tasks. The Transformer model, introduced by Vaswani et al. in 2017 [21], uses self-attention to capture long-range dependencies within sequences. This mechanism facilitates the exchange of information and features between network layers, progressively strengthening spatial dependencies from local to global regions and temporal dependencies from internal to external segments. A schematic of the self-attention module is shown in Figure 5. To better localize features during training and to speed up training, we add a ProbSparse self-attention mechanism between the GHU and Causal LSTM layers. The inputs are embedded and multiplied by weight matrices to obtain the query Q, key K, and value V. ProbSparse self-attention first samples a subset of keys and computes a sparsity measure for each query q_i ∈ Q with respect to K; only the dominant queries under this measure attend over all keys, which reduces computation.

The self-attention mechanism differs from plain attention by incorporating a scaling factor in the attention computation, alleviating overflow caused by large inner products. Within self-attention, the correlation between input positions is computed as in formula (12): the similarity between a Query and a Key is their scaled dot product,

Attention(Q, K, V) = Softmax( Q K^T / √d_k ) V    (12)

after which the Softmax weights are applied to the Values to generate the output vector of self-attention.

Loss function

In severe convective weather conditions, radar echo reflectivity values typically exceed 30 dBZ (basic reflectivity). Shi et al. [22] graded rainfall intensity and assigned different weights to different precipitation zones based on radar reflectivity values. Within the effective prediction range, the reflectivity of each pixel in the radar echogram generally lies between 0 dBZ and 75 dBZ and is divided into distinct categories [23]. As shown in formula (13), w(x) denotes the weight assigned to a pixel whose reflectivity x falls in a given dBZ segment.

The Mean Squared Error (MSE) is frequently used to represent the overall error in recognition outcomes:

MSE = (1/N) Σ_{i=1}^{N} (y_i - ŷ_i)^2

where y_i is the standard (ground-truth) value, ŷ_i is the model's prediction, and N is the total sample count. MSE quantifies the accuracy of severe convection forecasts; smaller MSE values indicate smaller errors.
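In training, the segment weights w(x) of formula (13) are combined with the squared error. Since the exact segment boundaries and weights are specific to the paper's formula (13), the sketch below uses illustrative thresholds and weights; they are placeholders, not the published values.

```python
import numpy as np

# Illustrative dBZ segment weights (hypothetical, not the paper's formula (13)).
THRESHOLDS = np.array([20.0, 30.0, 40.0, 50.0])     # segment boundaries in dBZ
WEIGHTS    = np.array([1.0, 2.0, 5.0, 10.0, 30.0])  # weight per segment

def pixel_weights(dbz):
    """w(x): map each pixel's reflectivity to its segment weight."""
    return WEIGHTS[np.searchsorted(THRESHOLDS, dbz, side="right")]

def weighted_mse(y_true, y_pred):
    """Weighted MSE: squared error scaled by w(x) of the ground truth,
    so high-reflectivity (severe convection) pixels dominate the loss."""
    w = pixel_weights(y_true)
    return np.mean(w * (y_true - y_pred) ** 2)
```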
Data set

The weather radar dataset we use comes from the Artificial Intelligence Weather Forecast Innovation Competition organized by the Shanghai Meteorological Bureau in China. Each sample in the radar dataset comprises pixel values organized in a 500×500 format and spans 3 hours: the input covers 1 hour at 6-minute intervals and the target covers the following 2 hours at 12-minute intervals, for a total of 20 frames per sample. The horizontal grid resolution of the radar data is 0.01°. Radar echoes undergo quality control via correlation coefficient analysis. The accepted data range is 15 to 70 dBZ: values exceeding 70 dBZ are capped at 70, while values below 15 dBZ or missing values are set to 0. For training purposes, we use a dataset of 40,000 radar map samples, while the testing dataset contains 3,000 radar map samples. An example grayscale radar image is shown in Figure 6 (grayscale image of radar data); the image gray values map onto integrated reflectivity.

Experiments

Model monitoring metrics

In meteorology, the Probability of Detection (POD), Critical Success Index (CSI), and False Alarm Ratio (FAR) are widely used to assess severe convective weather forecast outcomes. POD and CSI measure the likelihood and accuracy of detecting severe convective weather events: higher POD and CSI indicate more accurate predictions, while lower FAR indicates a lower false alarm rate. With N_hit, N_fa, and N_miss denoting the numbers of hits, false alarms, and misses, respectively:

POD = N_hit / (N_hit + N_miss)
FAR = N_fa / (N_hit + N_fa)
CSI = N_hit / (N_hit + N_miss + N_fa)

Experimental scheme

Figure 7 depicts the model's training process. At time t the model receives 10 radar echo frames as input and outputs a map of 10 future radar echoes, generated by analyzing all the elements at each moment. The loss is computed by summing the average per-frame loss over the forecast radar echo maps, and training minimizes this loss by backpropagation with the Adam optimizer. The training parameters are: initial learning rate 10^-4, learning rate penalty factor 0.5, batch size 30, and 30 iterations. The experimental results demonstrate that the algorithm produces clearer outcomes, effectively capturing the spatial and temporal features of radar maps at various elevations, although there is room for improvement in capturing fine-grained features in later frames and scattered areas. The model thus forecasts rainfall for the next 2 hours from radar map data with heightened precision.

Visual comparison of the forecast images shows that the ConvLSTM predictions appear blurry, while PredRNN++ yields clearer predictions; both models, however, struggle to accurately capture regions of high radar echo intensity. The MIM model excels at capturing areas of high radar echo reflectivity, but exhibits some blurriness in subsequent frames and struggles to depict features in sparse regions accurately. Overall, the experimental results suggest that MIM provides a more precise representation of regions of high radar echo reflectivity, but improvements are still needed to capture finer features in subsequent frames and sparse areas.
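Before turning to the quantitative comparison, the following minimal NumPy sketch shows how the three verification scores defined above can be computed from binarized echo grids. The linear gray-to-dBZ mapping and the 40 dBZ threshold are illustrative assumptions; the competition's exact mapping formula is not reproduced here.

```python
import numpy as np

def to_dbz(gray):
    # Assumed linear mapping from 8-bit gray value to reflectivity (illustrative).
    return gray.astype(np.float32) / 255.0 * 70.0

def verification_scores(obs_dbz, fcst_dbz, threshold=40.0):
    """POD, FAR, CSI from a 2x2 contingency table at a dBZ threshold."""
    obs = obs_dbz >= threshold
    fcst = fcst_dbz >= threshold
    hits = np.sum(obs & fcst)
    misses = np.sum(obs & ~fcst)
    false_alarms = np.sum(~obs & fcst)
    eps = 1e-9  # guard against empty categories
    pod = hits / (hits + misses + eps)
    far = false_alarms / (hits + false_alarms + eps)
    csi = hits / (hits + misses + false_alarms + eps)
    return pod, far, csi

# Usage: pod, far, csi = verification_scores(to_dbz(obs_gray), to_dbz(fcst_gray))
```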
The prediction of severe convective weather relies on binarizing the grid data; we select radar reflectivity thresholds of 40 dBZ and 50 dBZ. We evaluate the extrapolation performance of the models on the grid data using three evaluation metrics (CSI, POD, and FAR), with results presented in Table 1. The ConvLSTM, PredRNN++, MIM, and SA-PredRNN++ models are tested on the evaluation set, predicting the next 10 frames from the preceding 10 frames; the models are compared fairly in terms of experimental settings and hyperparameters.

As Table 1 shows, the SA-PredRNN++ model outperforms the other models while significantly reducing the number of model parameters, showcasing the efficacy of the self-attention mechanism. ConvLSTM does not perform well at either the 40 or the 50 dBZ threshold. Although ConvLSTM is proficient at handling time series and uses convolution for spatial feature extraction, it relies heavily on convolutional layers to capture spatial relationships and is unable to extract long-term features. For the accurate forecast of severe convective weather based on radar echo extrapolation, each image frame contains abundant, closely interconnected spatial information, yet ConvLSTM still cannot capture long-term dependencies effectively. Although PredRNN++ and MIM improve on ConvLSTM, they still fall short in capturing deep network features, owing to deficiencies in image feature extraction and excessive parameters. PredRNN++ nevertheless shows better performance on the POD value, achieving 0.638 and 0.400 at the 10th frame for thresholds of 40 and 50, respectively. On the other evaluation indicators, however, SA-PredRNN++ outperforms: at a threshold of 40, its CSI and POD reach best results of 0.308 and 0.553 at the 10th frame, with FAR also performing better at 0.588; at a threshold of 50, its CSI and POD achieve the best results of 0.195 and 0.294 at the 10th frame, with FAR also better at 0.632. In conclusion, the SA-PredRNN++ model demonstrates better performance than the other models. Figure 9 illustrates the superior CSI of SA-PredRNN++ over the 10 predicted frames: the CSI value decreases for all models as the forecast time increases, but more slowly for our model. Figure 10 compares the two-hour SA-PredRNN++ forecasts (b) with radar observations (a); SA-PredRNN++ predicts the echo drop zone well.

The ablation study in Table 1 demonstrates that integrating the ProbSparse self-attention mechanism into the PredRNN++ model, together with the GHU units and the second Causal LSTM layer, positively impacts the experimental results: at a threshold of 40, the PredRNN++ score rises from 0.278 to 0.384. These findings indicate that the self-attention mechanism enhances connectivity in spatiotemporal memory compared with models lacking it. Table 1 also suggests that pairing GHU units with the self-attention mechanism consistently enhances model performance. Table 2 reports network variants with different placements of the self-attention mechanism within the model.
As a control experiment, we relocated the self-attention mechanism between the second and third layers of the Causal LSTM, resulting in a CSI score of 0.317; placing it between the third and fourth layers increased the CSI score to 0.382. This indicates that configuring the unit directly behind the GHU is most effective, as it enables the GHU to prioritize long-term highway features, short-term deep transition path features, and the spatial features extracted from the current input frame, while the self-attention mechanism facilitates enhanced feature extraction.

Conclusion

We have provided a detailed introduction to our deep learning model SA-PredRNN++. The model uses radar echo data to extrapolate variations of the reflectivity factor in both time and space, enabling the forecasting of severe convective weather. SA-PredRNN++ integrates improved GHU units and incorporates a self-attention mechanism into its second layer, allowing it to capture the spatial information of meteorological elements such as the reflectivity factor throughout the entire time series and accelerating the convergence of the model. Furthermore, the model employs a distinctive neural network framework and a weighted loss function, enhancing its predictive capability in regions of strong echo and enabling more accurate forecasting of severe convective weather. Owing to its ability to capture both temporal and spatial features, the model is highly suited to a wide range of meteorological forecasting tasks. Experimental results demonstrate that it is remarkably precise in forecasting the position, contour, distribution, and progression of severe convective weather.

The SA-PredRNN++ model also has limitations. It relies heavily on historical data for parameter calibration, and the dataset available in this study lacked a sufficient number of instances of severe convective weather. To address this, the dataset was filtered to retain only data above 30 dBZ and underwent noise reduction and rotation operations, which improved output precision. Nevertheless, because of the limited data, the accuracy of predicting intense echo areas decreases noticeably as the prediction horizon lengthens. Future improvements should optimize the construction process, for example by integrating numerical simulation data at each time step to correct transmission errors; integrating other meteorological features, such as temperature and wind field information, will further enhance the accuracy of nowcasting severe convection.

Recommendations

The prediction of meteorological disasters is a crucial area of future research, playing a significant role in safeguarding lives and property and in promoting social stability and development. We urge the general public to give greater attention to this field.

Figure 1: PredRNN++ model structure diagram. Figure 3: SA-PredRNN++ model structure diagram. Figure 5: Self-attention structure diagram. Figure 7: Training process and forecasting process. Figure 8: Comparison of radar echo images of severe convection for the forecast results of the various models. Figure 9: Frame-wise analyses of the next 10 generated radar maps.
Modeling Speech Acts in Asynchronous Conversations: A Neural-CRF Approach

Participants in an asynchronous conversation (e.g., forum, e-mail) interact with each other at different times, performing certain communicative acts, called speech acts (e.g., question, request). In this article, we propose a hybrid approach to speech act recognition in asynchronous conversations. Our approach works in two main steps: a long short-term memory recurrent neural network (LSTM-RNN) first encodes each sentence separately into a task-specific distributed representation, and this is then used in a conditional random field (CRF) model to capture the conversational dependencies between sentences. The LSTM-RNN model uses pretrained word embeddings learned from a large conversational corpus and is trained to classify sentences into speech act types. The CRF model can consider arbitrary graph structures to model conversational dependencies in an asynchronous conversation. In addition, to mitigate the problem of limited annotated data in the asynchronous domains, we adapt the LSTM-RNN model to learn from synchronous conversations (e.g., meetings), using domain adversarial training of neural networks. Empirical evaluation shows the effectiveness of our approach over existing ones: (i) LSTM-RNNs provide better task-specific representations, (ii) conversational word embeddings benefit the LSTM-RNNs more than the off-the-shelf ones, (iii) adversarial training gives better domain-invariant representations, and (iv) the global CRF model improves over local models.

Introduction

With the advent of Internet technologies, communication media like e-mails and discussion forums have become commonplace for discussing work, issues, events, and experiences. Participants in these media interact with each other asynchronously by writing at different times. This generates a type of conversational discourse, where information flow is often not sequential as in monologue (e.g., news articles) or in synchronous conversation (e.g., instant messaging). As a result, discourse structures such as topic structure, coherence structure, and conversational structure in these conversations exhibit different properties from what we observe in monologue or in synchronous conversation (Louis and Cohen 2015).

Participants in an asynchronous conversation interact with each other in complex ways, performing certain communicative acts like asking questions, requesting information, or suggesting something. These are called speech acts (Austin 1962). For example, consider the excerpt of a forum conversation from our corpus in Figure 1. The participant who posted the first comment, C1, describes his situation in the first two sentences, and then asks a question in the third sentence. Other participants respond to the query by suggesting something or asking for clarification. In this process, the participants get into a conversation by taking turns, each of which consists of one or more speech acts. The two-part structures across posts like question-answer and request-grant are called adjacency pairs (Schegloff 1968). Identification of speech acts is an important step toward deep conversational analysis (Bangalore, Di Fabbrizio, and Stent 2006), and has been shown to be useful in many downstream applications, including summarization (Murray et al. 2006; McKeown, Shrestha, and Rambow 2007), question answering (Hong and Davison 2009), collaborative task learning agents (Allen et al. 2007), artificial companions for people to use the Internet (Wilks 2006), and flirtation detection in speed-dates (Ranganath, Jurafsky, and McFarland 2009).
Availability of large annotated corpora like the Meeting Recorder Dialog Act (MRDA) corpus (Dhillon et al. 2004) or the Switchboard-DAMSL (SWBD) corpus (Jurafsky, Shriberg, and Biasca 1997) has fostered research in data-driven automatic speech act recognition in synchronous domains like meeting and phone conversations (Ries 1999; Stolcke et al. 2000; Dielmann and Renals 2008). However, such large corpora are not available in the asynchronous domains, and many of the existing (small-sized) corpora use task-specific speech act tagsets (Cohen, Carvalho, and Mitchell 2004; Ravi and Kim 2007; Bhatia, Biyani, and Mitra 2014) as opposed to a standard one. The unavailability of large annotated data sets with standard tagsets is one of the reasons why speech act recognition has not received much attention in asynchronous domains.

Previous attempts at automatic (sentence-level) speech act recognition in asynchronous conversations (Jeong, Lin, and Lee 2009; Qadir and Riloff 2011; Tavafi et al. 2013; Oya and Carenini 2014) suffer from at least one of the following two technical limitations. First, they use a bag-of-words (BOW) representation (e.g., unigram, bigram) to encode the lexical information of a sentence. However, consider the Suggestion sentences in the example: arguably, a model needs to consider the structure (e.g., word order) and the compositionality of phrases to identify the right speech act for an utterance. Furthermore, a BOW representation can be quite sparse, and may not generalize well when used in classification models. Recent research suggests that a condensed distributed representation learned by a neural model on the target task (e.g., speech act classification) is more effective. The task-specific training can be further improved by pretrained word embeddings (Goodfellow, Bengio, and Courville 2016).

Second, existing approaches mostly disregard conversational dependencies between sentences inside a comment and across comments. For instance, consider the example in Figure 1 again. The Suggestions are answers to Questions asked in a previous comment. We therefore hypothesize that modeling inter-sentence relations is crucial for speech act recognition. We have tagged the sentences in Figure 1 with human annotations (HUMAN) and with the predictions of a local (LOCAL) classifier that considers word order for sentence representation but classifies each sentence separately or individually. Prediction errors are underlined and highlighted in red. Notice the first and second sentences of comment C4, which are mistakenly tagged as Statement and Response, respectively, by our best local classifier. We hypothesize that some of the errors made by the local classifier could be corrected by utilizing a global joint model that is trained to perform a collective classification, taking into account the conversational dependencies between sentences (e.g., adjacency relations like Question-Suggestion).

Figure 1: Example of a forum conversation (truncated) with HUMAN annotations and automatic predictions by a LOCAL classifier and a GLOBAL classifier for speech acts (e.g., Statement, Suggestion). The incorrect decisions are underlined and marked with red color.
However, unlike synchronous conversations (e.g., meeting, phone), modeling conversational dependencies between sentences in an asynchronous conversation is challenging, especially when the thread structure (e.g., "reply-to" links between comments) is missing, which is also our case. The conversational flow often lacks sequential dependencies in its temporal/chronological order. For example, if we arrange the sentences as they arrive in the conversation, it becomes hard to capture any dependency between the act types, because the two components of an adjacency pair can be far apart in the sequence. This leaves us with one open research question: how do we model the dependencies between sentences in a single comment and between sentences across different comments? In this article, we attempt to address this question by designing and experimenting with conditional structured models over arbitrary graph structures of the conversation.

Apart from the underlying discourse structure (sequence vs. graph), asynchronous conversations differ from synchronous conversations in style (spoken vs. written) and in vocabulary usage (meeting conversations on some focused topics vs. conversations on any topic of interest in a public forum). In this article, we propose to use domain adaptation methods in the neural network framework to model these differences in the sentence encoding process.

More concretely, we make the following contributions to speech act recognition for asynchronous conversations. First, we propose to use a recurrent neural network (RNN) with a long short-term memory (LSTM) hidden layer to compose phrases in a sentence and to represent the sentence using distributed condensed vectors (i.e., embeddings). These embeddings are trained directly on the speech act classification task. We experiment with both unidirectional and bidirectional RNNs. Second, we train (task-agnostic) word embeddings on a large conversational corpus, and use them to boost the performance of the LSTM-RNN model. Third, we propose conditional structured models in the form of pairwise conditional random fields (CRFs) (Murphy 2012) over arbitrary conversational structures. We experiment with different variations of this model to capture different types of interactions between sentences inside the comments and across the comments in a conversational thread. These models use the LSTM-encoded vectors as feature vectors for learning to classify the sentences in a conversation collectively. Furthermore, to address the problem of insufficient training data in the asynchronous domains, we propose to use the available labeled data from synchronous domains (e.g., meetings). To make the best use of this out-of-domain data, we adapt our LSTM-RNN encoder to learn task-specific sentence representations by modeling the differences in style and vocabulary usage between the two domains. We achieve this by using the recently proposed domain adversarial training methods for neural networks (Ganin et al. 2016). As a secondary contribution, we also present and release a forum data set annotated with a standard speech act tagset.
We train our models in various settings with synchronous and asynchronous corpora, and we evaluate on one synchronous meeting data set and three asynchronous data sets: two forum data sets and one e-mail data set. We also experimented with different pretrained word embeddings in the LSTM-RNN model. Our main findings are: (i) LSTM-RNNs provide better sentence representations than BOW and other unsupervised methods; (ii) bidirectional LSTM-RNNs, which encode a sentence using two vectors, provide better representations than the unidirectional ones; (iii) word embeddings pretrained on a large conversational corpus yield significant improvements; (iv) the globally normalized joint models (CRFs) improve over local models for certain graph structures; and (v) domain adversarial training improves the results by inducing domain-invariant features. The source code, the pretrained word embeddings, and the new data sets are available at https://ntunlpsg.github.io/demo/project/speech-act/.

After discussing related work in Section 2, we present our speech act recognition framework in Section 3. In Section 4, we present the data sets used in our experiments along with our newly created corpus. The experiments and analysis of results are presented in Section 5. Finally, we summarize our contributions with future directions in Section 6.

Related Work

Three lines of research are related to our work: (i) compositionality with LSTM-RNNs, (ii) conditional structured models, and (iii) speech act recognition in asynchronous conversations.

Relevant to our implementation, Kalchbrenner and Blunsom (2013) use a simple RNN to model sequential dependencies between act types for speech act recognition in phone conversations. They use a convolutional neural network (CNN) to compose sentence representations from word vectors. Lee and Dernoncourt (2016) use a similar model, but they also experiment with RNNs to compose sentence representations. Similarly, Khanpour, Guntakandla, and Nielsen (2016) use an LSTM-based RNN to compose sentence representations. Ji, Haffari, and Eisenstein (2016) propose a latent variable RNN that can jointly model sequences of words (i.e., language modeling) and discourse relations between adjacent sentences. The discourse relations are modeled with a latent variable that can be marginalized during testing. In one experiment, they use coherence relations from the Penn Discourse Treebank corpus as the discourse relations; in another setting, they use speech acts from the SWBD corpus. They show improvements on both language modeling and discourse relation prediction tasks. Shen and Lee (2016) use an attention-based LSTM-RNN model for speech act classification; the purpose of the attention is to focus on the relevant part of the input sentence. Tran, Zukerman, and Haffari (2017) use an online inference technique similar to the forward pass of the traditional forward-backward inference algorithm to improve upon the greedy decoding methods typically used in RNN-based sequence labeling models. Vinyals and Le (2015) and Serban et al. (2016) use an RNN-based encoder-decoder framework for conversation modeling. Vinyals and Le (2015) use a single RNN to encode all the previous utterances (i.e., by concatenating the tokens of previous utterances), whereas Serban et al. (2016) use a hierarchical encoder: one to encode the words in each utterance, and another to connect the encoded context vectors. Li et al. (2015) compare recurrent neural models with recursive (syntax-based) models for several NLP tasks and conclude that recurrent models perform on par with the recursive ones for most tasks, or even better; for example, recurrent models outperform recursive ones on sentence-level sentiment classification. This finding motivated us to use recurrent models rather than recursive ones.
Conditional Structured Models

There has been an explosion of interest in CRFs for solving structured output problems in NLP; see Smith (2011) for an overview. The most common type of CRF has a linear-chain structure, which has been used in sequence labeling tasks like part-of-speech (POS) tagging, chunking, named entity recognition, and many others (Sutton and McCallum 2012). Tree-structured CRFs have been used for parsing (e.g., Finkel, Kleeman, and Manning 2008).

The idea of combining neural networks with graphical models for speech act recognition goes back to Ries (1999), in which a feed-forward neural network is used to model the emission distribution of a supervised hidden Markov model (HMM). In this approach, each input sentence in a dialogue sequence is represented as a BOW vector, which is fed to the neural network; the corresponding sequence of speech acts is given by the hidden states of the HMM. Surendran and Levow (2006) first use support vector machines (SVMs) (i.e., a local classifier) to estimate the probability of different speech acts for each individual utterance by combining sparse textual features (i.e., bags of n-grams) and dense acoustic features. The estimated probabilities are then used in the Viterbi algorithm to find the most probable tag sequence for a conversation. Julia and Iftekharuddin (2008) use a fusion of SVM and HMM classifiers with textual and acoustic features to classify utterances into speech acts.

More recently, Lample et al. (2016) proposed an LSTM-CRF model for named entity recognition (NER), which first generates a bidirectional LSTM encoding for each input word, and then passes this representation to a CRF layer, whose task is to encourage global consistency of the NER tags. For each input word, the input to the LSTM consists of a concatenation of the corresponding word embedding and of character-level bi-LSTM embeddings for the current word. The whole network is trained end-to-end with backpropagation, which can be done effectively for chain-structured graphs. Ma and Hovy (2016) proposed a similar framework, but they replace the character-level bi-LSTM with a CNN; they evaluated their approach on POS and NER tagging tasks. Strubell et al. (2017) extended these models by substituting the word-level LSTM with an iterated dilated convolutional neural network, a variant of CNN for which the effective context window in the input can grow exponentially with the depth of the network, while having a modest number of parameters to estimate. Their approach permits fixed-depth convolutions to run in parallel across entire documents, thus making use of GPUs, which yields up to a 20-fold speed-up while retaining performance comparable to that of the LSTM-CRF. Speech act recognition in asynchronous conversation poses a different problem, where the challenge is to model arbitrary conversational structures. In this work, we propose a general class of models based on pairwise CRFs that work on arbitrary graph structures.
Speech Act Recognition in Asynchronous Conversation

Previous studies on speech act recognition in asynchronous conversation have used supervised, semi-supervised, and unsupervised methods. Cohen, Carvalho, and Mitchell (2004) first used the term e-mail speech act for classifying e-mails based on their acts (e.g., deliver, meeting). Their classifiers do not capture any contextual dependencies between the acts. To model contextual dependencies, Carvalho and Cohen (2005) use a collective classification approach with two different classifiers, one for content and one for context, in an iterative algorithm. The content classifier only looks at the content of the message, whereas the context classifier takes into account both the content of the message and the dialog act labels of its parent and children in the thread structure of the e-mail conversation. Our approach is similar in spirit to theirs, with three crucial differences: (i) our CRFs are globally normalized to surmount the label bias problem, while their classifiers are normalized locally; (ii) the graph structure of the conversation is given in their case, which is not the case with ours; and (iii) their approach works at the comment level, whereas we work at the sentence level.

Identification of adjacency pairs like question-answer pairs in e-mail discussions using supervised methods was investigated by Shrestha and McKeown (2004) and Ravi and Kim (2007). Ferschke, Gurevych, and Chebotar (2012) use speech acts to analyze the collaborative process of editing Wiki pages, and apply supervised models to identify the speech acts in Wikipedia Talk pages. Other sentence-level approaches use supervised classifiers and sequence taggers (Qadir and Riloff 2011; Tavafi et al. 2013; Oya and Carenini 2014). Vosoughi and Roy (2016) trained off-the-shelf classifiers (e.g., SVM, naive Bayes, logistic regression) with syntactic (e.g., punctuation, dependency relations, abbreviations) and semantic feature sets (e.g., opinion words, vulgar words, emoticons) to classify tweets into six Twitter-specific speech act categories.

Several semi-supervised methods have been proposed for speech act recognition in asynchronous conversation. Jeong, Lin, and Lee (2009) use semi-supervised boosting to tag the sentences in e-mail and forum discussions with speech acts by inducing knowledge from annotated spoken conversations (MRDA meeting and SWBD telephone conversations). Given a sentence represented as a set of trees (i.e., dependency tree, n-gram tree, and POS tag tree), the boosting algorithm iteratively learns the best feature set (i.e., sub-trees) that minimizes the errors on the training data. This approach does not consider the dependencies between the act types, something we successfully exploit in our work. Zhang, Gao, and Li (2012) also use semi-supervised methods for speech act recognition in Twitter. They use a transductive SVM and a graph-based label propagation framework to leverage the knowledge from abundant unlabeled data. In our work, we leverage labeled data from synchronous conversations while adapting our model to account for the shift in the data distributions of the two domains. In our unsupervised adaptation scenario, we do not use any labeled data from the target (asynchronous) domain, whereas in the semi-supervised scenario, we use some labeled data from the target domain.
Among methods that use unsupervised learning, Ritter, Cherry, and Dolan (2010) propose two HMM-based unsupervised conversational models for modeling speech acts in Twitter. In particular, they use a simple HMM and an HMM+Topic model to cluster the Twitter posts (not the sentences) into act types. Because they use a unigram language model to define the emission distribution, their simple HMM model tends to find some topical clusters in addition to the clusters that are based on speech acts. The HMM+Topic model tries to separate the act indicators from the topic words. By visualizing the types of conversations found by the two models, they show that the output of the HMM+Topic model is more interpretable than that of the HMM one; however, their classification accuracy is not empirically evaluated. Therefore, it is not clear whether these models are actually useful, and which of the two models is a better speech act tagger. Paul (2012) proposes a mixed membership Markov model to cluster sentences based on their speech acts, and shows that this model outperforms a simple HMM. Joty, Carenini, and Lin (2011) propose unsupervised models for speech act recognition in e-mail and forum conversations. They propose an HMM+Mix model to separate out the topic indicators. By training their model on a conversational structure, they demonstrate that conversational structure is crucial to learning a better speech act recognition model. In our work, we also demonstrate that conversational structure is important for modeling conversational dependencies; however, we do not use any given structure, but rather build models based on arbitrary graph structures.

Our Approach

Let s_m^n denote the m-th sentence of comment n in an asynchronous conversation; our goal is to find the corresponding speech act tag y_m^n ∈ T, where T is the set of available tags. Our approach works in two main steps, as outlined in Figure 2. First, we use an RNN to encode each sentence into a task-specific distributed representation (i.e., embedding) by composing the words sequentially. The RNN is trained to classify sentences into speech act types, and is adapted to give domain-invariant sentence features when trained to leverage additional data from synchronous domains (e.g., meetings). In the second step, a structured model takes the sentence embeddings as input, and defines a joint distribution over sentences to capture the conversational dependencies. In the following sections, we describe these steps in detail.

Learning Task-Specific Sentence Representation

One of our main hypotheses is that a sentence representation method should consider the word order of the sentence. To this end, we use an RNN to encode each sentence into a vector by processing its words sequentially, at each time step combining the current input with the previous hidden state. Figure 3(a) demonstrates the process for three sentences. Initially, we create an embedding matrix E ∈ R^{|V|×D}, where each row represents the distributed representation of dimension D for a word in a finite vocabulary V. We construct V from the training data after filtering out the infrequent words.

Figure 2: Our two-step inference framework for speech act recognition in asynchronous conversation. Each sentence in the conversation is first encoded into a task-specific representation by a recurrent neural network (RNN). The RNN is trained on the speech act classification task, and leverages large labeled data from synchronous domains (e.g., meetings) in an adversarial domain adaptation training method. A structured model (CRF) then takes the encoded sentence vectors as input, and performs joint prediction over all sentences in a conversation.
Figure 3: A bidirectional LSTM-RNN to encode each sentence s_m^n into a condensed vector z_m^n. The network is trained to classify each sentence into its speech act type.

Given an input sentence s = (w_1, ..., w_T) of length T, we first map each word w_t to its corresponding index in E (equivalently, in V). The first layer of our network is a look-up layer that transforms each of these indices into a distributed representation x_t ∈ R^D by looking up the embedding matrix E. We consider E a model parameter to be learned by backpropagation. We can initialize E randomly or with pretrained word vectors (to be described in Section 4.2). The output of the look-up layer is a matrix in R^{T×D}, which is fed to the recurrent layer.

The recurrent layer computes a compositional representation h_t at every time step t by performing nonlinear transformations of the current input x_t and the output of the previous time step h_{t-1}. We use LSTM blocks (Hochreiter and Schmidhuber 1997) in the recurrent layer. As shown in Figure 3(b), each LSTM block is composed of four elements: (i) a memory cell c (a neuron) with a self-connection, (ii) an input gate i to control the flow of input signal into the neuron, (iii) an output gate o to control the effect of the neuron's activation on other neurons, and (iv) a forget gate f to allow the neuron to adaptively reset its current state through the self-connection. The following sequence of equations describes how the memory blocks are updated at every time step t:

i_t = sigh(U_i h_{t-1} + V_i x_t)    (1)
f_t = sigh(U_f h_{t-1} + V_f x_t)    (2)
o_t = sigh(U_o h_{t-1} + V_o x_t)    (3)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(U_c h_{t-1} + V_c x_t)    (4)
h_t = o_t ⊙ tanh(c_t)    (5)

where the U and V are the weight matrices between two consecutive hidden layers, and between the input and the hidden layers, respectively. The symbols sigh and tanh denote hard sigmoid and hard tanh nonlinear functions, respectively, and the symbol ⊙ denotes the element-wise product of two vectors. LSTM-RNNs, by means of their specifically designed gates (as opposed to simple RNNs), are capable of capturing long-range dependencies. We can interpret h_t as an intermediate representation summarizing the past, that is, the sequence (w_1, w_2, ..., w_t). The output of the last time step, h_T = z, can thus be considered the representation of the entire sentence, which can be fed to the classification layer.

The classification layer uses a softmax for multi-class classification. Formally, the probability of the k-th of K speech act classes is

p(y = k | s, θ, W) = exp(W_k z) / Σ_{k'=1}^{K} exp(W_{k'} z)    (6)

where W are the classifier weights, and θ = {E, U, V} are the encoder parameters. We minimize the negative log likelihood of the gold labels; for one data point, it is

L_c(W, θ) = - Σ_{k=1}^{K} I(y = k) log p(y = k | s, θ, W)    (7)

where I(y = k) is an indicator function encoding the gold labels: I(y = k) = 1 if the gold label is k, and 0 otherwise. This loss function minimizes the cross-entropy between the predicted distribution and the target distribution (i.e., the gold labels).

Bidirectionality. The RNN just described encodes only information from the past. However, information from the future can also be crucial for recognizing speech acts. This is especially true for longer sentences, where a unidirectional LSTM can be limited in encoding the necessary information into a single vector. Bidirectional RNNs (Schuster and Paliwal 1997) capture dependencies from both directions, thus providing two different views of the same sentence. This amounts to having a backward counterpart for each of the equations (1) to (5). For classification, we use the concatenated vector z = [→z; ←z], where →z and ←z are the encoded vectors summarizing the past and the future, respectively.
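A minimal PyTorch sketch of the encoder-classifier just described follows; the class name and hyperparameter values are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Bi-LSTM sentence encoder with a softmax speech act classifier."""
    def __init__(self, vocab_size, emb_dim=300, hid_dim=100, num_classes=5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)      # matrix E
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True,
                            bidirectional=True)
        self.classifier = nn.Linear(2 * hid_dim, num_classes)   # weights W

    def forward(self, word_ids):
        # word_ids: (batch, T) indices into the vocabulary V
        x = self.embedding(word_ids)          # (batch, T, emb_dim)
        _, (h_n, _) = self.lstm(x)            # h_n: (2, batch, hid_dim)
        z = torch.cat([h_n[0], h_n[1]], -1)   # z = [forward z; backward z]
        return self.classifier(z)             # unnormalized class scores

# Training minimizes nn.CrossEntropyLoss() on the gold speech act tags,
# which corresponds to the negative log likelihood in Equation (7).
```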
Adapting LSTM-RNN with Adversarial Training

The LSTM-RNN described in the previous section can model long-distance dependencies between words, and, given enough training data, it should be able to compose a sentence, capturing its syntactic and semantic properties. However, when it comes to speech act recognition in asynchronous conversations, as mentioned before, not many large corpora annotated with a standard tagset are available. Because of its large number of parameters, the LSTM-RNN model usually overfits when trained on small data sets of asynchronous conversations (shown later in Section 5). One solution to this problem is to use data from synchronous domains, for which large annotated corpora are available (e.g., the MRDA meeting corpus). However, as we will see, although simple concatenation of data sets generally improves the performance of the LSTM-RNN model, it does not provide the optimal solution, because conversations in synchronous and asynchronous domains differ in modality (spoken vs. written) and in style. In other words, to get the best out of the available synchronous domain data, we need to adapt our model.

Our goal is to adapt the LSTM-RNN encoder so that it learns to encode sentence representations z (i.e., the features used for classification) that are not only discriminative for the act classification task, but also invariant across the domains. To this end, we propose to use the domain adversarial training of neural networks proposed recently by Ganin et al. (2016). Let D_S = {s_n, y_n}_{n=1}^{N} denote the set of N labeled training instances in the source domain (e.g., the MRDA meeting corpus). We consider two possible adaptation scenarios: (i) unsupervised adaptation, where we have access only to unlabeled instances D_T^u from the target (asynchronous) domain, and (ii) supervised adaptation, where, in addition to the unlabeled instances, we have access to some labeled instances D_T^l from the target domain. In the following, we describe our models for these two adaptation scenarios in turn.

Figure 4 shows our extended LSTM-RNN network trained for domain adaptation. The input sentence s is sampled either from a synchronous domain (e.g., meeting) or from an asynchronous domain (e.g., forum). As before, we pass the sentence through a look-up layer and a bidirectional recurrent layer to encode it into a distributed representation z = [→z; ←z], using our bidirectional LSTM-RNN encoder. For domain adaptation, our goal is to adapt the encoder to generate z such that it is not only informative for the target classification task (i.e., speech act recognition) but also invariant across domains. Upon achieving this, we can use the adapted LSTM-RNN encoder to encode a target sentence, and use the source classifier (the softmax layer) to classify the sentence into its corresponding speech act type. To this end, we add a domain discriminator, another neural network that takes z as input and tries to identify the domain of the input sentence (e.g., meeting vs. forum). Formally, the output of the domain discriminator is defined by a sigmoid function

d̂ = p(d = 1 | s) = sigm(w_d^T h_d)    (8)

where d ∈ {0, 1} denotes the domain of the sentence s (1 for meeting, 0 for forum), w_d are the final-layer weights of the discriminator, and h_d = g(U_d z) defines the hidden layer of the discriminator, with U_d being the layer weights and g(.) the ReLU activations (Nair and Hinton 2010).
We use the negative log-probability as the discrimination loss:

L_d(ω, θ) = - d log d̂ - (1 - d) log(1 - d̂)    (9)

The composite network (Figure 4) has three players: (i) the encoder (E), (ii) the classifier (C), and (iii) the discriminator (D). During training, the encoder and the classifier play a co-operative game, while the encoder and the discriminator play an adversarial game. The training objective of the composite model can be written as

L(W, θ, ω) = L_c(W, θ) - λ L_d(ω, θ)    (10)

where θ = {E, U, V} are the parameters of the LSTM-RNN encoder, W are the classifier weights, and ω = {U_d, w_d} are the parameters of the discriminator network. The hyper-parameter λ controls the relative strength of the two networks. In training, we look for parameter values that satisfy a min-max optimization criterion:

θ*, W* = argmin_{W, θ} max_{U_d, w_d} L(W, θ, ω)    (11)

which involves a maximization (gradient ascent) with respect to {U_d, w_d} and a minimization (gradient descent) with respect to θ and W. Maximizing L(W, θ, ω) with respect to {U_d, w_d} is equivalent to minimizing the discriminator loss L_d(ω, θ) in Equation (9), which aims to improve the discrimination accuracy. When put together, the updates of the shared encoder parameters θ = {E, U, V} for the two networks work adversarially with respect to each other. In our gradient descent training, the min-max optimization is achieved by reversing the gradients (Ganin et al. 2016) of the domain discrimination loss L_d(ω, θ) when they are backpropagated to the encoder. As shown in Figure 4, the gradient reversal is applied to the recurrent and embedding layers.

This optimization set-up is related to the training method of Generative Adversarial Networks (Goodfellow et al. 2014), where the goal is to build deep generative models that can generate realistic images. The discriminator in Generative Adversarial Networks tries to distinguish real images from model-generated images, and thus the training attempts to minimize the discrepancy between the two image distributions. When backpropagating to the generator network, they consider a slight variation of the reversed gradients with respect to the discriminator loss. In particular, if d̂_ω is the discriminator probability for real images, rather than reversing the gradients of - log(1 - d̂_ω), they backpropagate the gradients of - log d̂_ω to the generator. Reversing the gradient is just a different way of achieving the same goal.

Algorithm 1 presents pseudocode of our training algorithm based on stochastic gradient descent (SGD). We first initialize the model parameters by sampling from the Glorot-uniform distribution (Glorot and Bengio 2010). We then form minibatches of size b by randomly sampling b/2 labeled examples from D_S and b/2 unlabeled examples from D_T^u. For labeled instances, both the L_c(W, θ) and L_d(ω, θ) losses are active, while only L_d(ω, θ) is active for unlabeled instances.

Algorithm 1: Adversarial training with SGD. Initialize the parameters from the Glorot-uniform distribution; then repeat: form a minibatch of b/2 labeled source and b/2 unlabeled target examples; take a gradient step for the classification loss; take a gradient step for (2λ/b) ∇_{U_d, w_d} L_d(ω, θ) to train the discriminator; and take a gradient step for -(2λ/b) ∇_θ L_d(ω, θ), the reversed gradient that trains the encoder to fool the discriminator; until convergence.

The main challenge in adversarial training is to balance the two components (the task classifier and the discriminator) of the network. If one component becomes smarter, its loss to the shared layer becomes useless, and the training fails to converge (Arjovsky, Chintala, and Bottou 2017). Equivalently, if one component gets weaker, its loss overwhelms that of the other, causing training to fail. In our experiments, we found the domain discriminator to be the weaker one; initially, it often could not distinguish the domains. To balance the two components, we need the error signals from the discriminator to be fairly weak initially, with full power unleashed only as the classification errors start to dominate. We follow the weighting schedule proposed by Ganin et al. (2016, page 21), who initialize λ to 0 and then change it gradually to 1 as training progresses; that is, we start training the task classifier first, and gradually add the discriminator's loss.
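The gradient reversal can be implemented as an identity function whose backward pass negates and scales the gradient. The following is a minimal PyTorch sketch of this standard construction, not the authors' implementation:

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the encoder.
        return -ctx.lambd * grad_output, None

# Usage inside the composite model's forward pass:
#   z = encoder(sentence)                       # shared representation
#   act_logits = classifier(z)                  # co-operative branch
#   dom_logits = discriminator(GradientReversal.apply(z, lambd))
# Minimizing both losses then trains the discriminator to predict the domain,
# while the reversed gradients push the encoder toward domain-invariant z.
```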
3.2.2 Supervised Adaptation. It is quite straightforward to extend our adaptation method to a supervised setting, where we have access to some labeled instances in the target domain. Similar to the instances in the source domain (D_S), the labeled instances in the target domain (D_T^l) are used for act classification and domain discrimination. The total training loss in the supervised adaptation setting can be written as

L(W, θ, ω) = L_c^{D_S}(W, θ) + L_c^{D_T^l}(W, θ) - λ L_d^{D_T^l ∪ D_T^u}(ω, θ)    (12)

where the second term is the classification loss on the labeled target data set D_T^l, and the last term is the discrimination loss on both labeled and unlabeled data in the target domain. We modify the training algorithm accordingly: each minibatch in SGD training is formed by labeled instances from both D_S and D_T^l, and unlabeled instances from D_T^u.

Conditional Structured Model for Conversational Dependencies

Given the vector representations of the sentences in an asynchronous conversation, we explore two different approaches to learning classification functions. The first and traditional approach is to learn a local classifier, ignoring the structure in the output, and to use it to predict the label of each sentence separately. Indeed, this is the approach we took in the previous subsection when we fed the output layer of the LSTM-RNNs (Figures 3 and 4) with the sentence vectors. However, this approach does not model the conversational dependencies between sentences in a conversation (e.g., adjacency relations between question-answer and request-accept pairs). The second approach, which we adopt in this article, is to model the dependencies between the output variables (i.e., the speech act labels of the sentences) while learning the classification functions jointly, by optimizing a global performance criterion.

We represent each conversation by a graph G = (V, E), as shown in Figure 5. Each node i ∈ V is associated with an input vector z_i = z_m^n (extracted from the LSTM-RNN), representing the encoded features for the sentence s_m^n, and an output variable y_i ∈ {1, 2, ..., K}, representing the speech act type. Similarly, each edge (i, j) ∈ E is associated with an input feature vector φ(z_i, z_j), derived from the node-level features, and an output variable y_{i,j} ∈ {1, 2, ..., L}, representing the state transitions for the pair of nodes. We define the following conditional joint distribution:

p(y | z, v, w) = (1 / Z(z, v, w)) ∏_{i ∈ V} ψ_n(y_i | z, v) ∏_{(i,j) ∈ E} ψ_e(y_{i,j} | z, w)    (13)

where ψ_n and ψ_e are the node and edge factors, and Z(.) is the global normalization constant that ensures a valid probability distribution. We use a log-linear representation for the factors:

ψ_n(y_i | z, v) = exp( v^T φ(y_i, z) ),    ψ_e(y_{i,j} | z, w) = exp( w^T φ(y_{i,j}, z) )

where φ(.) is a feature vector derived from the inputs and the labels. This model is essentially a pairwise conditional random field (Murphy 2012). The global normalization allows CRFs to surmount the so-called label bias problem (Lafferty, McCallum, and Pereira 2001), allowing them to take long-range interactions into account.

Figure 5: Examples of conditional structured models for speech act recognition in asynchronous conversation. The sentence vectors (z_m^n) are extracted from the LSTM-RNN model.
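To make the factorization concrete, here is a small sketch of the unnormalized log-score of one labeling under Equation (13), assuming precomputed log-potentials; the array names and shapes are ours, for illustration only.

```python
import numpy as np

def joint_log_score(node_logpot, edge_logpot, edges, labels):
    """Unnormalized log-score of one labeling y under the pairwise CRF.
    node_logpot: (num_nodes, K) log node potentials;
    edge_logpot: (num_edges, K, K) log edge potentials;
    edges: list of (i, j) node index pairs; labels: one tag index per node."""
    score = sum(node_logpot[i, labels[i]] for i in range(len(labels)))
    score += sum(edge_logpot[e, labels[i], labels[j]]
                 for e, (i, j) in enumerate(edges))
    return score

# log Z(z) is the log-sum-exp of this score over all K^|V| labelings;
# sum-product loopy BP (next section) approximates the marginals instead.
```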
The log likelihood for one data point (z, y) (i.e., a conversation) is

f(v, w) = Σ_{i ∈ V} log ψ_n(y_i | z, v) + Σ_{(i,j) ∈ E} log ψ_e(y_{i,j} | z, w) - log Z(z, v, w)

This objective is convex, so we can use gradient-based methods to find the global optimum. The gradients have the form

∇_v f = Σ_{i ∈ V} [ φ(y_i, z) - E[φ(y_i, z)] ],    ∇_w f = Σ_{(i,j) ∈ E} [ φ(y_{i,j}, z) - E[φ(y_{i,j}, z)] ]

where the E[φ(.)] denote the expected feature vectors. In our case, the node or sentence features are the task-specific sentence embeddings extracted from the bidirectional LSTM-RNN model (possibly domain-adapted by adversarial training), and for edge features, we use the Hadamard product (i.e., element-wise product) of the two corresponding node vectors.

3.3.1 Training and Inference in CRFs. Traditionally, CRFs have been trained using offline methods like limited-memory BFGS (Murphy 2012). Online training of CRFs using SGD was proposed by Vishwanathan et al. (2006). Because RNNs are trained with online methods, to compare our two methods, we use an SGD-based algorithm to train our CRFs. Algorithm 2 gives the pseudocode of the training procedure.

Algorithm 2: Online learning algorithm for conditional random fields. 1. Initialize the model parameters v and w. 2. Repeat, for each thread G = (V, E): (a) compute the node and edge factors ψ_n(y_i | z, v) and ψ_e(y_{i,j} | z, w); (b) infer the node and edge marginals using sum-product loopy BP; then update v and w with a gradient step; until convergence.

We use Belief Propagation (BP) (Pearl 1988) for inference in our CRFs. BP is guaranteed to converge to an exact solution if the graph is a tree. However, exact inference is intractable for graphs with loops. Despite this, Pearl (1988) advocates for BP in loopy graphs as an approximation (see Murphy 2012, page 768); the algorithm is then called loopy BP. Although loopy BP gives approximate solutions for general graphs, it often works well in practice (Murphy, Weiss, and Jordan 1999), outperforming other methods such as mean field (Weiss 2001).

Variations of Graph Structures. One of the advantages of the pairwise CRF in Equation (13) is that we can define this model over arbitrary graph structures, which allows us to capture conversational dependencies at various levels. Modeling arbitrary graph structures can be crucial, especially in scenarios where the reply-to structure of the conversation is not known; by defining structured models over plausible graph structures, we can get a sense of the underlying conversational structure. We distinguish between two types of conversational dependencies: (i) intra-comment connections, which define how the speech acts of the sentences inside a comment are connected with each other; and (ii) across-comment connections, which define how the speech acts of the sentences across comments are connected in a conversation. Table 1 summarizes the connection types that we have explored in our CRF models. Each configuration of intra- and across-connections yields a different pairwise CRF. Figure 6 shows four such CRFs with three comments: C1 being the first comment, and Ci and Cj being two other comments in the conversation. Figure 6(a) shows the structure for the NO-NO configuration, where there are no links between nodes, either within or across comments; in this setting, the CRF model boils down to the MaxEnt model. Figure 6(b) shows the structure for the LC-LC configuration, where there are linear-chain relations between the nodes both within and across comments. The linear chain across comments refers to the structure where the last sentence of each comment is connected to the first sentence of the comment that comes next in the temporal order. Figure 6(c) shows the CRF for LC-LC1, in which the sentences inside a comment have linear-chain connections, and the last sentence of the first comment is connected to the first sentence of each of the other comments.
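As an illustration of these connection types, here is a short sketch (our own hypothetical helper, not the authors' code) that builds the edge list for the LC-LC and LC-LC1 configurations:

```python
def build_edges(comment_sizes, across="LC-LC"):
    """Edge list for a conversation graph. comment_sizes[n] is the number of
    sentences in comment n; nodes are numbered consecutively per comment."""
    edges, first, last = [], [], []
    node = 0
    for size in comment_sizes:
        first.append(node)
        # Intra-comment linear chain.
        for _ in range(size - 1):
            edges.append((node, node + 1))
            node += 1
        last.append(node)
        node += 1
    if across == "LC-LC":      # last sentence -> first sentence of next comment
        for n in range(len(comment_sizes) - 1):
            edges.append((last[n], first[n + 1]))
    elif across == "LC-LC1":   # last sentence of first comment -> first of others
        for n in range(1, len(comment_sizes)):
            edges.append((last[0], first[n]))
    return edges

# e.g., build_edges([3, 2]) -> [(0, 1), (1, 2), (3, 4), (2, 3)]
```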
Corpora

In this section, we describe the data sets used in our experiments. We use a number of labeled data sets to train and test our models, one of which we constructed in this work. Additionally, we use a large unlabeled conversational data set to train our (unsupervised) word embedding models.

Labeled Corpora

There exist large corpora of utterances annotated with speech acts in synchronous spoken domains, for example, Switchboard-DAMSL (SWBD) (Jurafsky, Shriberg, and Biasca 1997) and the Meeting Recorder Dialog Act corpus (MRDA) (Dhillon et al. 2004). However, the asynchronous domain lacks such large corpora. Some prior studies (Cohen, Carvalho, and Mitchell 2004; Feng et al. 2006; Ravi and Kim 2007; Bhatia, Biyani, and Mitra 2014) tackle the task at the comment level and use task-specific tagsets. In contrast, in this work we are interested in identifying speech acts at the sentence level, and in using a standard tagset like the ones defined in SWBD or MRDA.

Several studies attempt to solve the task at the sentence level. Jeong, Lin, and Lee (2009) created a data set of TripAdvisor (TA) forum conversations annotated with the standard 12 act types defined in MRDA. They also remapped the BC3 e-mail corpus (Ulrich, Murray, and Carenini 2008) according to this tagset. Table 2 shows the tags and their relative frequency in the two data sets. Subsequent studies (Joty, Carenini, and Lin 2011; Tavafi et al. 2013; Oya and Carenini 2014) use these data sets, and we also use them in our work. Table 3 shows some basic statistics about these data sets. On average, BC3 conversations are longer than those of TripAdvisor in terms of both the number of comments and the number of sentences. Since these data sets are relatively small, with sparse tag distributions, we group the 12 act types into 5 coarser classes in order to learn a reasonable classifier. Some prior work (Tavafi et al. 2013; Oya and Carenini 2014) has also taken this approach. More specifically, all the question types are grouped into one general class Question, all response types into Response, and appreciation and polite mechanisms into the Polite class. In addition to the asynchronous data sets (TA, BC3, and QC3, to be introduced subsequently), we also demonstrate the performance of our models on the synchronous MRDA meeting corpus.

We selected 50 conversations from a popular community question answering site named Qatar Living for our annotation. We used three conversations for our pilot study and the remaining 47 for the actual study. The resulting corpus, as shown in the last column of Table 3, contains on average 13.32 comments and 33.28 sentences per conversation, and 19.78 words per sentence. Two native speakers of English annotated each conversation using a Web-based annotation framework (Ulrich, Murray, and Carenini 2008). They were asked to annotate each sentence with the most appropriate speech act tag from the list of five speech act types. Because this task is not always obvious, we gave them detailed annotation guidelines with real examples. We use Cohen's κ to measure the agreement between the annotators. The third column in Table 5 presents the κ values for the act types, which vary from 0.43 (for Response) to 0.87 (for Question). In order to create a consolidated data set, we collected the disagreements between the two annotators and used a third annotator to resolve those cases.
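For reference, Cohen's κ corrects the observed agreement p_o for the agreement p_e expected by chance, κ = (p_o − p_e)/(1 − p_e). A minimal sketch, with made-up labels rather than the actual QC3 annotations:

    # Sketch: inter-annotator agreement with Cohen's kappa
    # (illustrative labels, not the actual annotations).
    from sklearn.metrics import cohen_kappa_score

    ann1 = ["Question", "Statement", "Polite", "Suggestion", "Response"]
    ann2 = ["Question", "Statement", "Polite", "Response", "Response"]

    print(round(cohen_kappa_score(ann1, ann2), 2))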
The fifth column in Table 4 presents the distribution of the speech acts in the resulting data set. As we can see, after Statement, Suggestion is the most frequent class, followed by the Question and the Polite classes.

Conversational Word Embeddings

One simple way to exploit unlabeled data for semi-supervised learning is to use word embeddings that are learned from large unlabeled data sets (Turian, Ratinov, and Bengio 2010). Word embeddings such as word2vec skip-gram (Mikolov, Yih, and Zweig 2013) and Glove vectors (Pennington, Socher, and Manning 2014) capture syntactic and semantic properties of words and their linguistic regularities in the vector space. The skip-gram model was trained on part of the Google News data set containing about 100 billion words, and it contains 300-dimensional vectors for 3 million unique words and phrases. Glove was trained on the combination of the Wikipedia 2014 and Gigaword 5 data sets containing 6B tokens and 400K unique (uncased) words; it comes with 50d, 100d, 200d, and 300d vectors. For our experiments, we use the 300d vectors. Many recent studies have shown that pretrained embeddings improve the performance on supervised tasks (Schnabel et al. 2015). In our work, we have used these generic off-the-shelf pretrained embeddings to boost the performance of our models.

In addition, we have also trained the word2vec skip-gram model and Glove on a large conversational corpus to obtain more relevant conversational word embeddings. Later in our experiments (Section 5) we will demonstrate that the conversational word embeddings are more effective than the generic ones because they are trained on similar data sets. To train the word embeddings, we collected conversations of both synchronous and asynchronous types. For the asynchronous type, we collected e-mail threads from W3C (w3c.org) and forum conversations from the TripAdvisor and QatarLiving sites. The raw data was too noisy to directly inform our models, as it contains system messages and signatures. We cleaned up the data with the intention of keeping only the headers, bodies, and quotations. For the synchronous type, we used the utterances from the SWBD and MRDA corpora. Table 6 shows some basic statistics about these (unlabeled) data sets. We trained our word vectors on the concatenated set of all data sets (i.e., 120M tokens). Note that the conversations in our labeled data sets were taken from these sources (e.g., BC3 from W3C, QC3 from QatarLiving, and TA from TripAdvisor).
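Training such conversational embeddings is straightforward with, for example, gensim. The snippet below is a sketch assuming gensim 4.x; the tiny corpus and file name are illustrative.

    # Sketch: 300-d skip-gram embeddings on a conversational corpus
    # (assumes gensim 4.x; corpus and paths are illustrative).
    from gensim.models import Word2Vec

    # Each item is one tokenized sentence from the cleaned-up corpus
    # (W3C, TripAdvisor, QatarLiving, SWBD, MRDA).
    corpus = [["how", "do", "i", "renew", "my", "visa", "?"],
              ["you", "need", "to", "visit", "the", "immigration", "office"]]

    model = Word2Vec(sentences=corpus,
                     vector_size=300,   # embedding dimension
                     sg=1,              # 1 = skip-gram, 0 = CBOW
                     window=5,
                     min_count=1,       # raise for a 120M-token corpus
                     workers=4)
    model.wv.save_word2vec_format("conv_word2vec.txt")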
Experiments

In this section, we present our experimental settings, results, and analysis. We start with an outline of the experiments.

Outline of Experiments

Our main objective is to evaluate our speech act recognizer on asynchronous conversations. For this, we evaluate our models on the forum and e-mail data sets introduced earlier in Section 4.1: (i) our newly created QC3 data set, (ii) the TripAdvisor (TA) data set from Jeong, Lin, and Lee (2009), and (iii) the BC3 e-mail corpus from Ulrich, Murray, and Carenini (2008). In addition, we validate our sentence encoding approach on the MRDA meeting corpus. Because of the noisy and informal nature of conversational texts, we performed a series of preprocessing steps before using the data for training or testing. We normalize all characters to their lowercased forms, truncate elongations to two characters, and spell out every digit and URL. We further tokenized the texts using the CMU TweetNLP tool (Gimpel et al. 2011).

For performance comparison, we use both accuracy and macro-averaged F1 score. Accuracy gives the overall performance of a classifier but can be biased toward the most populated classes, whereas macro-averaged F1 weights every class equally and is not influenced by class imbalance. Statistical significance tests are done using an approximate randomization test based on the accuracy; we used SIGF V.2 (Padó 2006) with 10,000 iterations.

In the following, we first demonstrate the effectiveness of our LSTM-RNN model for learning task-specific sentence encodings by training it on the task in three different settings: (i) training on in-domain data only, (ii) training on a simple concatenation of synchronous and asynchronous data, and (iii) training with adversarial domain adaptation. We also compare the effectiveness of different embedding types in these three training settings. The best task-specific embeddings are then extracted and fed into the CRF models to learn inter-sentence dependencies. In Section 5.3, we compare how our CRF models with different conversational graph structures perform. Table 7 gives an outline of our experimental roadmap.

Effectiveness of LSTM RNN

We first describe the experimental settings for our LSTM-RNN sentence encoding model: the data set splits, training settings, and compared baselines. Then we present our results for the three training scenarios outlined in Table 7.

Experimental Settings. We split each of our asynchronous corpora randomly into 70% of sentences for training, 10% for development, and 20% for testing. For MRDA, we use the same train:test:dev split as Jeong, Lin, and Lee (2009). Table 8 summarizes the resulting data sets. We compare the performance of our LSTM-RNN model with MaxEnt (ME) and a Multi-layer Perceptron (MLP) with one hidden layer. In one setting, we fed them the bag-of-words (BOW) representation of the sentence, namely, vectors containing binary values indicating the presence or absence of a word in the training set vocabulary. In another setting, we use a concatenation of the pretrained word embeddings as the sentence representation.

We train the models by optimizing the cross entropy in Equation (7) using the gradient-based learning algorithm ADAM (Kingma and Ba 2014). The learning rate and other parameters were set to the values suggested by the authors. To avoid overfitting, we use dropout (Srivastava et al. 2014) of hidden units and early stopping based on the loss on the development set. The maximum number of epochs was set to 50 for the RNNs, ME, and MLP. We experimented with dropout rates of {0.0, 0.2, 0.4}, minibatch sizes of {16, 32, 64}, and hidden layer sizes of {100, 150, 200} in the MLPs and LSTMs. The vocabulary V in the LSTMs was limited to the most frequent P% (P ∈ {85, 90, 95}) of words in the training corpus, where P is considered a hyperparameter. We initialize the word vectors in our model either by sampling randomly from the small uniform distribution U(−0.05, 0.05), or by using pretrained embeddings. The dimension for random initialization was set to 128. For pretrained embeddings, we experiment with the off-the-shelf embeddings that come with word2vec (Mikolov et al. 2013b) and Glove (Pennington, Socher, and Manning 2014), as well as with our conversational word embeddings (Section 4.2).
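A minimal PyTorch sketch of such a bidirectional LSTM encoder with pretrained-embedding initialization is given below; dimensions and names are illustrative, and the pooling (last time step) is a simplification of whatever the actual implementation uses.

    # Sketch: bidirectional LSTM sentence encoder initialized with
    # pretrained word vectors (illustrative; not the authors' code).
    import torch
    import torch.nn as nn

    class BLSTMEncoder(nn.Module):
        def __init__(self, pretrained, hidden=150, n_classes=5):
            super().__init__()
            # pretrained: FloatTensor [vocab_size, emb_dim]
            self.emb = nn.Embedding.from_pretrained(pretrained, freeze=False)
            self.lstm = nn.LSTM(pretrained.size(1), hidden,
                                batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, n_classes)

        def forward(self, token_ids):
            h, _ = self.lstm(self.emb(token_ids))
            z = h[:, -1, :]          # sentence vector (a simplification)
            return self.out(z), z    # class scores and embedding z

    vocab = torch.randn(1000, 300)   # stand-in for conversational Glove
    enc = BLSTMEncoder(vocab)
    scores, z = enc(torch.randint(0, 1000, (8, 20)))  # 8 sentences, 20 tokens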
We experimented with four variations of our LSTM-RNN model: (i) U-LSTM rand, a unidirectional RNN with random word vector initialization; (ii) U-LSTM pre, a unidirectional RNN initialized with pretrained word embeddings of type pre; (iii) B-LSTM rand, a bidirectional RNN with random initialization; and (iv) B-LSTM pre, a bidirectional RNN initialized with pretrained word vectors of type pre.

5.2.2 Results for In-Domain Training. Before reporting the performance of our sentence encoding model on asynchronous domains, we first evaluate it on the (synchronous) MRDA meeting corpus, where it can be compared to previous studies on a large data set.

Results on the MRDA Meeting Corpus. Table 9 presents the results on MRDA for in-domain training, in macro-averaged F1 and accuracy (the latter in parentheses); the best results are boldfaced, and accuracies significantly superior to the best baselines are marked with *. The first two rows show the best results reported so far on this data set, from Jeong, Lin, and Lee (2009), for classifying sentences into 12 speech act types; the first row shows the results of their model that uses only n-grams, and the second row shows the results using all of their features, including n-grams, speaker, part-of-speech, and dependency structure. Note that our LSTM RNNs and their n-gram model use the same word sequence information. The second group of results (third and fourth rows) is for the ME and MLP models with the BOW sentence representation. The third group shows the results for unidirectional LSTMs with random and pretrained off-the-shelf embeddings. The fourth group shows the corresponding results for bi-directional LSTMs. Finally, the fifth row presents the results for the bi-directional LSTM with our conversational embeddings. To compare our results with those of Jeong, Lin, and Lee (2009), we ran our models on the 12-class classification task in addition to our original 5-class task.

It can be observed that all of our LSTM-RNNs achieve state-of-the-art results, and the bi-directional ones with pretrained embeddings generally perform better than the others in terms of the F1 score. The best results are obtained with our conversational embeddings. Our best model, B-LSTM conv-glove (B-LSTM with conversational Glove embeddings), gives absolute improvements of about 5.0% and 3.5% in F1 compared to the n-gram and all-features models, respectively, of Jeong, Lin, and Lee (2009). This is remarkable because our LSTM-RNNs learn the sentence representation automatically from the word sequence and do not use any hand-engineered features.

Results on Asynchronous Data Sets. Table 10 presents the results for the asynchronous data sets (QC3, TA, and BC3). We show the results of our models based on 5-fold cross validation in addition to the random (20%) test set in Table 8. The 5-fold setting allows us to gauge the more generic performance of the models on a particular data set. For simplicity, we only report the results for Glove embeddings, which were found to be superior to the word2vec embeddings. We can observe trends similar to those for MRDA: (i) bidirectional LSTMs outperform their unidirectional counterparts, (ii) pretrained Glove vectors provide better results than the randomly initialized ones, and (iii) conversational word embeddings give the best results among the embedding types.
When we compare these results with those of the baselines (ME bow and MLP bow), we see that our method outperforms them on QC3 and TA (by 3.8% to 8.0%), but fails to do so on BC3. This is due to the small size of that data set, which affects deep neural methods like LSTM-RNNs, which usually require a large amount of labeled data to learn an effective compositional model. In the following, we show the effect of adding more labeled data from the MRDA meeting corpus.

Adding Meeting Data. To validate our claim that LSTM-RNNs can learn a more effective model for our task when they are provided with enough training data, we create a concatenated training setting by merging the training and development sets of the four corpora in Table 8 (see the Train and Dev. columns in the last row); the test set for each data set remains the same. We will refer to this train-test setting as CONCAT. Table 11 shows the results of the baseline and the B-LSTM models on the three test sets for this concatenated training setting, without any explicit domain adaptation, in macro-averaged F1 and accuracy (in parentheses); best results are boldfaced, and accuracies significantly higher than the best baseline MLP bow are marked with *. We notice that our B-LSTM models with pretrained embeddings outperform ME bow and MLP bow significantly. Again, the conversational Glove embeddings prove to be the best word vectors, giving the best results across the data sets. Our best model gives absolute improvements of 2% to 12% in F1 across the data sets over the best baselines. When we compare these results with those in Table 10, we notice that with more heterogeneous data sets, B-LSTM, by virtue of its distributed and condensed representation, generalizes well across different domains. In contrast, ME and MLP, because of their BOW representation, suffer from the data diversity of the different domains. These results also confirm that B-LSTM gives a better sentence representation than BOW when it is given enough training data.

Comparison with Other Classifiers and Sentence Encoders. Now we compare our best B-LSTM model (i.e., B-LSTM conv-glove) with other classifiers and sentence encoders in the concatenated (CONCAT) training setting. The models that we compare with are:

(a) ME conv-glove: We represent each sentence as a concatenated vector of its word vectors, and train a MaxEnt (ME) classifier based on this representation. For the word vectors, we use our best-performing conversational Glove vectors, as in our B-LSTM conv-glove model. We set a maximum sentence length of 100 words and used zero-padding for shorter sentences. This model has a total of 100 (input words) × 300 (embedding dimensions) × 5 (class labels) = 150,000 trainable parameters.

(b) MLP conv-glove: We represent each sentence similarly as above, and train a one-hidden-layer Multi-layer Perceptron (MLP) based on this representation. The hidden layer has 1,000 units, which was determined based on the performance on the development set. This model has a total of (100 × 300) × 1,000 + 1,000 × 5 ≈ 30 million trainable parameters.

(c) ME conv-glove-averaging: We represent each sentence as the mean vector of its word vectors, and train a MaxEnt classifier using this representation. This model has a total of 300 × 5 = 1,500 trainable parameters.

(d) SVM conv-glove-averaging: We train an SVM classifier based on the mean vector. In our training, we use a linear kernel with the default C value of 1.0.

(e) ME skip-thought: We encode each sentence with the skip-thought encoder of Kiros et al. (2015). The skip-thought model uses an encoder-decoder framework to learn the sentence representation in a task-agnostic (unsupervised) way. It encodes each sentence with a GRU-RNN (Cho et al. 2014), and uses the encoded vector to decode the words of the neighboring sentences using another GRU-based RNN as a language model. The model was originally trained on the BookCorpus with a vocabulary size of 20K words. It then uses the CBOW word2vec vectors (Mikolov et al. 2013a) to expand the vocabulary to 930,911 words. Following the recommendation of the authors, we use the combine-skip model, which concatenates the vectors encoded by a uni-directional encoder (uni-skip) and a bi-directional encoder (bi-skip). The resulting vectors have 4,800 dimensions: the first 2,400 dimensions are the uni-skip vector, and the last 2,400 dimensions are the bi-skip vector. We learn a ME classifier based on this representation. This model has a total of 4,800 × 5 = 24,000 parameters.
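The difference between the concatenation and averaging baselines, (a)-(b) versus (c)-(d), comes down to how word vectors are pooled into a sentence vector; a sketch with illustrative dimensions:

    # Sketch: the two non-compositional sentence representations
    # used by the baselines (illustrative dimensions).
    import numpy as np

    EMB_DIM, MAX_LEN = 300, 100

    def concat_rep(word_vectors):
        """(a)/(b): concatenate word vectors, zero-padding to 100 words."""
        padded = np.zeros((MAX_LEN, EMB_DIM))
        n = min(len(word_vectors), MAX_LEN)
        padded[:n] = word_vectors[:n]
        return padded.reshape(-1)                 # 30,000-d vector

    def mean_rep(word_vectors):
        """(c)/(d): mean of the word vectors."""
        return np.asarray(word_vectors).mean(axis=0)   # 300-d vector

    sent = np.random.randn(12, EMB_DIM)           # a 12-word sentence
    print(concat_rep(sent).shape, mean_rep(sent).shape)  # (30000,) (300,)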
Notice that all these models have a large number of parameters with which to learn an effective classification model for our task using the sentence representation as input features. Similar to our B-LSTM, the B-GRU and the skip-thought models are compositional, that is, they compose the sentence representation from the representations of its words using the sentence structure. Although the 4,800-dimensional sentence representation for skip-thought is not learned on the task, the associated weight parameters in the ME skip-thought model are trained on the task.

Table 12 presents the results, comparing the different sentence encoders on the concatenated (CONCAT) data set; the best results are boldfaced, and accuracies significantly higher than ME skip-thought are marked with *. It can be observed that, in general, the compositional methods perform better than the non-compositional ones (e.g., averaging, concatenation), and when the compositional method is trained on the task, we get the best performance on two out of three data sets. In particular, our B-LSTM conv-glove gets the best results on QC3 and TA, outperforming B-GRU conv-glove by a slight margin in F1. The ME skip-thought performs best on BC3, and close to the best results on TA. This is not so surprising, because the skip-thought model encodes a sentence like a neural conversation model (Vinyals and Le 2015), and it has been shown that such models capture information relevant to speech acts (Ritter, Cherry, and Dolan 2010).

To further analyze the cases where B-LSTM conv-glove makes a difference, Figure 7 shows the confusion matrices for (a) MLP conv-glove and (b) B-LSTM conv-glove on the concatenated test sets of QC3, TA, and BC3 (in the figure, P stands for Polite, Q for Question, R for Response, ST for Statement, and SU for Suggestion). In general, our classifiers get confused between Response and Statement, and between Suggestion and Statement, the most. We made a similar observation in the human annotations, where the annotators had difficulties with these three acts. It is noticeable that B-LSTM conv-glove is less affected by class imbalance, and it detects the Suggestion and Polite acts much more accurately than MLP conv-glove. This indicates that LSTM-RNNs can model the grammar of the sentence when composing the words into phrases and sentences sequentially.
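Confusion matrices like those in Figure 7 can be produced directly with scikit-learn; the predictions below are made up for illustration.

    # Sketch: confusion matrix over the five speech act classes
    # (predictions are illustrative).
    from sklearn.metrics import confusion_matrix

    labels = ["P", "Q", "R", "ST", "SU"]   # Polite, Question, Response,
                                           # Statement, Suggestion
    y_true = ["Q", "ST", "R", "SU", "ST"]
    y_pred = ["Q", "ST", "ST", "ST", "ST"]

    print(confusion_matrix(y_true, y_pred, labels=labels))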
Effectiveness of Domain Adaptation. We have seen that semi-supervised learning, in the form of word embeddings learned from a large unlabeled conversational corpus, benefits our B-LSTM model. In the previous section, we witnessed further performance gains from exploiting more labeled data from the synchronous domain (MRDA). However, these methods make the simplifying assumption that the conversational data come from the same distribution. As mentioned before, the conversations in QC3, TA, or BC3 are quite different from MRDA meeting conversations in terms of style (spoken vs. written) and vocabulary usage. We believe that the results can be improved further by modeling the shift of domains (or distributions) explicitly.

In Section 3.2, we described two adaptation scenarios: (i) unsupervised, where no annotated data is available in the target domain, and (ii) supervised, where some annotated data is available in the target domain. We use all the available labels in the CONCAT data set for our supervised training. This makes the adaptation results comparable with our pre-adaptation results reported earlier in Table 12.

Table 13 presents the results for the adapted B-LSTM conv-glove model under the above training conditions (last two rows). For comparison, we also show the results for two baselines: (i) a transfer B-LSTM conv-glove model in the first row, trained only on the MRDA (source domain) data, and (ii) a merge B-LSTM conv-glove model in the second row, trained on the concatenation of MRDA and the target domain data (QC3, TA, or BC3). Recall that the merge model is the one that gave the best results so far (i.e., the last row of Table 11).

We can observe that, without any labeled data in the target domain, the adapted B-LSTM conv-glove in the third row performs worse than the transfer baseline on QC3 and TA. In this case, because the out-of-domain labeled data set (MRDA) is much larger, it overwhelms the model, inducing features that are not relevant for the task in the target domain. However, when we provide the model with some labeled in-domain examples in the supervised adaptation setting (last row), we observe gains over the merge model (second row) on all three data sets. Remarkably, the absolute improvements in F1 for BC3 and TA are more than 11% and 3%, respectively.

To analyze the performance of our adapted model further, Figure 8 presents the F1 scores with varying amounts of labeled data in the target domain. It can be noticed that, for all three data sets, the largest improvements come from the first 25% of the labeled data. The gains from the second quartile are also relatively higher than those from the last two quartiles for TA and QC3. This demonstrates the effectiveness of our adversarial domain adaptation method. In the future, it will be interesting to compare adversarial training with other domain adaptation methods.

Conversation-Level Data Set for CRFs. To demonstrate the effectiveness of CRFs in capturing inter-sentence dependencies in an asynchronous conversation, we create another training setting called CONV-LEVEL (conversation-level), in which the training instances are entire conversations and the random splits are done at the conversation level (as opposed to the sentence level) for the asynchronous corpora. This is required because the CRFs perform joint learning and inference over an entire conversation.
Table 14 shows the resulting data sets that we use to train and evaluate our CRFs. We have 229 conversations for training and 27 conversations for development. The test sets contain 5, 20, and 5 conversations for QC3, TA, and BC3, respectively.

Baselines and CRF Variants. We use the following three models as baselines: (a) ME bow, a MaxEnt model with the BOW representation; (b) the adapted B-LSTM conv-glove (semi-supervised), which performs adversarial semi-supervised domain adaptation using labeled sentences from the MRDA and CONV-LEVEL training sets (note that this is our best system so far; see Table 13); and (c) ME adapt-lstm, a MaxEnt model learned from the sentence embeddings extracted from the adapted B-LSTM conv-glove (semi-supervised), that is, the sentence embeddings are used as feature vectors. We experiment with the CRF variants shown in Table 1. Similar to ME adapt-lstm, the CRFs are trained on the CONV-LEVEL training set using the sentence embeddings extracted by applying the adapted B-LSTM conv-glove (semi-supervised) model. The CRF models are therefore the structured versions of the ME adapt-lstm baseline.

Results and Discussion. Table 15 shows our results on the CONV-LEVEL data sets. We can notice that the CRFs generally outperform the MEs in accuracy, and for some CRF variants we get better results in both macro F1 and accuracy. This indicates that there are conversational dependencies between the sentences in a conversation.

If we compare the CRF variants, we can see that the model that does not consider any link across comments (CRF LC-NO) performs the worst. A simple linear chain connection between sentences in their temporal order (CRF LC-LC) does not improve much. This indicates that the linear chain CRF (Lafferty, McCallum, and Pereira 2001) is not the most appropriate model for capturing conversational dependencies in asynchronous conversations. The CRF LC-LC1 is one of the well-performing models and performs significantly better than the adapted B-LSTM conv-glove. This model considers linear chain connections between the sentences within a comment and links only to the first comment. When we change this model to consider relations with every sentence in the first comment (CRF LC-FC1), the performance improves further, giving the best results on two of the three data sets. This indicates that there are strong dependencies between the sentences of the initial comment and the sentences of the rest of the comments, and that these dependencies are better captured if the relations between them are explicitly considered. The CRF FC-FC also yields results as good as CRF LC-FC1. This could be attributed to the robustness of the fully connected CRF, which learns from all possible pairwise relations.

Another interesting observation is that no single graph structure performs best across all conversation types. For example, CRF LC-FC1 gives the highest F1 scores for QC3 and BC3, whereas CRF FC-FC gives the highest results for TA. This shows the varied and complicated ways in which participants communicate with each other in these conversations. One interesting direction for future work would be to learn the underlying conversational structure automatically. However, we believe that learning an effective model of this kind would require more labeled data.

To see some real examples in which the CRF, by means of its global learning and inference, makes a difference, let us consider the example in Figure 1 again.
We notice that the two sentences in comment C4 were mistakenly identified as Statement and Response, respectively, by the local B-LSTM conv-glove model. However, by considering these two sentences together with the others in the conversation, the global CRF LC-LC1, CRF LC-FC1, and CRF FC-FC models were able to correct them (see GLOBAL). CRF LC-LC could correctly identify the first one as Question.

Conclusions and Future Directions

We have presented a novel two-step framework for speech act recognition in asynchronous conversations. An LSTM-RNN first composes the sentences of a conversation into vector representations by considering the word order within a sentence. Then a pairwise CRF jointly models the inter-sentence dependencies in the conversation. In order to mitigate the problem of limited annotated data in the asynchronous domains, we further adapt the LSTM-RNN to learn from synchronous meeting conversations using adversarial training of neural networks. We experimented with different LSTM variants (uni- vs. bi-directional, random vs. pretrained initialization) and with different CRF variants, depending on the underlying graph structure. We trained word2vec and Glove conversational word embeddings on a large conversational corpus. We trained our models in many different settings using synchronous and asynchronous corpora, including in-domain training, concatenated training, unsupervised adaptation, supervised adaptation, and conversation-level joint CRF training. We evaluated our approach on a synchronous data set (meetings) and three asynchronous data sets (two forum data sets and one e-mail data set), one of which is presented in this work.

Our experiments show that conversational word embeddings, especially conversational Glove, are quite beneficial for learning better sentence representations for the speech act classification task through a bidirectional LSTM. This is especially true when the amount of labeled data in the target domain is limited. Adding more labeled data from synchronous domains yields improvements for bi-LSTMs, but even larger gains can be achieved by domain adaptation with adversarial training. Further experiments with CRFs show that global joint models improve over local models, provided that the models consider the right graph structure. In particular, the LC-FC1 and FC-FC graph structures were among the best performers.

This work leads us to a number of future directions. First, we would like to combine CRFs with LSTM-RNNs to perform the two steps jointly, so that the LSTM-RNNs can learn the embeddings directly using the global thread-level feedback. This would require the backpropagation algorithm to take error signals from the loopy BP inference. Second, we would like to apply our models to conversations where the graph structure is extractable using metadata or other clues, for example, the fragment quotation graphs for e-mail threads (Carenini, Ng, and Zhou 2008). One interesting line of future work would be to jointly model the conversational structure (e.g., the reply-to structure) and the speech acts, so that the two tasks can inform each other. In another direction, we would like to evaluate our speech act recognition model on extrinsic tasks. In a separate thread of work, we are developing coherence models for asynchronous conversations (Nguyen and Joty 2017; Joty, Mohiuddin, and Tien Nguyen 2018).
Such coherence models could be useful for a number of downstream tasks, including next utterance (or comment) ranking, conversation generation, and thread reconstruction (Nguyen et al. 2017). We are now looking into whether speech act information can help us build better coherence models for asynchronous conversations. We also plan to evaluate the utility of speech acts in downstream NLP tasks involving asynchronous conversations, such as next utterance ranking (Lowe et al. 2015), conversation generation (Ritter, Cherry, and Dolan 2010), and summarization (Murray, Carenini, and Ng 2010). Finally, we hope that the new corpus, the conversational word embeddings, and the source code that we have made publicly available in this work will help other researchers extend our work and apply speech act models to their own NLP tasks.

Bibliographic Note

Portions of this work were previously published in the ACL-2016 conference proceedings (Joty and Hoque 2016). This article significantly extends the published work in several ways, most notably: (i) we train new word2vec and Glove word embeddings on a large conversational corpus and show their effectiveness by comparing them with off-the-shelf word embeddings (Section 4.2 and the results section), (ii) we extend the LSTM-RNN for domain adaptation using adversarial training of neural networks (Section 3.2), (iii) we evaluate the domain-adapted LSTM-RNN model on meeting and forum data sets (Section 5.2), and (iv) we train and evaluate CRFs based on sentence embeddings extracted from the adapted LSTM-RNN (Section 5.3). Besides these extensions, a significant portion of the article was rewritten to adapt it to a journal-style publication.
Modelling of sedimentation processes inside Roseires Reservoir (Sudan)

Roseires Reservoir, located on the Blue Nile River, in Sudan, is the first trap to the sediments coming from the vast upper river catchment, in Ethiopia, which suffers from high erosion and desertification problems. The reservoir has already lost more than one third of its storage capacity due to sedimentation in the last four decades. Appropriate management of the eroded soils in the upper basin could mitigate this problem. In order to do that, the areas providing the highest sediment volumes to the river have to be identified, since they should have priority with respect to the application of erosion control practices. This requires studying the sedimentation record inside Roseires Reservoir, with the aim of identifying when and how much sediment is deposited, as well as its source. This paper deals with the identification of deposition time and soil stratification inside the reservoir, based on historical bathymetric data, numerical modelling and newly acquired soil data. The remoteness of the study area and the extreme climate make coring campaigns expensive and difficult. For this reason, these activities need to be optimised and coring locations selected beforehand. This was achieved here by means of numerical modelling.

Introduction

The construction of dams and reservoirs represents a great achievement for the management of water resources, but at the same time it creates a relevant disturbance to river ecology and morphology. Nevertheless, reservoirs are built for a number of reasons, such as hydropower, irrigation, drinking water supply and flood mitigation (Rãdoane and Rãdoane, 2005). Unfortunately, reservoirs have a limited life, mainly due to sedimentation (White, 2001). Sedimentation not only decreases the lifespan of the reservoir but also changes the river morphology in the downstream and upstream parts of the river. In addition, the settling of river sediment in reservoirs can have important implications for the riverine ecosystem and for coastal development downstream of the dam (de Vente et al., 2005; WCD, 2000). Therefore, it is vital to study and predict the sediment yield at the basin scale and to understand which factors determine the sedimentation rate of reservoirs. This knowledge allows for effective measures to be taken against reservoir sedimentation, water shortage, and erosion of coasts and river banks. Sedimentation processes in reservoirs have been addressed by many authors, e.g. Duquennois (1956), Borland (1971), Bowonder et al. (1985), Mahmood (1987), Annandale (1987), Hotchkiss and Parker (1991), Fan and Morris (1992), Sloff (1997) and White (2001), among many others. In recent years, many experimental and numerical research studies in this field have been pursued all over the world (e.g. Toniolo and Parker, 2003; Hu et al., 2010).

Roseires Reservoir is located on the Blue Nile River, in Sudan (Fig. 1a).
It is the first trap to the sediments coming from the upper catchment in Ethiopia, which suffers from high erosion and desertification problems. The reservoir has already lost more than one-third of its storage capacity due to sedimentation in the last four decades. This is a large economic loss to Sudan, in addition to the high maintenance costs of sediment clearance in front of the turbines to facilitate hydropower production. This problem could be mitigated by appropriate management of the upper catchment, where the sediment comes from. Given the vastness and remoteness of the region involved, and the extent of the soil erosion problem, the areas providing the highest sediment volumes to the river should be identified first. These areas should then have priority with respect to the application of erosion control practices.

Sedimentation inside reservoirs is influenced by many factors, but primarily it depends on the sediment fall velocity, the water flow through the reservoir and the reservoir operation (Gottschalk, 1964). Due to the complexity and interaction of many parameters, there are no direct analytical solutions to predict reservoir sedimentation rates. Most of the available methods are therefore either empirical models, derived from historical data and information from other reservoirs, or mathematical or physical models (Hama, 2006). The empirical methods are used mostly during the design phase of the reservoir, such as the area reduction method (Annandale, 1987; Borland and Miller, 1958). Other empirical methods include the Brune method (1953), the Churchill method (1948) and the Brown method (1953) (Cristofano, 1953; Bashar et al., 2010; Garg and Jothiprakash, 2008). Numerical models are increasingly used to study the sedimentation process inside reservoirs (e.g. Toniolo and Parker, 2003; Hu et al., 2010).

Unfortunately, to identify the causes of the observed high sedimentation rates inside Roseires Reservoir, empirical methods are not effective. For this, it is necessary to locate the area of origin of the sediment, from the mineralogical analysis of the deposited sediment and from the assessment of the time elapsed between the moment the sediment is eroded and the moment it is deposited. This is essential information for identifying the cause of soil erosion, for instance by comparing land use maps, for which it is necessary to consider the right timing. All this requires expensive coring campaigns in the upper catchment and inside the reservoir, which need to be optimised. In particular, sediment deposition can only be studied at locations inside the reservoir where neither soil erosion nor the movement of bars, reworking the bed sediment, occurs. Moreover, at the coring locations, the type of soil stratification should allow for different depositional years to be identified.
In long and narrow reservoirs, like Roseires, the bathymetric profile commonly associated with delta deposits may be absent, but an area characterised by a rapid shift in grain size may mark the downstream limit of coarse material deposition (Morris and Fan, 2009; Vanoni, 2006; Chang and Hill, 1977; Chang, 1982; Hama, 2006). As explained by Morris and Fan (2009), most coarse sediment is delivered to reservoirs by high water flows. The irregular nature of sediment delivery may be recorded in reservoir deposits as alternating layers of coarse and fine sediment. Typically, lenses of sand delivered during high inflow are embedded between layers of fines deposited during periods of low flow. Seasonal discharges may produce regular sequences of strata. In continuously depositional areas, these sediment sequences may be interpreted to reconstruct the depositional history of the reservoir.

In Roseires Reservoir, the use of the same operation rules (release of turbid water and storage of clean water) since the first impounding in 1966, together with the presence of a clear high-flow season and of both sand and fine sediment in the Blue Nile River, may indeed result in clear soil layering in certain parts of the reservoir. The idea is that sand settles inside the reservoir during the high-flow season, when turbid water is released, while fine sediment is deposited mainly during the low-flow season, when all gates are set at the highest level.

To substantiate this, we carried out a preliminary coring campaign inside the reservoir, in an area becoming dry at the end of the low-flow season, close to the dam. The grain size distribution of the sediment, determined at intervals of 30 cm from the bed surface in a 3.9 m deep trench, showed that soil layering could be recognised from the analysis of the vertical variation of the sediment D90. The coring location (the coordinate system is UTM, WGS84, Zone 36N), however, appeared subject to erosion, which made it impossible to use those data for the study of historical deposition. Moreover, at that specific location the deposited sediment is mainly sand, most probably settling only during the high-flow seasons, which does not allow for yearly sequences of deposition to be recognised. This preliminary field campaign showed the necessity of returning to the field, but this time only after having identified the best coring locations. This would maximise efficiency and minimise costs, considering that the area is difficult to access and that coring can be carried out only at the end of the low-flow season, when large parts of the reservoir bed become dry.

In order to identify the most promising coring locations inside Roseires Reservoir, we simulated the evolution of the bed topography of the reservoir since 1985 with a physics-based quasi-three-dimensional morphodynamic model, using the time series of the discharge entering the reservoir. The model computed and recorded the fate of incoming sand and silt, the time variations of the bed level, and the horizontal and vertical variations of soil composition. With this tool, we explored the trends of sedimentation of sand and silt over time inside the entire reservoir. The model allowed us to recognise the presence of soil stratification in certain areas and the locations where soil erosion is absent, which resulted in the identification of two promising coring locations.
We then returned to the field to study the reservoir soil at the identified locations. This allowed us to study the sediment depositing in the reservoir and gave a unique opportunity to validate the model.

The Blue Nile River

The Blue Nile is the main tributary of the Nile River. It originates at the outlet of Lake Tana and flows for nearly 900 km through Ethiopia before reaching the Sudanese border. In Ethiopia, the Blue Nile has 14 major tributaries (Fig. 1a), which contribute the majority of the estimated annual flow of the river, which is 46.2 × 10⁹ m³ yr⁻¹. Here, the river falls from 1800 m above sea level at Lake Tana to about 490 m above sea level at the Sudanese border, which gives an average longitudinal slope of 1.5 m km⁻¹ (Fig. 1b). The upper basin suffers from severe erosion due to intensive land use and upper catchment desertification and delivers huge quantities of sediment to the river system (Ayalew and Yamagishi, 2004). After leaving Ethiopia, the Blue Nile runs through Sudan for about 735 km to Khartoum, where it joins the White Nile to form the Nile River. Presently, the Blue Nile waters encounter two dams, the Roseires Dam and the Sennar Dam (Fig. 1a), both in Sudan, but a new dam, the Grand Renaissance Dam, is currently under construction in Ethiopia, and other dams are planned.

The slope from the Ethiopian border to Khartoum is 1 order of magnitude milder than on the Ethiopian side, only 15 cm km⁻¹ (Abdelsalam and Ismail, 2008). The Dinder and Rahad rivers join the Blue Nile downstream of Sennar Dam in Sudan, contributing together an average annual flow of 4 × 10⁹ m³ yr⁻¹.

The flow of the Blue Nile reflects the seasonality of rainfall over the Ethiopian highlands, which can be divided into the wet season, from July to October, with maximum flow in August-September, and the dry season, from November to June. Consequently, the annual Blue Nile hydrograph at the Ethiopia-Sudan border has a bell-shaped pattern. The daily flow of the river fluctuates between 10 × 10⁶ m³ in April and 500 × 10⁶ m³ in August, a ratio of 1 : 50 (Awulachew et al., 2008).

Roseires Reservoir

Roseires Reservoir is located in Sudan, 550 km southeast of Khartoum, near the border with Ethiopia (Fig. 1a). It is one of the oldest reservoirs in the basin, since the dam was finalised in 1966 (Fig. 2). This reservoir plays an important role in the economy of Sudan, since it provides hydropower, water for irrigation, and flood control. The maximum length of the reservoir is about 80 km and the wet surface area is up to 290 km². The total storage capacity was 3 × 10⁹ m³ before the first impounding of the reservoir in 1966 (Bashar and Eltayeb, 2010), but in the meantime the reservoir has lost 40 % of this storage capacity due to sediment deposition (Ali, 2014). To limit sedimentation (Sloff, 1997), the gates are kept at the minimum level (open) in the wet season and are raised to the maximum level 1 month before the end of the high-flow season, and kept so during the dry season (Hussein et al., 2005). During the gate-opening period, the water level drops by approximately 13 m and the area surrounding the channel becomes dry. Dredging is carried out every year in front of the power intake. Dredged sediment is dumped in front of the deep sluice gates to be transported away during the flood season, when all the dam gates are open. The dredging is generally carried out before the flood season (Bashar and Eltayeb, 2010).
To increase the storage capacity of the reservoir, the dam was recently heightened by 10 m (in 2013). Before heightening, the full supply level was 481 m above sea level (irrigation datum) and the minimum supply level for power generation during the flood season was 467.6 m above sea level (irrigation datum) (Hussein et al., 2005). The bed of the reservoir is cut through by a 10 m deep channel.

Reservoir sedimentation from bathymetric data

The reservoir was surveyed in 1976, 1985 (DEMAS, 1985), 1992 (Gismalla, 1993), 2007 (Abd Alla and Elnoor, 2007) and 2009 (Omer, 2011). The bathymetric surveys prior to 2009 cover neither the upstream river reach up to El Deim station nor the left and right wings of the reservoir (Fig. 3). The bathymetry was measured using an acoustic Doppler current profiler (single beam) and echo-sounders for the wet areas, and a differential GPS and survey levels for the temporarily dry areas, along specific transects at intervals of 2-5 km or more in most of the surveys. This low resolution created uncertainty in the generated topographical maps. Moreover, the assessment of the changes in bed elevation was made difficult by the different coordinate systems used and by inaccuracies in the horizontal coordinates. All data were therefore checked, corrected and transformed using the irrigation datum as the reference vertical level.

The local changes in bed level were obtained by comparing the bathymetric data of 20 sections surveyed in 1985, 1992 and 2007. The resulting temporal variations in the storage capacity of Roseires Reservoir are given in Table 1. By subtracting the bed topographies derived from the measured bathymetries, the total storage volume lost in the two periods 1985-1992 and 1992-2007 was quantified as 146 and 238 × 10⁶ m³, respectively. The areas with net deposition or erosion were identified by subtracting the bed topographies in the two time intervals described above: 2007-1992 and 1992-1985.

Available hydrodynamic and sediment data

Most data were provided by the Ministry of Irrigation and Water Resources (MoIWR) and the Dams Implementation Unit of the Ministry of Dams and Electricity, Sudan. Figure 2 shows the location of the measuring stations for water levels and sediment concentration inside Roseires Reservoir. El Deim gauging station, for water level, discharge and sediment concentration, is located near the Sudan-Ethiopia border. It was established in 1962, during the construction of the dam, 110 km upstream along the river and 85.5 km in linear distance (Fig. 2). The station is situated in a deep rock gorge, which is supposed to provide a very stable control. However, in the last three decades El Deim station has deteriorated and is no longer working properly (Ahmed and Ismail, 2008). Water level, discharge and sediment concentration are also available at Famaka, at the reservoir inlet; at Wad Almahi, inside the reservoir; and at Wad Alies, just downstream of the dam.

The concentration of suspended solids was measured during the flood season at El Deim on a daily basis during the last four decades. The data show a high variability in suspended solids concentrations from year to year and substantial differences between the rising limb and the falling limb of the flood curve (Hussein et al., 2005; Ahmed and Ismail, 2008; Billi and el Badri Ali, 2010; Ahmed et al., 2010).
Considering the long-term character of the investigation, in order to represent the historical inputs of suspended solids during the high-flow seasons, we derived the averaged values of suspended solids concentrations from the collected data for three periods: the 1970s-1980s, the 1990s and the 2000s. For the low-flow seasons, due to a lack of historical data, we adopted the averaged sediment concentration of 0.024 kg m⁻³ that we measured during a field campaign in 2011. The mean diameter of the suspended sediment is 18.5 and 22 µm at Wad Almahi and Wad Alies, respectively. Silt is the dominant type of sediment in suspension, representing more than 80 % of the samples. Sand represents about 15 % of the suspended sediment inside the reservoir (at Wad Almahi) and 10 % of the suspended sediment downstream of the dam (at Wad Alies), as shown in Fig. 4. The analysis of the bed material (Omer, 2011) shows that at some locations the sediment contains up to 30 % silt and clay. Averaging results in a D50 of 1200 µm upstream of Famaka and a D50 of 140 µm just upstream of the dam.

Methodology

Deposition time and soil stratification inside the reservoir are assessed based on the historical bathymetric data, numerical modelling and newly acquired soil data. The study is based on the hypothesis that the alternating stratification of sand and silt reflects the alternation of wet and dry seasons, respectively.

The analysis of the sediment deposited in the reservoir requires extensive field campaigns, but environmental conditions and economic issues limit the possibility of performing wide-ranging fieldwork. For this reason, the field campaigns had to be optimised beforehand, using numerical modelling.

With the aim of identifying the most promising sampling areas to investigate soil stratification in the reservoir, we combined the analysis of bathymetric data and the results of a morphodynamic model. Data alone allow for areas characterised by net sedimentation to be identified, but these areas might experience periods of erosion in which parts of the layers are lost. Instead, by recording erosion and deposition during the development of the bed topography, the model allows the best areas to be recognised. Another advantage of using a numerical model lies in the possibility of better analysing the sedimentation process in Roseires Reservoir, especially since the data are scarce and not always reliable, particularly regarding the time evolution. We adopted a physics-based model that allowed us to obtain the vertical and horizontal sediment sorting inside the reservoir. The morphodynamic model was constructed using the Delft3D software. The setup of the model required two steps: (1) the development of a 2-D depth-averaged hydrodynamic model, and (2) the development of a 2-D morphodynamic model considering two types of sediment, silt and sand, according to the two types of sediment transported by the Blue Nile River.
Modelling

6.1 Model description

Lesser et al. (2004) extensively describe the open-source Delft3D code, which is applied in the current study (see also http://oss.deltares.nl/web/delft3d). The hydrodynamic part of the model is based on the 3-D Reynolds-averaged Navier-Stokes (RANS) equations for an incompressible fluid (Boussinesq approximation: Boussinesq, 1903). The closure scheme for turbulence is a k-ε model, in which k is the turbulent kinetic energy and ε is the turbulent dissipation. The equations are formulated in orthogonal curvilinear coordinates. The set of partial differential equations, in combination with the set of initial and boundary conditions, is solved on a finite-difference grid.

We used a 2-D depth-averaged version of the model with an appropriate parameterisation of two relevant 3-D effects of the spiral motion that arises in curved flow (Blanckaert et al., 2003). First, the model corrects the direction of sediment transport through a modification of the direction of the bed shear stress, which would otherwise coincide with the direction of the depth-averaged flow velocity vector. Second, the model includes the transverse redistribution of the main flow velocity due to secondary-flow convection, through a correction of the bed friction term. Taking these 3-D effects into account becomes important not only in curved channels but also in straight channels with bars.

The morphodynamic part of the model simulates the processes of sand (capacity-limited transport) and silt (supply-limited transport) separately. For capacity-limited sediment transport, the evolution of the bed topography is computed from a sediment mass balance equation and a sediment transport formula (Exner approach). A number of capacity-limited transport formulas are available, such as Meyer-Peter and Müller's (1948), Engelund and Hansen's (1967) and van Rijn's (1984). The model accounts for the effects of gravity along longitudinal and transverse bed slopes on the bed load direction (Bagnold, 1966; Ikeda, 1982).

For supply-limited transport (fine sediment travelling in suspension), the evolution of the bed topography is computed from the sediment mass balance and an advection-diffusion formulation describing the temporal and spatial evolution of the suspended solids concentration, coupled to two formulas describing the entrainment and deposition processes. The adopted 2-D (depth-averaged) advection-diffusion equation is

    ∂c/∂t + u ∂c/∂x + v ∂c/∂y = ∂/∂x (ε_{s,x} ∂c/∂x) + ∂/∂y (ε_{s,y} ∂c/∂y) + (E − D)/h,

where c is the (depth-averaged) mass concentration of the fine sediment fraction (kg m⁻³), h is the water depth (m), and u and v are the flow velocity components in the x and y directions, respectively (m s⁻¹). The velocity and eddy diffusivity (ε_{s,x}, ε_{s,y}) components are obtained from the hydrodynamic model.

The following formula (Ariathurai and Arulanandan, 1978; Partheniades, 1964) describes the entrainment of fine sediment from the bed:

    E = M (τ/τ_c − 1) for τ > τ_c,    E = 0 otherwise,

where E is the erosion flux (kg m⁻² s⁻¹), τ is the bed shear stress (N m⁻²) and τ_c is the critical shear stress for erosion (N m⁻²). M is a coefficient quantifying the erosion rate (kg m⁻² s⁻¹). The following formula describes the deposition rate:

    D = w_s C (1 − τ/τ_d) for τ < τ_d,    D = 0 otherwise,

in which D is the deposition rate (kg m⁻² s⁻¹), C is the (depth-averaged) sediment concentration (e.g. Montes et al., 2010) and w_s is the fall velocity of the suspended solids (m s⁻¹). It is assumed that deposition occurs only if the bed shear stress does not exceed the critical shear stress for deposition, τ_d (N m⁻²).
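A small sketch of these two source terms, using for illustration parameter values of the order of those calibrated later in the paper:

    # Sketch: Partheniades-Krone-type erosion and deposition fluxes
    # for the fine fraction (illustrative parameter values).
    import numpy as np

    def erosion_flux(tau, tau_c=1.0, M=2e-6):
        """E = M (tau/tau_c - 1) for tau > tau_c, else 0 [kg m-2 s-1]."""
        return np.where(tau > tau_c, M * (tau / tau_c - 1.0), 0.0)

    def deposition_flux(conc, tau, w_s=5e-6, tau_d=1000.0):
        """D = w_s C (1 - tau/tau_d) for tau < tau_d, else 0 [kg m-2 s-1]."""
        return np.where(tau < tau_d, w_s * conc * (1.0 - tau / tau_d), 0.0)

    tau = np.array([0.5, 1.5, 3.0])    # bed shear stress [N m-2]
    conc = np.full(3, 0.2)             # depth-averaged concentration [kg m-3]
    print(erosion_flux(tau), deposition_flux(conc, tau))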
For the description of the soil processes, the model records the bed level changes in the vertical direction over time, together with the composition of the deposited sediment, according to an adapted version of Hirano's (1971) bed layer model (Blom, 2008). This permits study of the evolution of the vertical stratification of the sediment deposits.

Hydrodynamic model: setup, calibration and validation

The 2-D depth-averaged model was built to cover the reservoir area from the dam to El Deim, 30 km upstream of the end of the reservoir (about 110 km in total). The reservoir shape is rather complex, as shown in Fig. 2. Consequently, the size of the curvilinear computational grid cells is variable, ranging from 25 to 280 m (Fig. 6). The upstream boundary condition was represented by the daily discharge time series measured at El Deim. The downstream boundary condition was represented by the corresponding water levels measured at Wad Almahi and by the dam outflow discharges.

The selection of the simulation time step depends on several parameters, such as the grid size of the model, the water depth, the required accuracy and the stability of the model during the simulation. The Courant number (C_r) is defined as

    C_r = 2Δt √(gH) √(1/Δx² + 1/Δy²),

where Δt is the time step (s), g is the gravitational acceleration, H is the water depth (m), and Δx and Δy are the grid cell sizes (m). In general, C_r should not exceed a value of 10 (Deltares: Delft3D-FLOW User Manual, Simulation of Multi-Dimensional Hydrodynamic Flows and Transport Phenomena, Delft, the Netherlands, 38-43, 2010). For the hydrodynamic model and the selected schematisation of the grid cells, the time step used is 30 s, and the value of C_r varies in space and time. The values of the other numerical parameters adopted in the model coincide with the default values of the Delft3D software.

During the model setup phase, inaccuracies due to the large size of the computational grid cells were compensated for by manual adjustments of the topographic levels, ensuring that the thalweg elevation in the model is close to the measured one.

The hydrodynamic model was calibrated on the 2009 water levels measured at Famaka, a measuring station located inside the reservoir, about 80 km upstream of the dam (Fig. 2). The chosen values of the calibration parameters and the closure coefficients for the k-ε model are given in Table 2. In particular, the calibrated bed roughness corresponds to a Chézy coefficient of 80 m^{1/2} s⁻¹. Figure 7a shows the results of the model calibration.

The hydrodynamic model was validated using the daily discharge time series measured at El Deim (upstream open boundary condition), the daily dam outflow (downstream open boundary condition) and the water levels measured at Wad Almahi (inside the reservoir) in 2010.
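Returning to the time-step constraint mentioned above, the following is a minimal sketch of the Courant check; the formula follows our reading of the Delft3D-FLOW manual, and the values are illustrative.

    # Sketch: Courant number check for the hydrodynamic model
    # (illustrative values; Cr should stay below about 10).
    import math

    def courant(dt, h, dx, dy, g=9.81):
        """Wave Courant number for a 2-D grid cell."""
        return 2.0 * dt * math.sqrt(g * h) * math.sqrt(1.0 / dx**2 + 1.0 / dy**2)

    dt = 30.0               # time step [s]
    h = 10.0                # water depth [m]
    dx = dy = 280.0         # coarse grid cells [m]
    print(f"Cr = {courant(dt, h, dx, dy):.1f}")   # ~3 here; finer cells give more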
Morphodynamic model: setup, calibration and validation

The hydrodynamic model was then used to set up the morphodynamic model, with the aim of simulating sediment deposition and erosion inside the reservoir during the two periods 1985–1992 and 1992–2007. The model was calibrated and validated on the measured bed level changes in these two periods, derived from the bathymetric data. There were no data available on soil stratification. The hydrodynamic boundary conditions were the time series of monthly inflow and outflow discharges and averaged water levels inside the reservoir. The morphodynamic computations were excessively time-consuming due to the large number of computational cells. This large number of cells was inevitable given the vastness of the reservoir and the complexity of the processes to be simulated, such as 2-D hydrodynamics, bed and suspended load transport, soil erosion and sediment deposition, together with the storing of the bed composition. To limit the duration of each computational run to a couple of weeks, the morphodynamic model was sped up using the morphological factor introduced by Roelvink (2006). We adopted a morphological factor equal to 30 to represent the morphological changes occurring in 1 month by simulating one single day (hydrodynamically). This is obtained by multiplying the corresponding morphological changes by a factor of 30. The approach creates a water balance problem in the reservoir, since the hydrodynamic part could not represent the behaviour of 1 month. To respect the balance, water was added to or subtracted from the reservoir during the computations, which could be done only in a distributed way, applied over the wet surface of the reservoir, similar to rain and evaporation. The relation between water surface area and elevation was derived from the bathymetric data and then used for this purpose (Fig. 5). This method allowed the water balance and the water levels in the reservoir to be respected, but not the flow velocity distribution inside the reservoir, which suffered from inaccuracies. These inaccuracies were mainly due to the gradual increase/decrease in water discharge along the reservoir and particularly affected the filling-in and flushing times.

Dredging was implemented as a yearly operation. The upstream input of suspended sediment concentrations during

The model was calibrated on the period 1985–1992. This period was selected due to the availability of topographical surveys. This means that the model was run for 7 years with the required input data, starting from the bed topography measured in 1985. The final result of the model was then compared with the measured topography of 1992. The model parameters were tuned until the simulated topography was comparable to the measured one in a satisfactory way.

Given the large variety of sediments settling in the reservoir and the necessity to consider only two components (sand and silt), the transport formula for sand with an averaged diameter of 700 µm, the fall velocity, the critical shear stress for erosion, and the erosion speed of the fine suspended solids were all used as calibration parameters (Table 2).
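The morphological acceleration can be illustrated with a toy 1-D Exner update, in which each bed change computed over a hydrodynamic time step is multiplied by the morphological factor, so that one simulated day of hydrodynamics represents one month of bed evolution. This is a conceptual sketch with a hypothetical, frozen transport field, not the coupled scheme of the actual model.

```python
import numpy as np

MORFAC = 30.0        # morphological factor adopted in this study
porosity = 0.4       # assumed bed porosity
dx, dt = 100.0, 30.0                 # cell size (m) and time step (s)
z_b = np.zeros(50)                   # bed level in 50 cells (m)
q_s = np.linspace(1e-4, 5e-5, 51)    # hypothetical transport at cell faces (m^2 s^-1)

n_steps = int(24 * 3600 / dt)        # one hydrodynamic day
for _ in range(n_steps):
    dqdx = np.diff(q_s) / dx
    # Exner update dz_b/dt = -dq_s/dx / (1 - p), amplified by MORFAC
    z_b += -MORFAC * dt * dqdx / (1.0 - porosity)

# Converging transport -> deposition; one day now stands for one month.
print(z_b[:5])
```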
The transport formula that gave the best results for the sand component was van Rijn's (1984). The optimised fall velocity of the fine suspended solids resulted in 0.005 mm s^−1 and the critical shear stress for the erosion of deposited silt in 1 N m^−2. For bed shear stresses above this value, the bed of the reservoir eroded with an erosion rate of 2 mg m^−2 s^−1. In natural systems, the erosion rate is a function of bed density (consolidation) and bed shear stress. However, consolidation of sediment deposits is not taken into account in the model. This means that all densities are prescribed initially and kept constant in time. We applied a dry-bed density of 1200 kg m^−3 for the deposited silt (porosity = 0.55) and of 2000 kg m^−3 for sand (porosity = 0.25). In the model, the value of the dry density does not have any influence on the erosion rate, but it has some influence on the bed-level changes (the smaller the imposed dry density, the larger the change in volume due to the presence of the pores between the sediment particles).

The critical shear stress for deposition resulted in 1000 N m^−2. Below this value, any particle was free to deposit according to its fall velocity and depending on the computed bed shear stress.

Figure 8 shows the measured and simulated cross sections 18 and 19B (10.8 and 15.4 km upstream of the dam, respectively). The model does not provide accurate results at this level of detail. Computed section 18 shows that the model fails to simulate the main channel shift (compare measured and simulated 1992 topography). The same applies to section 19B. This might be due to the relatively large grid size and the distance between the measured cross sections (2–5 km), which does not allow for proper reproduction of curved flow effects inside the main channel.

Figure 9a shows the measured difference in bed topography and Fig. 9b the simulated difference for 1992–1985. In the figures, the ellipses show the areas for which the bed topography of 1985 and 1992, being unknown, was made equal to the bed topography of 2009. These areas should therefore not be considered, although they might have influenced the bed level changes in other areas. By comparing the simulated with the measured differences in bed elevation, it can be observed that the upstream part (1), subjected to deposition, has the same deposition pattern, but a smaller deposited volume, in the simulation. Some eroded areas (2) can also be recognised in the simulation, especially the area closer to the dam and the narrow area more upstream.

The total computed cumulative deposited volume of sediment in the period 1985–1992 is 188 × 10^6 m^3, which is 29 % larger than the measured volume (146 × 10^6 m^3, from Table 1). Based on this, we considered the results of the calibration to be satisfactory.

The model was then validated on the developments that occurred in the next 15 years, from the end of 1992 to the end of 2007. The runs started with the bed topography of 1992. The results representing the bed topography after 15 years were compared to the measured bed levels in 2007. The simulated morphological changes inside Roseires Reservoir show significantly higher deposition rates than the measured ones. The computed total cumulative sediment deposit in this period is 567 × 10^6 m^3, which is more than double the measured 238 × 10^6 m^3 (Table 1). To analyse the implications of this overestimation at the cross-sectional scale, we compare the measured and simulated sections 18 and 19B in Fig. 10.
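Before turning to the cross sections, the volume comparison above can be verified with a few lines of arithmetic (the volumes are those quoted in the text):

```python
measured_8592, simulated_8592 = 146e6, 188e6   # m^3, 1985-1992
measured_9207, simulated_9207 = 238e6, 567e6   # m^3, 1992-2007

# (188 - 146) / 146 ~ 29 % overestimate in the calibration period
print(f"calibration overestimate: {(simulated_8592 - measured_8592) / measured_8592:.0%}")
# 567 / 238 ~ 2.4x, i.e. "more than double" in the validation period
print(f"validation ratio: {simulated_9207 / measured_9207:.2f}x")
```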
The simulated section 18 in 2007 shows a deposition of 2–2.5 m with respect to section 18 in 1992, which is larger than the measured one. In particular, the model does not correctly reproduce the main channel shift inside the reservoir at section 19B.

We believe that the limited availability of good field data reflects on the model accuracy and output reliability. In this study, data were not available in sufficient detail and were limited in terms of quality and extent. For instance, the cross sections measured during the bathymetric surveys of 2009 are 2 to 5 km apart and the surveys do not cover the entire length of the reservoir. This creates inaccuracy in preparing the reservoir bed topography, considering the length of Roseires Reservoir (80 km) and its meandering shape. The relatively large grid size adopted in the model adds to this. The combination of poor data and model accuracy resulted in a smoothing-down of the bed topography differences, which particularly affects the simulated channel inside the reservoir. The simulated flow velocity is more uniform than in reality and remarkably lower in the channel. Lower velocity in the channel results in excessive sedimentation of the sand component in the upstream part of the reservoir, a lack of sand in the deposits further downstream, higher deposition rates of the silt component, and less efficient sediment flushing in the model than in reality. Since the channel becomes more important with time as reservoir sedimentation progresses, this effect is stronger for the validation period than for the calibration period. This might explain the increased overestimation of sediment deposition for the validation period.

The discrepancies between model and measurements could also be caused by an overestimation of the sediment inputs. In particular, the suspended solids concentrations for the 2000s seem to be overestimated by the adopted averaged value.

(3) absence of bar movement, destroying soil stratification; and (4) recognisable soil stratification. Two areas were selected for the sampling, as shown in Fig. 11, based on the absence of erosion and bar migration in the model outputs.

Area 1. Figure 12 shows the vertical profiles of deposition at Area 1. In Fig. 12a, the solid lines represent the final bed levels of 1985, 1988 and 1991. In Fig. 12b, the solid lines represent the final bed levels of 1992, 1995, 1998, 2001, 2004 and 2007. In areas characterised by the absence of bed erosion, the lowest solid line represents the first year, whereas the top line represents the final year of the computation. In Area 1, according to the model, deposition always occurs at the right side of the reservoir, from 0 to 250 m from the right bank. Erosion occurs due to channel shift in the middle of the reservoir. The last 200 m on the left side of the reservoir, from 1500 to 1700 m, is again characterised by deposition only. The dominant deposited sediment in 1986 (a dry year) is sand. The sand content is higher in the period 1985–1992 (Fig. 12a) than in the following 15 years (Fig. 12b). The general trend in the years 1989, 1990, 1991 and 1992 is deposition of coarser sediment in the deepest area (main channel). Deposition and stratification occur at the sides of the reservoir. These areas become dry at the end of the dry season and are always characterised by deposition, which makes them promising coring areas.

Area 2.
According to the model, the left side of this section is subject to deposition only, for approximately 3 km. In this area, the reservoir is relatively wide. Most of the sediment deposited in this section is silt, with only a minor percentage of sand. Stratification is less visible or absent, and for this reason Area 2 seems less suitable than Area 1 for coring.

Model verification based on soil stratification data

A subsequent field campaign was carried out in summer 2012 in the areas identified by the model. We visited those areas during the rainy season, when the reservoir gates are open and the water level is at its lowest. This allowed us to reach zones that are normally submerged. The central channel, however, was not reachable, and sampling was carried out only in the areas that had become dry on the right or left bank, as shown in Table 3. In Area 1, about 45 km upstream of the dam, the reservoir width is 1.5 km, whereas in Area 2, 25 km upstream of the dam, the reservoir width is 4 km. The larger width allowed a wider transect to be covered, but, due to a number of logistic constraints during the field campaign, we were able to excavate only four trenches. Three trenches were excavated in Area 2 and a fourth trench was excavated in Area 1. The characteristics and locations of the four trenches are summarised in Table 3 and shown in Fig. 13.

Sampling in these areas allowed for study of the granulometric characteristics of the deposited sediment and validation of the model results in terms of soil stratification. The analysis of the sediment showed that, at least in the selected areas, the reservoir soil is indeed stratified. However, the layers are distinguishable not as alternations of sand and silt, but as alternations of coarse and fine sand. These alternations are visible in the vertical profile of the D_90 of the sediment (Fig. 14). This important difference between model and field data can be attributed to a systematic underestimation of the flow velocities inside the reservoir, particularly inside the channel, where most sand is transported downstream. This is most probably due to the difficulty of guaranteeing the water balance (due to the adoption of a morphological factor) and of reproducing the channel excavated within the reservoir soil (due to low model and data resolution). Both the morphological factor and the poor resolution were inevitable, though, given the long simulation times (each computation had a duration of weeks) and the data available for the study. Underestimation of the flow velocity results in higher sedimentation rates and in sand being deposited in the upstream part of the reservoir (delta formation: Fan and Morris, 1992; Kostic and Parker, 2003a, b).

No signs of soil erosion could be detected from the analysis of the trenches. This means that the model was successful in identifying the areas in which no soil erosion has occurred.
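The layering was identified from vertical profiles of D_90, i.e. the grain size for which 90 % of a sample is finer. A minimal sketch of how D_90 can be read from a cumulative grain-size curve is given below; the sieve data are hypothetical, not measurements from the trenches.

```python
import numpy as np

def d90(diameters_um, cum_percent_finer):
    """Grain size for which 90 % of the sample is finer, by log-linear
    interpolation of the cumulative grain-size curve."""
    logd = np.log10(diameters_um)
    return 10 ** np.interp(90.0, cum_percent_finer, logd)

# Hypothetical sieve data for one layer of a trench:
d_um  = np.array([63, 125, 250, 500, 1000, 2000])   # sieve sizes (um)
finer = np.array([5, 20, 55, 80, 95, 100])          # cumulative % finer
print(f"D90 = {d90(d_um, finer):.0f} um")           # ~790 um (coarse sand)
```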
Sediments are expected to become finer farther from the central channel in the transverse direction, where the flow velocity (during high water) is lower, and in the downstream direction, due to selective deposition in the reservoir, the coarser material being deposited in the upper parts. However, clear sediment sorting trends cannot be observed in the field data, either in the transverse direction or in the longitudinal direction. We believe that this is due to the limited number of excavated trenches. Sedimentation strongly depends on local hydrodynamic conditions, and for this reason a general sorting trend can only be detected with a large number of spatially distributed coring locations.

Conclusions

Two promising coring areas inside Roseires Reservoir were selected by combining bathymetric data analysis with the results of a 2-D morphodynamic model including horizontal and vertical sorting. The model allowed for study of the contribution of two sediment types, sand and silt, both transported by the Blue Nile into the reservoir. The model setup was based on the hypothesis that sand is deposited during high flows, whereas fine material, mainly originating from upper catchment erosion, is deposited during low-flow periods. This would create soil stratification inside the reservoir, allowing for the recognition of specific wet or dry years. The model, which records bed level changes and soil composition in the vertical direction, shows vertical stratification in the reservoir soil at several cross sections. Two of these cross sections were selected as the best areas for analysing the sedimentation process in the reservoir.

The results of the subsequent field campaign carried out in the summer of 2012 show clear soil stratification at the four trenches excavated in the selected areas, but the layers are mainly distinguishable by alternations of coarse and fine sand rather than by alternations of sand and silt. Coarse sand was mainly deposited there during distinguishable wet years, which allowed the progression of sediment deposition in the reservoir to be recognised from the collected soil data.

Sand appears to be transported and deposited in the reservoir much further downstream than the model predicts. This can be explained by a systematic underestimation of the flow velocity in the reservoir during high flows. The cause seems to lie in the adoption of a morphological factor to speed up the computations, leading to inaccuracies in the flow velocity estimation, and in the poor resolution of data and model, resulting in a more uniform bed topography and flow velocity and hence in a lower velocity in the central channel, where sand is transported downstream. The discrepancies between model and measurements could also be due to an overestimation of the sediment inputs. In particular, the suspended solids concentrations for the 2000s seem to be overestimated by the adopted averaged value. For this reason, suspended solids concentrations should be carefully measured in the future. In particular, more measurements are required during the low-water season, at least for modelling purposes.

To implement the model in a more reliable way in the future, it is suggested to reduce its cell size, to reduce or eliminate the morphological factor, and to perform more accurate bathymetric surveys, preferably with a side-scan sonar. If a reduction of the computational cell size is not feasible, due to unacceptably long computational times, the suggestion is to nest a model with smaller cells in the central area occupied by the channel.
Notwithstanding these limitations, the model allowed two appropriate coring areas to be recognised, where the soil had never been eroded and was indeed stratified. Moreover, the model allowed for analysis of the sedimentation process in the reservoir with a level of detail that would not have been possible by solely analysing the available data, allowing for data correction at several locations where the horizontal coordinates were uncertain.

Figure 1. Blue Nile Basin and Roseires Dam. The elevation map was derived from SRTM (90 m) and is in m a.s.l. (WGS84 datum).

Figure 2. Measurement stations and the areas (circled) that are not covered by the surveys of 1985 and 1992. The thick black line represents the contour of the reservoir at an elevation of 481 m a.s.l. (irrigation datum).

Figure 5. Water surface area in Roseires Reservoir as a function of elevation (irrigation datum). The dotted line represents the trend line used in the study.

Figure 6. Upstream and downstream boundaries, computational grid and bed elevations in 2009, in m a.s.l. (irrigation datum).

Figure 8. Cross sections 18 and 19B seen from downstream. Measured 1985 and 1992 bed elevations and simulated 1992 bed elevation. Cross sections 18 and 19B are located 10.8 and 15.4 km upstream of the dam, respectively.

Figure 9. Comparison between the difference in measured bed topography 1992–1985 and the model cumulative erosion and deposition during the seven-year model run.

Figure 14. Soil stratification at the four trenches.

Table 1. Storage capacity of Roseires Reservoir in × 10^6 m^3 in different years as a function of level, derived from measured bed topographies. ID: irrigation datum.

Table 2. The values of physical parameters derived from the calibration process.

Table 3. Summary of the characteristics of the four trenches. a: These coordinates are in UTM, WGS84, Zone 36 North. b: When the reservoir is empty.
2015-09-23T00:31:53.000Z
2013-06-09T00:00:00.000
{ "year": 2015, "sha1": "b1612e87fb2fef812b91098674789a7ac09d491f", "oa_license": "CCBY", "oa_url": "https://esurf.copernicus.org/articles/3/223/2015/esurf-3-223-2015.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "b1612e87fb2fef812b91098674789a7ac09d491f", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Geology" ], "extfieldsofstudy": [ "Geography" ] }
268331778
pes2o/s2orc
v3-fos-license
VERTICAL CONCENTRATION DISTRIBUTIONS OF ATMOSPHERIC PARTICULATES IN TYPICAL SEASONS OF WINTER AND SUMMER DURING WORKING AND NON-WORKING DAYS: A CASE STUDY OF HIGH-RISE BUILDINGS

It is important to understand the vertical distribution characteristics of outdoor particulate concentrations in the typical seasons of winter and summer, as people's living spaces move higher and higher above the ground. Different floors (1st, 7th, 11th, 17th, and 27th) of a high-rise building in Xi'an were tested and analysed in this paper at 8:00, 12:00, 15:00, 18:00, and 22:00. The results showed that the concentrations on non-working days were much lower than those on working days at different times and on different floors, and that particulate concentrations were relatively low in summer. The particulate concentrations peaked at 12:00 in summer, with average concentrations of PM10, PM2.5, and PM1.0 of 37.3, 31.6, and 29.4 μg/m^3, and at 15:00 in winter, with average concentrations of PM10, PM2.5, and PM1.0 of 82.4, 64.8, and 57.7 μg/m^3. The atmospheric environment in Xi'an is dominated by small particulates. The particle sizes on the low floors mainly range from 1.0 to 2.5 μm, while on the high floors they are mostly below 1.0 μm. With increasing floor height and time, PM1.0/PM2.5 and PM2.5/PM10 show a trend of first decreasing and then increasing on working days, while on non-working days they show a trend of first increasing, then decreasing, and then increasing again. In addition, outdoor meteorological parameters also have a certain impact on the particulate concentration distribution. The results provide reference values for controlling particulate concentrations in high-rise buildings.

Introduction

In recent years, the number of high-rise and super high-rise buildings has gradually increased, making people's living space more extensive. People can work and live at heights of 40 m or more above the ground [1-3]. However, high concentrations of particulates in the environment not only cause inconvenience to daily travel, but can also cause various diseases in the human body [4, 5], or even death [6]. Many people spend 80-90% of their time indoors [7], so a good indoor living environment is particularly important, especially for those living in high-rise environments.

At present, considerable research has been conducted on the concentration of particulates in the atmospheric environment [8-13], mainly focused on the distribution characteristics of particulates [8, 9], correlations with related pollutants [10, 11], comparisons before and after the heating period [12], and the regional distribution of particulates [13]. Although certain research results have been achieved, research on the vertical distribution of particulates in typical seasons is still insufficient. The main reasons are that the typical seasons of summer and winter are the main emission periods for atmospheric particulates, and the distribution of particulates at different heights varies with particle size. In addition, high-concentration haze occurs more frequently in Chinese cities in winter [14]. Studying the vertical distribution of particulates in typical seasons is therefore all the more meaningful.
People's time is divided roughly equally between work and rest, and outside working hours people spend most of their time resting. Therefore, an in-depth understanding of the vertical concentration distribution of atmospheric particulates on working days and non-working days in typical seasons can provide reference data for people living at different heights. There is relatively little research on the variation of particulate concentrations with floor height on working and non-working days in actual work and life, so the vertical distribution of particulates in the two periods, and its relevance to people's living environment, is not fully reflected [15]. The main reasons are that there is currently a lack of professional monitoring equipment in China, and most research is still conducted near the ground [16]. In addition, sampling analysis is influenced by weather factors, the distribution of pollution sources, and human factors. Therefore, there is relatively little research on the vertical distribution of particulates [17-19]. In fact, people pay little attention to the concentration distribution of outdoor atmospheric particulates at different building floors, and more attention to the floor, orientation and lighting [20]. Therefore, there is a lack of understanding of air pollution at different heights of high-rise buildings. With the increasing demand for good indoor environments, the existing research cannot effectively give the vertical concentration distribution of atmospheric particulates in typical seasons, especially in winter and summer on working days and non-working days. Considering the issues mentioned above, there are still many shortcomings in recent research on the vertical distribution of atmospheric particulates at different heights of high-rise buildings during the winter and summer seasons on working and non-working days [21].

To address these problems, a high-rise building in Xi'an was tested and analysed in this paper on typical winter and summer working days and non-working days, and the outdoor atmospheric particulate concentration distribution at different heights was measured. The results provide reference values for the control of atmospheric particulates at different heights of high-rise buildings.

Methods

A high-rise building in Xi'an was selected for testing, with a storey height of 3.3 m and a total of 29 floors. Testing followed the relevant standard [22], and the sampled floors were the 1st, 7th, 11th, 17th, and 27th, at heights of 1.5 m, 23.1 m, 36.3 m, 56.1 m, and 89.1 m above the ground, respectively. The testing times were 8:00, 12:00, 15:00, 18:00, and 22:00. Each test point was measured for 10 minutes, and the data were taken as the 10-minute average. The testing periods were from July 22 to 24, 2021, and from December 12 to 15, 2021.
A GRIMM 1.109 portable aerosol particle size spectrometer was used to measure atmospheric particulates, with a mass concentration measurement range of 0.1 to 100 000 μg/m^3. The counting range was 0 to 2 000 000 P/L, with particle sizes between 0.25 and 32 μm divided into 31 channels, and a repeatability of 5%. The temperature and humidity during the experiments were measured and recorded with a TSI 7545 air quality instrument. The temperature range was 0 to 60 °C, with an error of ± 0.6 °C and a resolution of 0.1 °C. The relative humidity range was 5 to 95% RH, with an accuracy of ± 3.0% RH and a resolution of 0.1% RH. An HD37AB1347 indoor air quality monitor was used to measure the wind speed during the experiments. The wind speed range was 0 to 50 m/s, the resolution was 0.01 m/s, and the accuracy was ± 3.0% of the reading. The reference standard provides concentration limits for each pollutant [23], as shown in Tab. 1.

The variation of particulate concentrations on different floors during working days

The variation of particulate mass concentrations on different floors during typical working days in winter and summer is shown in Tab. 2. As shown in Tab. 2, the concentration of PM10 showed a trend of first increasing and then decreasing with increasing building height, while the concentrations of PM2.5 and PM1.0 showed a trend of first decreasing and then increasing. The main reason is that the temperature is relatively low in the morning and evening, and a temperature inversion layer forms [24], which is not conducive to the diffusion of particulates. As a result, the particulate concentration is lower at height and higher near the ground. The concentration range of PM10 at different heights during summer working days is 20.0 to 41.1 μg/m^3, that of PM2.5 is 16.1 to 32.8 μg/m^3, and that of PM1.0 is 15.0 to 31.1 μg/m^3. During winter working days, the concentration range of PM10 at different heights is 82.0 to 131.5 μg/m^3, that of PM2.5 is 67.7 to 107.4 μg/m^3, and that of PM1.0 is 62.7 to 97.7 μg/m^3. Overall, particulate concentrations are generally low during summer working days; the reason is that the combustion of fossil fuels during the winter heating season intensifies particulate emissions, resulting in higher concentrations [25]. In addition, the literature shows that China's heating system relies heavily on fossil fuels, and the combustion of these fuels causes serious air pollution and a greenhouse effect [26]. This conclusion corroborates the data in this paper.

The variation of particulate concentrations on different floors during non-working days

The variation of particulate mass concentrations on different floors during typical non-working days in winter and summer is shown in Fig. 1.
Figure 1 showed that on non-working days, with increasing floor height, the concentration of PM10 showed different trends at different times in summer: most curves first increased and then decreased, but some first decreased and then increased. The concentrations of PM2.5 and PM1.0 both showed a trend of first increasing, then decreasing, and then increasing again. At different times on non-working days in winter, the concentration of PM10 showed a trend of first decreasing, then increasing, and then decreasing with increasing building height, whereas the concentrations of PM2.5 and PM1.0 showed a gradually decreasing trend. The particulate concentration on the lower floors is relatively high, while that on the higher floors is relatively low. The concentration range of PM10 at different heights on non-working days in summer is 9.5 to 39.5 μg/m^3, that of PM2.5 is 8.0 to 35.2 μg/m^3, and that of PM1.0 is 6.9 to 33.3 μg/m^3. On non-working days in winter, the concentration range of PM10 at different heights is 25.7 to 48.1 μg/m^3, that of PM2.5 is 18.0 to 30.4 μg/m^3, and that of PM1.0 is 14.5 to 26.5 μg/m^3. It can be seen that particulate concentrations are generally low on non-working days in summer and, overall, the concentrations on non-working days are much lower than those on working days. This is because on working days there is a high concentration of people and transportation is used relatively frequently, while on non-working days there is relatively less travel and the frequency of transportation use is also lower [27]. The literature indicates that road traffic is one of the main causes of pollutant generation [28]. Therefore, the concentrations on non-working days are relatively low.

The variation of particulate concentrations over time during typical seasons

The variation of particulate concentrations over time during the typical winter and summer seasons is shown in Tab. 3.
Table 3 showed that, as time goes on, the concentration of PM10 in summer showed a trend of first increasing, then decreasing, and then increasing again, as did the concentrations of PM2.5 and PM1.0. In winter, the concentrations of PM10, PM2.5, and PM1.0 all showed a trend of first increasing and then gradually decreasing. The particulates peaked at 12:00 in summer, with average concentrations of PM10, PM2.5, and PM1.0 of 37.3, 31.6, and 29.4 μg/m^3, and at 15:00 in winter, with average concentrations of PM10, PM2.5, and PM1.0 of 82.4, 64.8, and 57.7 μg/m^3. This is related to human activities, lifestyle habits, weather and other conditions at the testing location [29]. In the morning there is often a temperature inversion under the influence of cold high pressure, while the outdoor temperature is not yet high, which is not conducive to the diffusion of fine particulates. As time goes on, there is sufficient sunlight, solar radiation increases, and the ground temperature rises rapidly. As a result, the inversion gradually disappears, and particulates gradually diffuse to higher altitudes [30]. In the afternoon, particulates are carried by vehicles, and the resuspension effect intensifies, with some particulates gradually spreading into the higher air. Night-time cooling gradually strengthens the temperature inversion, and the amount of particulates near the ground gradually increases. In winter, the temperature is relatively low in the morning, and particulates do not diffuse easily at night, resulting in relatively high concentrations in the morning. As people travel during the day, the particulate concentration gradually increases, but as the atmospheric temperature rises, the movement of particulates accelerates. In addition, the movement of people and transportation after working hours also intensifies the movement of particulates [21]; as a result, the concentration shows a decreasing trend. At night, under the effect of the inversion layer, the particulate concentration increases again. Therefore, the vertical distribution of particulate concentrations at different times is mainly affected by the temperature inversion phenomenon. This conclusion is consistent with related research results [20, 31], which supports the findings of this paper.

The variation of particulate counting concentrations with different floors

The variation of particulate counting concentrations on different floors during the typical winter and summer seasons on working and non-working days is shown in Fig. 2.

Fig. 2. Trend of average particulate counting concentration variation at different heights

From Fig. 2,
it can be seen that, as floor height increases, the fraction of particles smaller than 1.0 μm first decreases and then gradually increases. The difference in the fraction of particles smaller than 1.0 μm is 0.036% on summer working days and 0.057% on summer non-working days, and 0.014% on winter working days and 0.032% on winter non-working days. The fractions of particles from 1.0 to 2.5 μm and above 2.5 μm first increase and then gradually decrease. For particles from 1.0 to 2.5 μm, the difference is 0.019% on summer working days and 0.043% on summer non-working days, and 0.010% on winter working days and 0.022% on winter non-working days. For particles larger than 2.5 μm, the difference is 0.017% on summer working days and 0.013% on summer non-working days, and 0.004% on winter working days and 0.012% on winter non-working days. The main reason is that large particulates are relatively heavy and settle under their own gravity; therefore, there are relatively few large particulates in the atmosphere, especially at high floors, while small particulates are lighter and remain suspended in the air [32]. However, small particulates are more susceptible to the external environment, such as temperature, wind speed and humidity, and show certain fluctuations. Overall, the particle sizes on the low floors mainly range from 1.0 to 2.5 μm, and those on the high floors are below 1.0 μm. The concentration of particulates in summer is lower than that in winter. This is because the temperature in winter is relatively low, which is not conducive to the movement of particulate matter; as a result, the particulates stay in the air at relatively high concentrations. The temperature in summer is relatively high, which accelerates the movement of particulate matter, resulting in a relatively low concentration in summer. The results of this paper are consistent with the trends reported in the literature [33].

Figure 3 shows the changes in the counting concentration of particulates in different particle size ranges at different times.

Fig. 3. Counting concentration changes of particulates at different times

From Fig. 3 it can be seen that, with time, the fraction of particles smaller than 1.0 μm first decreases, then increases, and then decreases again. The average fractions of such particulates during summer working days, summer non-working days, winter working days, and winter non-working days were 99.9%, 99.8%, 99.9%, and 99.7%, respectively. Overall, the fractions during the working-day period were higher than those during the non-working-day period. The fraction of particles larger than 1.0 μm first increased and then decreased, mainly due to the influence of gravity on large particulates and the influence of temperature, which make them settle quickly to the ground [34]. This also indicates that the atmospheric particulates in Xi'an are mainly small particles [35], which is consistent with the results of the related literature [11, 12, 36].
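The size fractions discussed above follow directly from the per-channel number concentrations delivered by an optical particle sizer. The sketch below shows the bookkeeping with hypothetical channel bounds and counts; it does not reproduce the instrument's actual 31-channel table or the measured data.

```python
import numpy as np

# Hypothetical channel lower bounds (um) and number concentrations (P/L)
channel_lo = np.array([0.25, 0.35, 0.5, 0.7, 1.0, 1.6, 2.5, 5.0, 10.0])
counts     = np.array([9.2e5, 4.1e5, 1.8e5, 7.5e4,
                       2.3e4, 6.1e3, 1.2e3, 2.0e2, 1.5e1])

total = counts.sum()
frac_sub1  = counts[channel_lo < 1.0].sum() / total
frac_1_2p5 = counts[(channel_lo >= 1.0) & (channel_lo < 2.5)].sum() / total

# Small particles dominate by number, as in the Xi'an measurements.
print(f"<1.0 um: {frac_sub1:.1%}, 1.0-2.5 um: {frac_1_2p5:.1%}")
```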
The variation of the relative content of particulates with different floors

The relative content of particulates on working and non-working days in the typical winter and summer seasons varies with floor height, as shown in Fig. 4. Figure 4 showed that, as height increases, both PM1.0/PM2.5 and PM2.5/PM10 first decrease and then increase during the working-day period. On non-working days, both PM1.0/PM2.5 and PM2.5/PM10 first increase, then decrease, and then increase again. With increasing building height, PM1.0/PM2.5 on working days ranges from 90.3% to 94.7%, with an average of 92.4%; on non-working days it ranges from 81.0% to 91.5%, with an average of 85.9%. The average PM1.0/PM2.5 on working days is thus 6.6% higher than that on non-working days. Likewise, with increasing building height, PM2.5/PM10 on working days ranges from 79.1% to 87.8%, with an average of 84.0%; on non-working days it ranges from 61.0% to 84.8%, with an average of 73.8%. The average PM2.5/PM10 on working days is 10.2% higher than that on non-working days, and the ratios in summer are lower than those in winter. It can be seen that the atmosphere in Xi'an is mainly composed of small particles, which are relatively light and can remain suspended in the air; as a result, small particles are abundant in the atmosphere. This is consistent with the conclusions given in the literature [11, 12, 36]. The distribution of the relative concentration of particulates over time is shown in Fig. 5. Figure 5 showed that, with time, PM1.0/PM2.5 and PM2.5/PM10 also first decrease and then increase during the working-day period, while on non-working days both first increase, then decrease, and then increase again. Over time, PM1.0/PM2.5 on working days ranges from 90.7% to 94.1%, with an average of 93.7%; on non-working days it ranges from 79.1% to 92.8%, with an average of 83.3%. The average PM1.0/PM2.5 on working days is 10.4% higher than that on non-working days. Likewise, PM2.5/PM10 on working days ranges from 76.9% to 87.5%, with an average of 85.9%; on non-working days it ranges from 61.8% to 86.1%, with an average of 71.9%. The average PM2.5/PM10 on working days is 14.0% higher than that on non-working days. It can be seen that time changes have a greater impact on the averages of PM1.0/PM2.5 and PM2.5/PM10 than floor height changes, the working-day excess being 3.8% and 3.8% larger, respectively. This is because the large number of small particles in the environment are more prone to diffusion and are also affected by temperature, resulting in significant changes in their concentration [37]. The relative content of particulates at different times during summer working and non-working days is relatively high. This is because the summer temperature is relatively high, there is sufficient sunlight, and the near-surface inversion layer is disrupted, which reduces atmospheric stability [30]. This conclusion is consistent with the results given in the literature [38]. The relative content of different particulates also fluctuates over time.
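As a consistency check, these ratios can be recomputed from the average concentrations reported in this paper; for example, the summer 12:00 averages give values that fall inside the reported working-day ranges:

```python
# Summer 12:00 averages reported in this paper (ug/m^3)
pm10, pm25, pm1 = 37.3, 31.6, 29.4

print(f"PM1.0/PM2.5 = {pm1 / pm25:.1%}")   # ~93.0 %, within 90.7-94.1 %
print(f"PM2.5/PM10  = {pm25 / pm10:.1%}")  # ~84.7 %, within 76.9-87.5 %
```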
Conclusion

The concentration distribution of particulates at different vertical heights of a high-rise building in Xi'an in the typical winter and summer seasons was tested and analysed in this paper. The conclusions are as follows.

The concentrations on non-working days were much lower than those on working days at different times and on different floors, and particulate concentrations were relatively low in summer. The particulates peaked at 12:00 in summer, with average concentrations of PM10, PM2.5, and PM1.0 of 37.3, 31.6, and 29.4 μg/m^3, and at 15:00 in winter, with average concentrations of PM10, PM2.5, and PM1.0 of 82.4, 64.8, and 57.7 μg/m^3. As floor height increases, the fraction of particles smaller than 1.0 μm first decreases and then gradually increases, while the fractions of particles from 1.0 to 2.5 μm and above 2.5 μm first increase and then gradually decrease. The atmospheric environment in Xi'an is dominated by small particulates. The particle sizes on the low floors mainly range from 1.0 to 2.5 μm, and those on the high floors are below 1.0 μm. With increasing floor height and time, PM1.0/PM2.5 and PM2.5/PM10 show a trend of first decreasing and then increasing on working days, while on non-working days they show a trend of first increasing, then decreasing, and then increasing again. Time changes have a greater impact on the averages of PM1.0/PM2.5 and PM2.5/PM10 than floor height changes, the working-day excess being 3.8% and 3.8% larger, respectively.

In addition, outdoor meteorological parameters also have a certain impact on the particulate concentration distribution. The results provide reference values for controlling particulate concentrations in high-rise buildings.

Fig. 1. Changes in particulate concentrations at different vertical heights.

(a) Particle size less than 1.0 μm; (b) particle size larger than 1.0 μm.

Fig. 5. Changes in PM1.0/PM2.5 and PM2.5/PM10 at different times.
2024-03-12T16:26:40.273Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "64ce88ec1249a3b1d8b65bea75fed4efc22e0459", "oa_license": null, "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0354-98362400051Z", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "c8f66281de3765efb7620a4a9b04fed3b1228468", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
256017351
pes2o/s2orc
v3-fos-license
Validity of tools used for surveying physicians about their interactions with pharmaceutical company: a systematic review Background There is evidence that physicians’ prescription behavior is negatively affected by the extent of their interactions with pharmaceutical companies. In order to develop and implement policies and interventions for better management of interactions, we need to understand physicians’ perspectives on this issue. Surveys addressing physicians’ interactions with pharmaceutical companies need to use validated tools to ensure the validity of their findings. Objective To assess the validity of tools used in surveys of physicians about the extent and nature of their interactions with pharmaceutical companies, and about their knowledge, beliefs and attitudes towards such interactions; and to identify those tools that have been formally validated. Methods We developed a search strategy with the assistance of a medical librarian. We electronically searched MEDLINE and EMBASE databases in September 2015. Teams of two reviewers conducted data selection and data abstraction. They identified eligible studies in one table and then abstracted the relevant data from the studies with validated tools in another table. Tables were piloted and standardized. Results We identified one validated questionnaire out of the 11 assessing the nature and extent of the interaction, and three validated questionnaires out of the 47 assessing knowledge, beliefs and attitudes of physicians toward the interaction. None of these validated questionnaires were used in more than one survey. Conclusion The available supporting evidence of the issue of physicians’ interaction with pharmaceutical company is of low quality. There is a need for research to develop and validate tools to survey physicians about their interactions with pharmaceutical companies. Background Social, economic and public-health sectors are interested in the elements of poor prescription practices [1,2]. There is evidence that physicians' prescription behavior is negatively affected by the extent of their interactions with pharmaceutical companies [3,4]. A large number of qualitative and quantitative studies aimed to understand physicians' knowledge, beliefs and attitudes towards their interaction with pharmaceutical company representatives. In at least two studies, physicians have denied being influenced by pharmaceutical promotion while claiming it influenced their colleagues [5,6]. There have been many initiatives to manage the financial relationships between industry and physicians. For example, the Institute of Medicine published in 2009 the "Conflict of Interest in Medical Research, Education, and Practice" report which included recommendations to addressing those relationship [7]. More recently, the Physician Payment Sunshine Act has required pharmaceutical and medical device companies to publicly report payments to physicians and teaching hospitals, as well as certain ownership interests [8]. In order to develop and implement policies and interventions for better management of interactions [9], we need to understand their extent, as well as physicians' perspectives towards interaction with pharmaceutical companies. Studies of physicians' knowledge, attitudes and beliefs need to measure them using validated tools. The study objectives were to: 1. 
Assess the validity of tools used in surveys of physicians about the extent and nature of their interactions with pharmaceutical companies, and about their knowledge, beliefs and attitudes towards such interactions. 2. Identify and describe survey tools that have been formally validated.

Eligibility criteria

The inclusion criteria for our first objective (assessing the validity of tools used in surveys) were:
• Types of study design: quantitative survey studies;
• Types of participants: practicing physicians; we used no restrictions on country of practice;
• Types of interactions: any form of interaction between physicians and pharmaceutical companies or their representatives;
• Types of outcomes: extent and nature; knowledge, beliefs, attitudes [10].

The exclusion criteria for our first objective were:
• Studies restricted to "residents" or "medical students";
• Studies not published in English.

The inclusion criteria for our second objective (identifying and describing validated tools) were:
• Tools used in one of the studies included under the first objective;
• Tools assessed for criterion validity and/or construct validity (see Appendix 1 for definitions).

We did not include tools assessed only for face and/or content validity, given that these represent perceptions and judgments of experts regarding the content of the tool [11].

Search strategy

We developed a search strategy with the assistance of a medical librarian (Appendix 2). We electronically searched the MEDLINE and EMBASE databases in September 2015. We did not apply any search filter. We also screened the reference lists of included studies and the grey literature (e.g., theses). Last, we searched the Health and Psychosocial Instruments (HaPI) database to screen for indexed validated tools relevant to our study.

Selection of studies

Teams of two reviewers screened the titles and abstracts of the identified studies in duplicate and independently. They then used standardized forms to screen the full texts of the studies judged as potentially eligible by at least one of the reviewers. They compared results and resolved disagreements by discussion, seeking the input of a third reviewer as needed.

Data collection

Teams of two reviewers abstracted data from the included studies. They compared results and resolved disagreements by discussion, seeking the input of a third reviewer when needed. They used pilot-tested standardized data abstraction forms.
• For each study included under objective 1, we noted whether the tool was newly developed (versus a previously developed tool, versus a modified version of a previously developed tool) and whether or not it had been validated.
• For each validated tool included under objective 2, we noted the concepts it measured (e.g., extent, nature, attitudes, beliefs, knowledge); how it was developed; and how it was validated.

Data analysis and synthesis

We calculated the kappa statistic to assess the agreement between reviewers for full-text screening. We summarized the findings in both narrative and tabular formats, as the nature of the data was not amenable to a meta-analysis. We reported the results separately for tools measuring extent and nature, and for tools measuring knowledge, beliefs and attitudes.

Figure 1 shows the results of the search and of study selection. The kappa statistic for full-text screening was high at 0.893. We identified 11 eligible surveys assessing the nature and extent of the interaction, and 47 eligible surveys assessing knowledge, beliefs and attitudes.
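Cohen's kappa corrects raw inter-reviewer agreement for agreement expected by chance. The sketch below shows the computation for two reviewers making include/exclude decisions; the 2 × 2 counts are hypothetical and are not the screening data of this review.

```python
def cohens_kappa(n11, n10, n01, n00):
    """Cohen's kappa for two reviewers making include/exclude decisions.
    n11: both include, n00: both exclude, n10/n01: disagreements."""
    n = n11 + n10 + n01 + n00
    p_o = (n11 + n00) / n                          # observed agreement
    p_yes = ((n11 + n10) / n) * ((n11 + n01) / n)  # chance both include
    p_no  = ((n01 + n00) / n) * ((n10 + n00) / n)  # chance both exclude
    p_e = p_yes + p_no                             # expected agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical full-text screening counts:
print(round(cohens_kappa(n11=52, n10=4, n01=3, n00=101), 3))  # ~0.903
```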
Table 1 shows the 11 included studies and the validity of their tools. Nine studies reported developing a new tool [12-20], and two reported modifying a previously developed "validated" tool [21]. Of these 11 studies, only one used a tool that met our criteria for validity [19]. Of the remaining ten studies, three assessed content validity [18, 20, 21].

Validated survey tools

As stated above, only one tool assessing the nature and extent of the interaction fit our criteria for a 'validated tool' (Table 2). That tool measured the concept of "gift giving". While the report did not provide details about the development of the tool, it described evaluating face validity as well as construct validity using principal component factor analysis of the attitudinal items.

Of the remaining studies, some assessed content validity [20, 24-27] and one evaluated face validity [25].

Validated survey tools

Only three survey tools assessing knowledge, beliefs and attitudes of physicians toward the interaction met our eligibility criteria for a 'validated tool' [2, 22, 23]. Table 2 provides a full description of those tools. In summary:
• Concepts measured: The first tool assessed physicians' attitudes towards pharmaceutical company representatives (PCRs) [22]. The second assessed perspectives on the importance of different "sources of influence" [2]. The third assessed physicians' attitudes toward interacting with PCRs and perceptions of the effect of the value of a gift on a physician's judgment [23].
• Development: The first tool was additionally developed using in-depth interviews of physicians and PCRs to develop a preliminary questionnaire [22]. The second [2] and third [23] survey tools were developed by consulting with experts while developing the questionnaire. Pre-testing was conducted for the first [22] and second [2] tools.
• Types of validity: Construct validity and factorial analysis were reported for the development of all three tools [2, 22, 23].

Discussion

In summary, the purpose of this study was to systematically review the available tools to survey physicians about their interactions with pharmaceutical companies and to identify the validated ones. We identified one validated questionnaire assessing the nature and extent of the interaction, and three validated questionnaires assessing knowledge, beliefs and attitudes of physicians toward the interaction. None of these validated questionnaires were used in more than one survey. Four of 58 surveys on interactions used validated questionnaires.

The major strength of our study is the use of systematic review methodology (including comprehensive search, and duplicate selection and data abstraction processes). Also, our systematic review is the first to assess the validity of tools to survey physicians about their interactions with pharmaceutical companies. One potential limitation is our restriction to physicians in practice. Although one could argue that questionnaires designed to survey residents and students might be informative and applicable to physicians, we wanted our study to be more focused.

Unfortunately, the use of non-validated or poorly validated instruments is not uncommon in health survey research. Indeed, we identified three methodological studies assessing the validity of tools in specific healthcare fields [28-30]. The first study included studies assessing the attitudes of healthcare students and professionals towards patients with physical disability [28].
This study identified 38 eligible surveys, nine of which used validated instruments, and only three of these fit our validity criteria [28]. The second methodological study focused on studies assessing knowledge, perceptions and practices of health care professionals towards alcoholic patients [29]. Out of 21 included studies, the numbers assessed for internal construct validity, external construct validity, predictive validity, internal consistency, and reliability were 8, 15, 1, 7 and 0, respectively.

Conclusion

Policy makers addressing the issue of physicians' interactions with pharmaceutical companies need to be aware of the low quality of the supporting evidence due to the use of non-validated questionnaires (given the bias that could be introduced into the findings). One observation is that the questionnaires in the identified surveys measured different concepts (e.g., aspects of the interaction, the "information" or the "gifts" aspect). This limits the capacity to compare results across studies (e.g., across different countries, or in the same country over time). Our findings highlight the importance of the use by investigators of commonly accepted and validated instruments. These investigators could use our findings to choose a validated questionnaire. Unfortunately the choices are limited, and those investigators may reasonably judge that none of the instruments address exactly their specific research question. Therefore there is a need for research to develop and validate such tools. Investigators also need to adhere to suggested guidelines for reporting survey studies that include recommendations for reporting the extent to which the validity and reliability of the instrument have been established [31]. It would also be ideal for journals to require authors of surveys to adhere to those guidelines.

(Table 1: Ferrari [20]; studies with non-validated tools: Mikhael [34], Indhumati [35], Tabas [36], Loh [27], Thomson [26], Skoglund [25], De Gara [37], Evans [38], Magzoub [39], Saito [40], Alghasham [41], Ellison [42], Ross [43], Burashnikova [44], Fortuna [45], Tobin [46], Morgan [6], Brett [47], Rutledge [48], De Las Cuevas [49], Spiller [50], Jones [51], Figueiras [1], Guldal [52], Gaedeke [53], Gibbons [54], Creyer [55], Gaither [56], Gaither [57], Banks 1992 [58], Hayes [59], Stross [60], Hull [61], Evans [62], Siddiqi [63], Macneill [24], Alssageer [64], Stoddard [65], Sara [66], Oshikoya [67], Othman [68].)

Appendix 1: definitions of types of validity

1. Content validity: refers to evaluation of the items of a measure to determine whether they are representative of the domain that the measure seeks to examine. It is based only on the judgment of experts regarding the content of the items.
2. Face validity: assumes that, when we look at the questions included in a measuring instrument, it appears to measure the concept that it intends to measure. It refers to how users of the instrument perceive it.
3. Criterion validity: establishing a correlation between the measure and an external criterion (a gold standard).
4. Construct validity: the degree of measurement of a theoretical concept, trait, or variable; the way in which our construct behaves or correlates with other related constructs. In other words, it represents a framework of hypothesis testing based on knowledge of the underlying construct.

Madhavan surveyed physicians in West Virginia about their attitudes "surrounding the 'gift relationship' between pharmaceutical companies and physicians".
The questionnaire developed for the purpose of this study consisted of three sections: the first included statements with which physicians would indicate how much they agree or disagree, the second aimed to collect demographic information and physicians' practice information, and the third asked about the amount and value of "gifts" they recently received. Examples from the first section: "pharmaceutical companies give gifts to physicians to influence their prescribing", "it is inappropriate to accept gifts from pharmaceutical companies" and "physician-patient relationship would be improved if the extent of the gift and receiving relationship between pharmaceutical companies and physicians was made public".

McKinney's was the oldest published study (1990), on attitudes of physicians and residents toward their "professional interaction with pharmaceutical sales representatives". The participants were from Minnesota and Wisconsin. The questionnaire they developed had two parts: the first asked the physicians about the potential ethical compromise when receiving gifts from PCRs and the frequency of their contact with the PCRs; the second included statements assessing attitudes towards PCRs, and the physicians reported the extent of their agreement with these statements. Some of these statements were "pharmaceutical representatives provide useful and accurate information about newly introduced drugs", "pharmaceutical representatives should be banned from presentations at this institution", "I would have the same degree of contact with pharmaceutical representatives whether or not promotional gifts were distributed" and "acceptance of promotional items from pharmaceutical representatives has no impact on my prescription behavior".

Andaleeb [22] assessed the attitudes of physicians in Pennsylvania toward PCRs. They formulated the questionnaire's items after reviewing the literature and gathering information from physicians and PCRs directly, and then conducted in-depth interviews to identify elements explaining physicians' attitudes. The preliminary questionnaire was pretested on three physicians and then modified. The final questionnaire included four subgroups, each containing a statement with which physicians would rate their agreement. The subgroup "favor" included "pharmaceutical sales representatives are an asset to my practice"; "support" included "pharmaceutical sales representatives provide me with information that helps me practice better medicine"; "style" included "I feel pharmaceutical sales representatives are always trying to manipulate me to prescribe their company's brands"; and "peers" included "the medical community has a negative attitude toward detail persons".

Fernandez [2] aimed to understand the "opinion of general practitioners on the importance and legitimacy of sources of influence on medical practice". Physicians from two Spanish regions were asked to assess the legitimacy of different strategies and/or groups in influencing medical practice: "financial incentives, politicians of the health field, pharmaceutical industry, scientific organizations, academic institutions and professional associations". Another task was to rate the importance of different change strategies to medical practice: "information provided by pharmaceutical companies' visitors", "attendance at training courses, reading articles and reports" and "existence of financial incentives".
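The construct-validity step described above, principal component factor analysis of attitudinal items, can be made concrete with a small sketch. Everything below is simulated for illustration (sample size, items, and loadings are invented, not taken from any reviewed questionnaire); it only shows the mechanics of checking whether Likert items group into coherent components.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical Likert (1-5) responses from 200 physicians to 6 attitudinal
# items; two latent attitudes drive the answers (items 1-3 vs items 4-6).
n = 200
latent = rng.normal(size=(n, 2))
loadings = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.2],
                     [0.1, 1.0], [0.2, 0.9], [0.0, 0.8]])
raw = latent @ loadings.T + 0.5 * rng.normal(size=(n, 6))
items = np.clip(np.round(3 + raw), 1, 5)          # discretize to a 1-5 scale

# Principal component analysis of the items: components with eigenvalue > 1
# (Kaiser criterion) are candidate constructs; their loadings show which
# items hang together.
pca = PCA().fit(items)
eig = pca.explained_variance_
print("eigenvalues:", np.round(eig, 2))
print("components with eigenvalue > 1:", int((eig > 1).sum()))
print("loadings of first component:", np.round(pca.components_[0], 2))
```

In a real validation exercise one would also rotate the retained components and check that the item groupings match the intended subscales before reporting construct validity.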
Comparing the application of two theoretical frameworks to describe determinants of adverse medical device event reporting: secondary analysis of qualitative interview data

Background: Post-market surveillance of medical devices is reliant on physician reporting of adverse medical device events (AMDEs). Few studies have examined factors that influence whether and how physicians report AMDEs, an essential step in the development of behaviour change interventions. This study was a secondary analysis comparing application of the Theoretical Domains Framework (TDF) and the Tailored Implementation for Chronic Diseases (TICD) framework to identify potential behaviour change interventions that correspond to determinants of AMDE reporting.

Methods: A previous study involving qualitative interviews with Canadian physicians that implant medical devices identified themes reflecting AMDE reporting determinants. In this secondary analysis, themes that emerged from the primary analysis were independently mapped to the TDF and TICD. Determinants and corresponding intervention options arising from both frameworks (and both mappers) were compared.

Results: Both theoretical frameworks were useful for identifying interventions corresponding to behavioural determinants of AMDE reporting. Information or education strategies that provide evidence about AMDEs, and audit and feedback of AMDE data were identified as interventions to target the theme of physician beliefs; improving information systems, and reminder cues, prompts and awards were identified as interventions to address determinants arising from the organization or systems themes; and modifying financial/non-financial incentives and sharing data on outcomes associated with AMDEs were identified as interventions to target device market themes. Numerous operational challenges were encountered in the application of both frameworks, including a lack of clarity about how directly relevant to themes the domains/determinants should be, how many domains/determinants to select, if and how to resolve discrepancies across multiple mappers, and how to choose interventions from among the large number associated with selected domains/determinants.

Conclusions: Given discrepancies in mapping themes to determinants/domains and the resulting interventions offered by the two frameworks, uncertainty remains about how to choose interventions that best match behavioural determinants in a given context. Further research is needed to provide more nuanced guidance on the application of TDF and TICD for a broader audience, which is likely to increase the utility and uptake of these frameworks in practice.

Electronic supplementary material: The online version of this article (10.1186/s12913-018-3251-2) contains supplementary material, which is available to authorized users.

Background

A growing body of research in implementation science has employed classic or implementation science theories or theoretical frameworks to investigate behavioural determinants influencing the use of evidence-based innovations by health care professionals [1]. Given the undesirable prevalence of over-, under- or misuse of innovations and their inconsistent impact on patient outcomes [2], systematic categorization of determinants has been highlighted as a strategy to inform the selection of interventions that best mitigate or address those determinants.
The Theoretical Domains Framework (TDF) [3] and the Tailored Implementation for Chronic Diseases (TICD) checklist [4] are two prominent, validated theoretical frameworks that were rigorously developed based on review of the literature followed by international expert consensus. Both facilitate the design of implementation strategies by identifying one or more interventions that may be appropriate for addressing behavioural determinants. Unfortunately, application of these theoretical frameworks to develop and implement change strategies has proven challenging [5], with an inconsistent impact on health care delivery or patient outcomes [6]. There is a need to improve the selection of behavioural interventions so that they reliably lead to health care improvement. Hence, more insight is needed about the similarities and differences in the content and application of commonly used theoretical frameworks to understand how their use can be optimized when choosing and designing behaviour change strategies.

Previous research has focused on the determinants of implementing practice guidelines, clinical tests or procedures, and quality improvement processes or tools [7,8]. Despite widespread use of medical devices, little attention has been devoted to understanding determinants of the reporting of adverse events associated with their use. Medical devices include a wide range of health or medical instruments essential for the prevention, diagnosis, cure or management of a disease or abnormal physical condition [9]. Those considered higher risk for adverse medical device events (AMDEs) include orthopedic implants such as hip or knee joints and cardiovascular implants such as pacemakers or implantable cardioverter defibrillators [10,11]. AMDEs may result from limitations in device design or function, and account for 10% of patient safety incidents in hospitals [12]. Growing concern about AMDEs has led to calls for greater monitoring of outcomes associated with their use [13]. However, registries are not present in every jurisdiction or for every type of medical device. In the absence of systematic data collection, the identification and sharing of information about AMDEs relies on voluntary reporting by physicians.

To learn about AMDE reporting behaviour, we interviewed 22 Canadian physicians who varied by geographical region and career stage; 10 implanted cardiovascular devices and 12 implanted orthopedic devices [14]. When AMDEs arose, they often developed work-around solutions to continue using the same type of device, or they chose to use other comparable devices available on the market. Some participants said they informally shared information about AMDEs with colleagues or industry representatives, however most did not. Determinants of AMDE reporting were identified at the level of the physician (i.e. beliefs about adverse events, device preferences); organization or system (i.e. lack of hospital, national or international reporting policies, systems or incentives); and the device market (i.e. purchasing group contract obligations) [14].

As invasive health care technologies, the characteristics and uses of higher-risk medical devices differ from those of other innovations such as practice guidelines, clinical procedures, or quality improvement processes or tools. Hence, determinants of their use may also differ, providing a unique context within which to study the application of theoretical frameworks for selecting behavioural interventions.
The purpose of this study was to (1) categorize determinants of AMDE reporting behaviour that emerged in the primary study using the TDF and TICD; (2) systematically identify interventions that could promote and support AMDE reporting; and (3) compare the determinants and interventions identified by the TDF and TICD as a means of exploring how to optimize the use of those theoretical frameworks in behavioural intervention design. At a practical level, study results will identify interventions that are likely to improve AMDE reporting, thereby optimizing the use and outcomes of higher-risk medical devices. Simultaneously, this work will contribute to the implementation science literature by broadening our understanding of the relevance and application of theoretical frameworks in identifying or describing determinants of innovation use, and selecting corresponding behavioural interventions for change.

Study design

AMDE reporting determinants were mapped to the TDF and TICD to compare determinant domains, determinants and corresponding recommended behavioural interventions. The two authors (LD and ARG) independently mapped the determinants using each framework. LD is an implementation scientist with experience in studying the determinants of physician behavior as it relates to prescribing practices [15], the interdisciplinary management of residents in long-term care [16], and the determinants of patient adherence to recommended treatment following a myocardial infarction [17]. ARG is an implementation scientist with extensive experience in studying determinants of the use of innovations including teamwork in cancer diagnostic assessment programs [18], timely triage and referral of trauma patients [19], the surgical safety checklist [20], guidelines [21] and integrated knowledge translation [22]. ARG has also evaluated the use of theory in assessing barriers of innovation use [23] and in planning behavioural interventions to implement guidelines [24]. ARG had employed the TICD to collect or analyze data in previous studies; she was familiar with the TDF but had not applied it in previous work. LD had not previously applied the TICD but had previous training and experience related to the TDF. This study was based on secondary analysis of qualitative data and did not require ethics approval. However, the University Health Network Research Ethics Board provided ethical approval for the qualitative study that generated data upon which this study is based, and participants of the qualitative study had provided written informed consent prior to being interviewed [14].

Implementation frameworks

The TDF includes 84 individual determinants across 14 domains (knowledge; skills; social or professional role and identity; beliefs about capabilities; optimism; beliefs about consequences; reinforcement; intentions; goals; memory, attention and decision processes; environment, context and resources; social influences; emotion; behavioural regulation). These domains, and not the individual determinants within them, are linked with 93 behavioural interventions (referred to as behaviour change techniques) across 16 overarching categories [3]. The TICD includes 57 individual determinants grouped in 7 domains (guideline factors; individual health professional factors; patient factors; professional interactions; incentives and resources; capacity for organizational change; social, political, and legal factors), and links individual determinants with one or more of 116 behavioural interventions [4].
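The linkage structure just described, in which domains (TDF) or individual determinants (TICD) point to one or more interventions, is essentially a lookup table. The toy sketch below uses a handful of entries mentioned elsewhere in the text purely to illustrate that structure; the full frameworks hold 93 and 116 interventions, and these snippets are not a faithful excerpt.

```python
# Toy linkage tables; the structure (key -> list of interventions) is the
# point here, not the content, which is abridged and partly paraphrased.
TDF = {  # domain -> behaviour change techniques
    "Beliefs about consequences": ["comparison of outcomes",
                                   "information about health consequences"],
    "Reinforcement": ["adding rewards", "removing rewards"],
}
TICD = {  # individual determinant -> interventions
    "Health professional cognitions: expected outcome": [
        "information or educational strategies that provide compelling evidence",
        "audit and feedback",
    ],
}

def interventions_for(framework: dict, selected: list) -> set:
    """Union of interventions suggested for the selected domains/determinants."""
    return {i for key in selected for i in framework.get(key, [])}

print(interventions_for(TDF, ["Beliefs about consequences", "Reinforcement"]))
```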
Data collection

AMDE reporting determinants and exemplar quotes that illustrated determinants were acquired from the previously conducted study (Additional file 1) [14]. Methods for the previous study are published elsewhere [14]. In brief, qualitative interviews with physicians that implanted cardiovascular and orthopedic implants were conducted by ARG. Themes reflecting determinants were generated, reviewed and discussed by the entire eight-person research team on four separate occasions to assess thematic saturation, agree upon themes, and interpret data. Themes were organized in the categories of physician beliefs; policies, processes, and systems; and the device market [14].

Data mapping

Mapping of AMDE reporting determinants to the TDF and TICD was independently performed by LD and ARG. To do this, both used the same versions of the TDF [3] and TICD [4] instruments that listed determinant domains, individual determinants (for TICD), and corresponding behavioural interventions. The intent was to undertake naturalistic application of the TDF and TICD that relied solely on the content and guidance provided by the theoretical frameworks themselves. LD and ARG did not review or discuss the content of the TDF or TICD before the independent mapping exercise, nor did they attempt to resolve and reach consensus on discrepancies after mapping. This was an intentional methodological decision to facilitate comparison across mappers using only the frameworks themselves as a guide. AMDE reporting determinants were matched to determinant domains or individual determinants by reading the definitions and examples provided in each framework. LD and ARG each generated a table in which AMDE reporting themes and exemplar quotes were listed along with the TDF and TICD domains or determinants thought to be relevant and reflective of the data.

Data analysis

The two tables reflecting independent mapping were collated to illustrate the TDF and TICD domains or determinants selected by both LD and ARG, and by LD alone and ARG alone. Behavioural interventions corresponding to each domain or individual determinant were extracted from the TDF and TICD and added to the collated table. Domains, determinants and corresponding interventions identified by LD and ARG in the TDF and TICD were enumerated and compared.

Results

Mapping of AMDE reporting themes to TDF and TICD

Table 1 summarizes the TDF domains and Table 2 summarizes the TICD determinants selected by one or both mappers.

All themes were successfully mapped to both frameworks

All AMDE reporting themes (noted in italics throughout the manuscript) were directly and clearly addressed by both frameworks, and therefore mapped to one or more TDF domain and TICD determinant. For example, the theme 'AMDEs were considered unexpected or unavoidable' aligned with the TDF domain of 'Beliefs about consequences' and the theme 'Lack of responsiveness to AMDEs from industry' was readily mapped to the TDF domain of 'Reinforcement'. Similarly, the theme 'No hospital, national or international systems for AMDE reporting' was readily mapped to the TICD determinant 'Incentives and resources: information system' and 'Use of specific devices often determined by purchasing group contract obligations' was mapped to the TICD determinant 'Health professional behaviour: capacity to plan change'.
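Before turning to the breadth of mappings, note that the collation step described under Data analysis above amounts to set operations over the two mappers' selections. A minimal sketch with invented selections follows (the study's actual selections are in Tables 1 and 2):

```python
# Each theme maps to the set of TDF domains chosen by each mapper (toy data,
# not the study's tables).
picks = {
    "No systems for AMDE reporting": {
        "LD":  {"Environmental context and resources", "Knowledge",
                "Behavioural regulation"},
        "ARG": {"Environmental context and resources", "Reinforcement"},
    },
}
for theme, p in picks.items():
    agreed = p["LD"] & p["ARG"]          # selected by both mappers
    print(theme)
    print("  both:", sorted(agreed))
    print("  LD only:", sorted(p["LD"] - agreed))
    print("  ARG only:", sorted(p["ARG"] - agreed))
```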
A range of domains and determinants were identified

AMDE reporting determinants were mapped to multiple domains and determinants, revealing the interplay of multi-level determinants that influence AMDE reporting, in addition to the complexity of applying the TDF and TICD. In part this was because the previous study [14] identified that physician, organizational, system, and market level factors influenced whether and how physicians reported AMDEs. This was compounded by the reality that AMDE reporting themes often mapped to more than one domain or determinant. For example, the theme 'No hospital, national or international systems for AMDE reporting' mapped to 4 different TDF domains (Environmental context and resources, Reinforcement, Knowledge, and Behavioural regulation). The same theme mapped to 5 different TICD domains, representing 9 unique determinants [Incentives and resources (4 determinants): information system, availability of necessary resources, non-financial incentives and disincentives, and quality assurance and patient safety systems; Capacity for organizational change (2 determinants): regulations, rules, and policies, and monitoring and feedback; Health professional …].

Across both mappers, themes relating to physician beliefs were mapped to 4 unique TDF domains, while organizations or systems and device market were each mapped to 5 unique domains. Overall, the TDF identified 7 unique domains across all AMDE reporting themes. Using the TICD, physician beliefs themes were mapped to 3 unique determinants; policies, processes or systems themes were mapped to 14 unique determinants; and device market themes were mapped to 10 unique determinants. Overall, the TICD identified 21 unique determinants across all AMDE reporting themes.

Domains and determinants were convergent across themes

Although AMDE reporting themes were identified at the physician, organization or system, and device market levels, selected domains or determinants were often mapped to multiple themes. For example, the TDF domain 'Beliefs about consequences' was applied across multiple themes pertinent to physician beliefs and device market (Table 1). Similarly, the TICD determinant 'Health professional cognitions: expected outcome' was applied across multiple themes pertinent to physician beliefs and device market (Table 2).

Comparison across mappers

The two mappers differed in the number and domains or determinants matched to AMDE reporting themes, revealing the subjectivity inherent in the mapping process (Tables 1 and 2). For example, both applied the TDF domain 'Environmental context and resources' to the theme 'No hospital, national or international systems for AMDE reporting'. For the same theme ARG also chose the TDF domain 'Reinforcement' and LD also chose the TDF domains 'Knowledge' and 'Behavioural regulation'. For the same theme, both mappers applied the TICD determinants 'Incentives and resources: …'.

Interventions corresponding to TDF domains and TICD determinants

Additional file 2 summarizes the interventions corresponding to TDF domains and TICD determinants selected by one or both mappers.

Many interventions were identified

Both frameworks identified numerous interventions for each AMDE reporting theme. For example, the theme 'AMDEs were considered unexpected or unavoidable' was mapped by both mappers to the TDF domain of 'Beliefs about consequences', for which 23 distinct interventions are suggested across 4 categories (covert learning, comparison of outcomes, natural consequences, and reward and threat).
The same theme was mapped by both LD and ARG to the TICD determinant of 'Health professional cognitions: expected outcome', for which 2 distinct interventions are suggested (information or educational strategies that provide compelling evidence, and audit and feedback). Using the TDF, domains selected by both mappers identified a total of 47 unique intervention options across all themes; this included 23 unique interventions to address physician beliefs, 14 unique for organization or system themes, and 35 for device market themes. Using the TICD, determinants selected by both mappers identified 12 unique intervention options, including 2 unique interventions for physician beliefs, 8 for organization or system themes, and 4 for device market themes.

Convergence of interventions

As was noted previously, selected domains or determinants were often similar across AMDE reporting themes and determinant levels. Hence, interventions recommended by the TDF and TICD were also similar. For example, across themes describing physician beliefs, interventions frequently recommended by TDF included covert learning, comparison of outcomes, natural consequences, and reward and threat. Common interventions recommended by TICD included information or educational strategies that provide compelling evidence or address reasons for disagreement, audit and feedback, and a local consensus process.

Direct relevance of interventions

In some cases, interventions recommended by the TDF and TICD were intuitively linked to the determinant theme. For example, the theme 'Views about cause of AMDEs confounded by multiple factors' was mapped to the TDF domain 'Knowledge', for which 17 interventions were recommended in the categories of feedback and monitoring, shaping knowledge, and natural consequences, all of which reflect knowledge sharing. The same theme was mapped to the TICD determinant of 'Health professional knowledge and skills: domain knowledge', for which 3 interventions were recommended, including change the mix of professional skills; tailor educational strategies; and disseminate new knowledge, again all focused on knowledge sharing. In other cases, the applicability of interventions recommended by the TDF and TICD appeared less direct, perhaps owing to a greater degree of complexity in determinants identified in the primary study. For example, at the device market level, the theme 'Use of specific devices often determined by purchasing group contract obligations' was mapped to the TDF domain of 'Environmental context and resources', for which 14 interventions categorized as antecedents or associations were recommended. These interventions involve restructuring the physical or social environment, or adding or removing prompts or cues, and do not seem to readily address the multi-level restrictions on behaviour of purchasing group contracts. Conversely, mapping the same theme to TICD determinants identified the more granular intervention of improvements in contracts. Similarly, important themes from the predicate study reflecting complex determinants may not have been well-addressed by either TDF or TICD, leading to less than appropriate interventions. For example, the theme 'Views about cause of AMDEs confounded by multiple factors' was mapped to the TDF domain of 'Beliefs about consequences' by both mappers and the TICD domain of 'Health professional cognitions: expected outcome' by both mappers, ultimately leading to 23 corresponding interventions recommended by TDF and 5 recommended by TICD.
All of the interventions address knowledge but none appear to fully recognize the interplay of determinants inherent in this theme.

Comparison across theoretical frameworks

Overall, although a greater number of TICD determinants were applied across themes and mappers compared with TDF domains, the TDF identified many more unique interventions across all themes (47 for domains selected by both mappers plus additional domains selected by one mapper) compared with the TICD (12 interventions for determinants selected by both mappers plus additional determinants selected by one mapper). Several interventions recommended by TDF and TICD were similar in meaning, irrespective of the theme. For example, for the physician beliefs theme 'AMDEs considered expected or unavoidable and not adverse', the TDF intervention of comparison of outcomes was conceptually similar to the TICD intervention of audit and feedback, and the TDF intervention of information about health consequences was similar to the TICD intervention of information or educational strategies that provide compelling evidence. Even when themes were mapped to domains or determinants that were similar in meaning, different interventions were recommended by TDF and TICD in some instances. For example, for the device market theme 'Lack of responsiveness to AMDEs from industry', the TDF interventions (categorized as scheduled consequences) focused on adding or removing rewards, while the TICD interventions (information or educational strategies and audit and feedback) focused on the provision of information.

Table 4 summarizes overall study findings and their implications. Knowledge generated by this study addresses the applied objectives of identifying interventions to stimulate AMDE reporting, and comparing the domains or determinants and interventions identified by mapping AMDE reporting themes to the TDF and TICD.

Implications for practice

Interventions to stimulate AMDE reporting

AMDE reporting themes were mapped by both mappers to several domains and determinants, which identified corresponding interventions common to the TDF and TICD. Information or educational strategies that provide evidence about AMDEs, and audit and feedback of AMDE-related data, were identified as interventions to target physician beliefs; improving information systems, and reminder cues, prompts and awards, were identified to target organization or system themes; and modifying financial/non-financial incentives, and sharing data on outcomes associated with AMDEs, were identified to address device market themes. However, issues and discrepancies in the application of TDF and TICD raise uncertainty about which or how many interventions may be relevant to promote and support AMDE reporting.

Application of the TDF and TICD

Issues revealed by this study include a lack of clarity about how directly relevant domains or determinants should be and therefore which and how many to select; if and how to resolve discrepancies in the selection of domains or determinants across multiple mappers; and how to choose interventions from among the large number associated with selected domains and determinants. Several TDF domains and TICD determinants were relevant, similar in meaning, and selected by both mappers. Convergence within and across TDF and TICD identified a core set of behavioural determinants and corresponding interventions. Thus, both theoretical frameworks were useful for selecting behavioural determinants that matched AMDE reporting themes, and the corresponding interventions.
However, TDF domains and TICD determinants selected independently by both mappers often did not match, and a large number of interventions corresponded to the TDF domains and TICD determinants selected by one or both mappers. Even when themes mapped to TDF domains and TICD determinants with similar definitions, the frameworks often recommended different interventions. TICD recommended interventions that seemed to be more directly applicable to a behaviour such as AMDE reporting, with multi-level determinants, as compared with the TDF. Domains and corresponding interventions in the TDF or TICD did not fully recognize the complex interplay of determinants inherent in some themes; it is unclear if this is because the frameworks are better suited to exploring determinants in some contexts (i.e. adherence with clinical guideline recommendations) and not others (i.e. reporting of AMDEs).

Discrepancies in applying TDF and TICD may be accounted for by distinctions between their content and format. TDF includes determinant domains largely focused on the individual level, while TICD includes determinant domains and determinants spanning multiple levels and, unlike TDF, offers definitions and examples to guide the application of these more granular determinants. Although more TICD determinants were applied compared with TDF domains, TDF recommended a greater number of interventions compared with TICD. While the predicate study did not itself prioritize determinants, neither the TDF nor the TICD prompts users to prioritize among the many potentially applicable domains or interventions as a means of limiting or focusing the number and type of interventions. Overall, uncertainty remains about the optimal way to identify interventions that match behavioural determinants for a given behaviour, and the precision and relevance of those choices.

Discussion

This study was a naturalistic application of the TDF and TICD to identify evidence-based interventions corresponding with known determinants of AMDE reporting and, in so doing, to explore how use of these theoretical frameworks could be optimized. Both TDF and TICD were useful in identifying several interventions that could promote and support AMDE reporting. However, it is uncertain which interventions are the best options given discrepancies in the selection of TDF domains and TICD determinants, and corresponding interventions, across theoretical frameworks and independent mappers. The content and format of TICD (well-defined domains and determinants spanning individual, organizational, system and environmental levels) may make it easier to apply than the TDF for individuals who are not familiar with either framework. Even still, uncertainty remains about how to best apply the frameworks in practice and their precision when used to design behaviour change interventions. Our findings align with previous work highlighting the uncertainty and challenges surrounding the application of theoretical frameworks to design behaviour change interventions. Lipworth et al.
analyzed determinants of the uptake of clinical quality interventions and found that all 14 TDF domains and numerous corresponding interventions were relevant, necessitating a "drilling down" to identify those that were most "contextually salient" [25]. Lawton et al. used the TDF to conduct and analyze the findings of 60 interviews with general practice health care professionals regarding adherence to various clinical recommendations [3]. A wide variety of determinants was identified but it was difficult to "pinpoint which determinants, if targeted by an implementation strategy, would maximize change", underscoring the need for "broader contextual consideration". One potential explanation is the reality that theoretical frameworks do not address causal mechanisms, or how change occurs, which presents a challenge when attempting to identify which intervention(s) are most likely to support improvement [1].

Phillips et al. interviewed 10 health care professionals from six disciplines who used the TDF [26]. Frequently cited challenges experienced when applying it included the time and resources required to use the TDF, lack of clear operational definitions, and overlap between domains. Participants found it difficult, complicated, unwieldy, and subjective to interpret and apply the domains [26]. Birken et al. conducted a systematic review of five protocols and seven studies that used both the TDF and the Consolidated Framework for Implementation Research (CFIR) to examine the rationale for having applied both frameworks [27]. Authors of included studies justified the use of both frameworks by stating that one offered greater insight on determinants and the other on interventions, although which framework offered determinants versus interventions was interchangeable across studies. A conceptual analysis of reasons for the failure of interventions designed based on the TICD offered several explanations, including a potential mismatch of determinants to interventions or a subsequent mismatch of interventions to targeted groups and settings [6]. Thus, our research and that of others reveals uncertainty and challenges in the application of theoretical frameworks to design behaviour change interventions. More recently, a guide to use of the TDF was published [28]. The guide specifies that coding disagreements could be resolved by either consensus among coders or assessment of inter-rater reliability, and, when uncertain about coding, to apply all relevant TDF domains. However, these suggestions do not help users select from among the many potential interventions identified by this approach.

The interpretation and application of these findings may be limited by several factors. Independent mappers made a deliberate decision not to coordinate their interpretation of the TDF and TICD before mapping, nor did they intend to discuss and resolve discrepancies after mapping. The objective was to independently apply the theoretical frameworks specifically to explore the nature of any arising discrepancies as a means of identifying problems that may be encountered by others when employing these tools in implementation planning. Each mapper had differing levels of familiarity with both frameworks, thereby precluding the ability to comment on the nature of discrepancies when those applying the framework have similar levels of experience. As ARG conducted the interviews for the primary study, it is possible her familiarity with the data may have led to a contextual advantage when applying the frameworks.
The challenges and discrepancies encountered when applying the frameworks may be specific to the single case examined, that of determinants of AMDE reporting. Also, the TDF and TICD may be better suited to assessing determinants and corresponding interventions in some contexts more so than others; that could not be determined by this study and will require future research. With respect to selecting determinants and interventions, our research and that of others [3, 6, 25–27] found that the TDF and TICD are useful for fully describing the range of potentially relevant determinants, a task perhaps best done by implementation scientists who are familiar with the constructs and their definitions. This suggests that selecting the most relevant determinants and interventions is likely to benefit from collaboration with stakeholders with context-specific knowledge. Processes such as Intervention Mapping, whereby researchers and health care professionals can jointly choose and design interventions based on the identification and prioritization of determinants, may prove useful for developing and evaluating interventions that are more likely to improve the delivery and outcomes of care [29]. Further research applying the TDF and TICD in specific contexts is needed in order to resolve the differences between them and clarify the circumstances for which each framework is most useful. The critical need remains to make these tools easier to use for a broader audience, and to establish a reliable way to identify which of many potential interventions are likely to successfully address specific determinants. Key considerations include how many independent mappers are needed; what process is needed to resolve discrepancies across mappers; whether intervention design should be based on only those domains or determinants selected by all independent mappers, or on some other combination of domains or determinants identified; and how best to prioritize the selection of potential interventions. Further insight or framework development is also needed to help users address complex determinants, and to prioritize domains and corresponding interventions.

Conclusions

The TDF and TICD were employed to identify behavioural interventions corresponding to determinants of the reporting of AMDEs. Interventions common to both frameworks included information or educational strategies that provide evidence about AMDEs; audit and feedback of AMDE data; improved information systems; reminder cues, prompts and awards; modifying financial/non-financial incentives; and sharing data on outcomes associated with AMDEs. Challenges and discrepancies in the application of frameworks raise uncertainty about which or how many interventions may be relevant to promote and support AMDE reporting. Given the worldwide imperative to promote the use of evidence-based innovations and improve the quality and safety of care, there is an urgent need to make tools such as the TDF and TICD easier to use for a broader audience, and to establish a reliable way to identify which of many potential interventions are likely to successfully address specific determinants. Just as research more broadly has seen a shift from the production and dissemination of evidence to the implementation of evidence, it is time for the field of implementation science to shift from the development of frameworks to supporting their application in practice.
Funding

This study was funded by the Canadian Institutes for Health Research, who took no part in the design of the study; data collection, analysis or interpretation; or in the writing of the manuscript.

Availability of data and materials

All data generated or analysed during this study are included in this published article and its supplementary information files.

Authors' contributions

ARG conceptualized the study and acquired funding; designed the study, collected and analysed data, drafted the manuscript, and gave final approval of the version to be published. LD assisted with study design, collected and analyzed data, drafted the manuscript and gave final approval of the version to be published.

Ethics approval and consent to participate

This study, based on secondary analysis of themes that emerged from a previous study, did not require ethical approval. For the previous study [14], ethical approval was granted by the University Health Network Research Ethics Board and all participants provided written informed consent prior to being interviewed.

Competing interests

The authors declare that they have no competing interests.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author details

1 Women's College Hospital, Toronto, Canada. 2 University Health Network, Toronto, Canada.
Semi-discrete composite solitons in arrays of quadratically nonlinear waveguides

We demonstrate that an array of discrete waveguides on a slab substrate, both featuring the $\chi^{(2)}$ nonlinearity, supports stable solitons composed of discrete and continuous components. Two classes of fundamental composite solitons are identified: ones consisting of a discrete fundamental-frequency (FF) component in the waveguide array, coupled to a continuous second-harmonic (SH) component in the slab waveguide, and solitons with an inverted FF/SH structure. Twisted bound states of the fundamental solitons are found too. In contrast with usual systems, the \emph{intersite-centered} fundamental solitons and bound states with the twisted continuous components are stable in almost the entire domain of their existence.

Quadratically nonlinear ($\chi^{(2)}$) media, continuous or discrete, provide favorable conditions for the creation of optical solitons. The wave-vector mismatch and $\chi^{(2)}$ coefficient are efficient control parameters in this context, and the solitons display a variety of features due to their "multicolor" character. Accordingly, a great effort has been invested in the study of solitons in continuous [1,2] (as reviewed in [3,4]) and (quasi-)discrete [5,6,7,8,9,10,11] $\chi^{(2)}$ media. In both cases, the solitons can find applications to all-optical switching [11,12,13] and light-beam steering [14,15]. Waveguide arrays, i.e., one-dimensional (1D) discrete systems, exhibit properties that are absent in continuous media, such as anomalous or managed diffraction [11]. Accordingly, discrete solitons are drastically different from their counterparts in continuous media, as was first predicted in the context of the $\chi^{(3)}$ nonlinearity [16].

Here we propose semi-discrete composite solitons in $\chi^{(2)}$ optical systems, which contain both discrete and continuous components, each carrying either the fundamental-frequency (FF) or second-harmonic (SH) wave. We demonstrate that stable semi-discrete solitons can be readily formed in a waveguide array coupled to a slab waveguide, both structures being made of a quadratically nonlinear material. We study the two most interesting species of semi-discrete solitons. Type-A ones consist of a discrete FF component in the waveguide array, coupled to a continuous SH component in the slab waveguide. Conversely, type-B solitons feature continuous FF and discrete SH components in the slab and discrete array, respectively.

The proposed setting is displayed in Fig. 1. It includes the periodic array of waveguides, with a spacing $x_0$, mounted on top of (or buried into) the slab waveguide. Both the array and slab are made of a $\chi^{(2)}$ material, such as LiNbO$_3$ or KTiOPO$_4$. A rigorous coupled-mode theory for such composite waveguides can be developed by a straightforward generalization of that available for a single discrete waveguide coupled to the slab [17]. Thus, we arrive at a system including a set of ordinary differential equations for the discrete array, coupled to a partial differential equation for the slab.
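Schematically, a case-A system of the standard semi-discrete $\chi^{(2)}$ type couples the discrete FF amplitudes $\varphi_n$ to a continuous SH field $\Psi(\eta,\zeta)$. The following is a hedged sketch of that structure: the signs, diffraction coefficients, and placement of the mismatch $\beta$ are assumptions chosen only for consistency with the linear-band edges quoted below, and should not be read as the paper's own display equations.

```latex
% Case A (sketch): discrete FF array coupled to a continuous SH slab field.
\begin{align}
i\,\frac{d\varphi_n}{d\zeta} + \varrho\left(\varphi_{n+1} + \varphi_{n-1}\right)
    + \varphi_n^{*}\,\Psi(\eta = n,\zeta) &= 0,\\
i\,\frac{\partial \Psi}{\partial \zeta}
    + \frac{1}{4}\,\frac{\partial^{2} \Psi}{\partial \eta^{2}}
    - \beta\,\Psi + \frac{1}{2}\sum_{n} \varphi_n^{2}\,\delta(\eta - n) &= 0.
\end{align}
% Case B (sketch) interchanges the roles: a continuous FF field \Phi obeys the
% slab equation with source \Phi^{*}\sum_n \psi_n \delta(\eta - n), while the
% discrete SH amplitudes obey
% i\,d\psi_n/d\zeta + \varrho(\psi_{n+1} + \psi_{n-1}) + \beta\,\psi_n
%     + \tfrac{1}{2}\,\Phi^{2}(\eta = n) = 0.
```

With the stationary substitutions used below, the linear bands of the discrete subsystems in this sketch are $\lambda = 2\varrho\cos q$ (case A) and $\lambda = \varrho\cos q + \beta/2$ (case B), consistent with the existence thresholds quoted later for $\varrho = 1$.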
In case A, the coupled equations take a normalized form of the type sketched above, and in case B the FF and SH roles in the array and slab are interchanged. Here, the normalized coordinates are $\zeta = z/z_0$ and $\eta = x/x_0$, where $z$ and $x$ are, respectively, the distances in the propagation and transverse directions, $z_0 = K x_0^2$, $K$ is the propagation constant of the corresponding continuous mode, $\beta$ is the effective wave-vector mismatch, $\delta(\eta)$ is the delta-function, and $\varrho = c_d z_0$ is the normalized coupling constant between adjacent waveguides in the array, where $c_d$ is the coupling constant in physical units, as predicted by the coupled-mode theory [17]. The normalized field amplitudes at the FF, $\varphi_n$ and $\Phi$, and at the SH, $\psi_n$ and $\Psi$, are proportional to their counterparts, $u_n$, $U$ and $v_n$, $V$, measured in physical units; the normalization factors involve the quadratic susceptibility $\chi(\omega,\omega)$ and the fundamental modes $\hat{e}_{n,c}(\omega)$ in cases A and B, respectively, with $P_d$ and $P_c$ the power in the stripe waveguide and the power density in the slab waveguide, $A_d$ and $A_c$ the transverse areas of the stripe waveguide and of the slab waveguide between two adjacent stripes, respectively, and $\hat{e}_n(x,y)$ and $\hat{e}_c(x)$ the fundamental modes of the stripe and slab waveguides. In this paper, we focus on the case of $\varrho = 1$, which adequately represents the generic situation.

The above equations were derived for the case when the FF and SH fields have the same (TE or TM) polarization. Using the general coupled-mode description [18], the system can be extended to the case of two polarizations, which will lead to a type-II $\chi^{(2)}$ interaction [3,4] involving two different FF components. This case will be presented elsewhere. Both variants of the model (A and B) neglect direct FF-SH interactions in the same waveguide, as they are suppressed by the large natural mismatch, while we assume that care is taken to minimize the mode mismatch between the continuous and discrete waveguides. Such requirements can be easily fulfilled, as the geometry gives rise to different propagation constants for the same frequency in the waveguides of the two types. A model which takes into account the residual SH-FF coupling in each waveguide can easily be considered too, but cases A and B, as defined above, are the most interesting ones. Note also that in this geometry the overlap between the FF and SH is smaller, as compared to the case in which FF-SH interactions in the same waveguide are employed.

Composite solitons amount to stationary solutions of systems (1) and (2) in the form of $\varphi_n(\zeta) = \bar{\varphi}_n \exp(i\lambda\zeta)$, $\Psi(\eta,\zeta) = \bar{\Psi}(\eta)\exp(2i\lambda\zeta)$ and $\psi_n(\zeta) = \bar{\psi}_n \exp(2i\lambda\zeta)$, $\Phi(\eta,\zeta) = \bar{\Phi}(\eta)\exp(i\lambda\zeta)$, respectively, where $\lambda$ is the soliton's wavenumber. Inserting these expressions in the underlying equations, we solved the resulting systems by means of the Newton-Raphson method. Similar to ordinary discrete solitons, the composite ones can be odd or even: the former are centered at a site of a discrete waveguide, whereas even solitons are intersite-centered. Typical examples of odd and even composite solitons are displayed for both cases, A and B, in Fig. 2. To examine the stability of the composite solitons, we first applied the Vakhitov-Kolokolov (VK) criterion [19], which predicts that the necessary stability condition is $dP/d\lambda > 0$, where $P$ is the total power [which is the single dynamical invariant of Eqs. (1) and (2)], $P(\lambda) = 2\int_{-\infty}^{+\infty} |\{\Psi,\Phi\}|^{2}\,d\eta + \sum_n |\{\varphi_n,\psi_n\}|^{2}$, in cases A and B, respectively. The result is displayed in Fig. 3,
which shows that (as may be expected) the solitons exist above the band of linear waves of the discrete subsystem, i.e., for $\lambda > 2$ in case A and $\lambda > 1 + \beta/2$ in case B, and both odd and even solitons are VK-stable in most of their existence domain. The prediction is significant, as even solitons are always unstable in ordinary discrete systems. Note that the power of odd solitons is slightly smaller than that of even ones. Full stability of the solitons was examined in direct simulations of Eqs. (1) and (2), which completely corroborate the predictions of the VK criterion. In particular, those composite solitons of the B-type which are VK-unstable as per Fig. 3 decay into linear waves.

We have also studied the twisted solitons, built as out-of-phase bound states of two odd ones; see examples of A- and B-type solitons with a twisted FF component in Fig. 4. Their stability was also examined by means of both the VK criterion (see Fig. 4) and direct simulations. The results demonstrate that the semi-discrete twisted solitons exist if their wavenumber $\lambda$ exceeds a cut-off value, which depends on the mismatch $\beta$, and they are stable in almost the entire existence domain. By contrast, in continuous $\chi^{(2)}$ media twisted solitons are always unstable [3,4]. In a small region near the cut-off the twisted composite solitons are unstable too. The cut-off being well separated from the band of linear waves, in simulations the unstable twisted solitons do not decay into linear waves, but rather evolve into a fundamental odd soliton or split into two such solitons.

To summarize, we have shown that stable composite solitons, fundamental and twisted ones, can be supported by a slab waveguide coupled to an array of waveguides, both made of a $\chi^{(2)}$ material. This new species of semi-discrete solitons features unusual properties, viz., stability of intersite-centered fundamental solitons and of states with the twisted continuous component. One can envisage that this system may also support walking [20] semi-discrete solitons, and that similar solitons can be found in dual discrete-continuous systems with the Kerr nonlinearity, as well as in multidimensional $\chi^{(2)}$ systems.

This work has been supported by the NSF Grant No. ECS-0523386 and DoD STTR Grant No. FA9550-04-C-0022.
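For intuition only, here is a minimal numerical sketch of the stationary case-A problem under the assumed schematic form above. The slab field is put on a fine grid, the delta comb becomes a Kronecker source of weight 1/h, and the resulting algebraic system is handed to a Newton-type root finder (scipy's hybrid solver stands in for the paper's Newton-Raphson iteration). Grid sizes, parameters, and the initial guess are illustrative, and whether the iteration lands on a genuine soliton or on the trivial zero state depends on that guess.

```python
import numpy as np
from scipy.optimize import fsolve

# Stationary case-A system under the assumed form sketched earlier:
#   -lam*phi_n + rho*(phi_{n+1} + phi_{n-1}) + phi_n*Psi(eta=n)        = 0
#   -(2*lam + beta)*Psi + (1/4)*Psi'' + (1/2)*sum_n phi_n^2 d(eta - n) = 0
rho, beta, lam = 1.0, 0.0, 3.0          # coupling, mismatch, soliton wavenumber
N, h = 41, 0.1                          # array sites at eta = -20..20, slab step
eta = np.linspace(-20.0, 20.0, 401)
site = np.round((np.arange(N) - 20.0 - eta[0]) / h).astype(int)

def residual(u):
    phi, psi = u[:N], u[N:]
    r_phi = -lam * phi + rho * (np.roll(phi, 1) + np.roll(phi, -1)) + phi * psi[site]
    lap = (np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)) / h**2   # periodic ends
    src = np.zeros_like(psi)
    src[site] = 0.5 * phi**2 / h        # delta comb -> Kronecker source / h
    r_psi = -(2.0 * lam + beta) * psi + 0.25 * lap + src
    return np.concatenate([r_phi, r_psi])

# Odd (onsite-centered) initial guess: one excited site plus a weak SH hump.
u0 = np.zeros(N + eta.size)
u0[N // 2] = 2.0
u0[N:] = np.exp(-0.5 * eta**2)
u = fsolve(residual, u0, xtol=1e-10)
phi, psi = u[:N], u[N:]

# Total power P(lambda) as defined in the text; the VK criterion then asks
# whether dP/dlambda > 0 along the family obtained by continuation in lambda.
P = 2.0 * np.trapz(psi**2, eta) + np.sum(phi**2)
print(f"residual norm = {np.linalg.norm(residual(u)):.2e}, P = {P:.4f}")
```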
Diagnostic accuracy of SARS-CoV-2 rapid antigen test from self-collected anterior nasal swabs in children compared to rapid antigen test and RT-PCR from nasopharyngeal swabs collected by healthcare workers: A multicentric prospective study

Testing for SARS-CoV-2 is central to COVID-19 management. Rapid antigen tests from self-collected anterior nasal swabs (SCANS-RAT) are often used in children but their performance has not been assessed in real life. We aimed to compare this testing method to the two methods usually used: reverse transcription polymerase chain reaction from nasopharyngeal swabs collected by healthcare workers (HCW-PCR) and rapid antigen test from nasopharyngeal swabs collected by healthcare workers (HCW-RAT), estimating the accuracy and acceptance in a pediatric real-life study. From September 2021 to January 2022, we performed a manufacturer-independent cross-sectional, prospective, multicenter study involving 74 pediatric ambulatory centers and 5 emergency units throughout France. Children ≥6 months to 15 years old with suggestive symptoms of COVID-19 or children in contact with a COVID-19–positive patient were prospectively enrolled. We included 836 children (median age 4 years), of whom 774 (92.6%) were symptomatic. The comparators were HCW-PCR for 267 children and HCW-RAT for 593 children. The sensitivity of the SCANS-RAT test compared to HCW-RAT was 91.3% (95%CI 82.8; 96.4). Sensitivity was 70.4% (95%CI 59.2; 80.0) compared to all HCW-PCR and 84.6% (95%CI 71.9; 93.1) when considering cycle threshold <33. The specificity was always >97%. Among children aged ≥6 years, 90.9% of SCANS-RAT were self-collected without adult intervention. On an appreciation rating (from 1, very pleasant, to 10, very unpleasant), 77.9% of children chose a score ≤3. SCANS-RAT have good sensitivity and specificity and are well accepted by children. A repeated screening strategy using these tests can play a major role in controlling the pandemic.

Introduction

Following the successive COVID-19 waves due to several SARS-CoV-2 variants, in many countries healthcare authorities implemented non-pharmaceutical interventions and large-scale testing strategies (1,2). Two methods were mainly used without distinction in France: reverse transcription polymerase chain reaction from nasopharyngeal swabs collected by healthcare workers (HCW-PCR), and rapid antigen test from nasopharyngeal swabs collected by healthcare workers (HCW-RAT), in addition to immunization programs (3). In 2021, 48.8% of the 168 million tests recorded in the French national database were HCW-RAT (4).
While HCW-RAT has lower analytical sensitivity than HCW-PCR, this method is highly specific, inexpensive, and provides results in minutes. Testing for SARS-CoV-2 is central to COVID-19 management and essential to detect people who are likely infectious, helping to implement control measures (5). For SARS-CoV-2 testing and screening, especially in children, rapid antigen test from self-collected anterior nasal swabs (SCANS-RAT) could be a useful tool (5). The duration of the pandemic and the frequency with which testing must be done to limit infectiousness, particularly in schools, mean that the less invasive, less painful and less unpleasant tests should be used, to avoid poor acceptance by children and families.

Abbreviations: Ct, cycle threshold; HCW-RAT, rapid antigen test from nasopharyngeal swabs collected by healthcare workers; HCW-PCR, reverse transcription polymerase chain reaction from nasopharyngeal swabs collected by healthcare workers; SCANS-RAT, rapid antigen test from self-collected anterior nasal swabs.

The diagnostic accuracy of HCW-RAT for diagnosing SARS-CoV-2 infection in children has been assessed in several studies and was the subject of a meta-analysis (6). No test included fully satisfied the performance requirements recommended by the World Health Organization, and the diagnostic accuracy of the HCW-RAT under real-life conditions varied broadly (6). A recent French study in an emergency department found good sensitivity of the HCW-RAT in real life for symptomatic children, and when focused on high viral load, the sensitivity was excellent (7). Nasopharyngeal swabbing compared to other upper-respiratory sampling methods, including oropharyngeal swab, appeared to be superior in a pediatric study finding a significantly higher positivity rate and a significantly higher viral load on nasal samples (8). In adults, the diagnostic accuracy of SCANS-RAT was assessed in several studies, but relatively few patients were enrolled (9–12). Millions of SCANS-RAT are used each day worldwide, but to our knowledge, no study has assessed their performance in real life in children. This study compared SCANS-RAT to HCW-PCR and HCW-RAT, estimating the accuracy and acceptance, in a pediatric real-life study.

Methods

From September 10, 2021, to January 29, 2022, the Association Clinique et Thérapeutique Infantile du Val de Marne (ACTIV) network conducted a manufacturer-independent cross-sectional, prospective, multicenter study involving 74 pediatric ambulatory centers (see the Acknowledgments section) and 5 emergency units (Jean-Verdier hospital in Bondy, intercommunal hospital of Créteil, Princess Grace hospital in Monaco, Carémeau hospital in Nîmes, and Versailles hospital in Le Chesnay) throughout France. Children ≥6 months to 15 years old with suggestive symptoms of COVID-19 or children in contact with a COVID-19–positive patient were prospectively enrolled. Healthcare workers collected nasopharyngeal swabs to perform a rapid antigen test during the medical visit. Reverse transcription polymerase chain reaction specimens were collected either during the medical visit or with a medical prescription in a laboratory. Swabs were performed as recommended in international guidelines (13). Ambulatory and hospital virology laboratories analyzed the HCW-PCR specimens according to the French National Reference Center recommendations (14). At the same time, SCANS-RAT was offered to children in the pediatrician office or emergency department.
After oral instructions from an adult (parents or pediatricians), children self-collected the nasal specimen from both nares. Adults could help to perform the test when the children were not able to perform the swabbing alone. The test used was COVID-VIRO ALL IN® (AAZ-LMB, Boulogne Billancourt, France), which has a short soft sponge sampling part (1.5 cm). The recommended sampling duration was 30 s (15 s per nostril) (15). Children were asked to rate the SCANS-RAT from 1, very pleasant, to 10, very unpleasant. After informing the parents of the participating children about the study, an electronic case report form in a secure database was prospectively completed by the pediatrician. Any child or parent had the right to object to the data collection for this study. The diagnostic accuracy of the SCANS-RAT was compared with that of HCW-PCR and/or HCW-RAT. According to the spread of different variants in France, we defined 2 periods: period 1, when the Delta variant was predominant and Omicron not yet or poorly isolated in France (from September 10, 2021 to December 19, 2021), and period 2, when the Omicron variant was spreading and became predominant (i.e., >50%, from December 20, 2021 to January 29, 2022) (16). We performed an ad-hoc subgroup analysis on children who had a HCW-PCR with Ct <33 and with Ct <30. Data were entered by using an electronic case report form (PHP/MySQL) and were analyzed by using Stata/SE v15 (StataCorp, College Station, TX, USA). Quantitative data were compared by Student t test and categorical data by chi-squared or Fisher exact test.

Discussion

To our knowledge, this is the first large prospective study assessing in real life the diagnostic accuracy of an SCANS-RAT in a pediatric population. Studies assessing SCANS-RAT diagnostic accuracy have involved only adults (9–12). As compared with the HCW-RAT, for which a positive test result is often associated with live viral culture (17), the diagnostic accuracy of the SCANS-RAT is similar in both sensitivity and specificity. If we consider all positive tests independent of Ct number, as compared with HCW-PCR, the SCANS-RAT had excellent specificity but relatively moderate sensitivity, below the minimum performance requirements recommended by the World Health Organization (6). However, HCW-PCR have been reported to remain positive up to 5 weeks after infection while live virus is usually isolable only during the first week (5). Thus, HCW-PCR without Ct may not be the gold standard to detect contagious patients. There is a continuous relation between Ct and viral culture, with a 33% reduction of the odds of live viral culture for each 1-unit increase in Ct (18). Several thresholds have been proposed (18). In France, tests with Ct ≤33 are reported as "positive" whereas tests with Ct >33 are reported as "weak positive" (19). In our study, if we consider only patients with Ct <33, which suggests a high viral load, the sensitivity was good [84.6% (95%CI 71.9; 93.1)]. Viral load is an important determinant of disease transmission, which is a critical parameter for implementing control measures and disease modeling (20,21). The purpose of the SCANS-RAT is more to detect the most infectious patients than to accurately diagnose COVID-19. Rapid antigen tests have multiple advantages: suitability, speed of the results and cost. Furthermore, tests from anterior nasal swabs are suitable for repeated tests in children.
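As a side note on the accuracy figures quoted above, sensitivity and specificity with exact (Clopper-Pearson) 95% confidence intervals can be computed directly from a 2x2 comparison table. The counts below are illustrative only, chosen to roughly reproduce the quoted 91.3% sensitivity; they are not the study's data.

```python
from scipy.stats import binomtest

# Illustrative 2x2 counts: index test = SCANS-RAT, reference = HCW-RAT.
tp, fn, fp, tn = 63, 6, 5, 519          # hypothetical, ~91.3% sensitivity

def prop_ci(successes, total):
    ci = binomtest(successes, total).proportion_ci(confidence_level=0.95,
                                                   method="exact")
    return successes / total, ci.low, ci.high

for name, (k, n) in {"sensitivity": (tp, tp + fn),
                     "specificity": (tn, tn + fp)}.items():
    p, lo, hi = prop_ci(k, n)
    print(f"{name} = {p:.1%} (95% CI {lo:.1%}; {hi:.1%})")
```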
Indeed, during the successive epidemic waves, children had to undergo many tests, sometimes within a short period of a few weeks, and good acceptability is a crucial goal: the lower sensitivity of individual tests can be compensated for by frequency of testing and wider dissemination of tests. Because children show substantially reduced mortality from COVID-19, entry screening for schools might require a compromise that balances resources and sensitivity against testing as many individuals as possible. Because of the high specificity, the risk of a false positive result due to repeated SCANS-RAT is low. The use of tests from self-collected anterior nasal swabs, rather than nasopharyngeal swabs collected by healthcare workers, is the first step toward a successful large-scale testing strategy allowing for widespread school opening. Repeated use of the SCANS-RAT can contribute to a wider opening of schools, with expected benefits for the mental and physical health of children (22). In the United States, many schools offered free COVID-19 tests (23). In France, in early January 2022, with the Omicron wave, the testing strategy for children at school was difficult to perform: 3 tests in 5 days (24). Indeed, in this context, even at the cost of slightly lower sensitivity, it appears crucial to have very good acceptance of the tests in children, allowing wide use within families without healthcare worker support. Of note, the sensitivity of the SCANS-RAT compared to HCW-PCR and HCW-RAT did not change significantly between the Delta and Omicron periods. Similar results were recently reported in a study of a mainly adult population (25).

Our study has some limitations. First, we did not use centralized reverse transcription polymerase chain reaction performed by centralized high-complexity laboratories, and the Ct number was available for only three-quarters of SARS-CoV-2-positive patients. However, this limitation is also a strength of our real-life study: we compared the SCANS-RAT with the methods used in real life. Second, most children in our study were symptomatic (92.7%), and we cannot extrapolate our results to screening in asymptomatic children. However, the accuracy of the HCW-RAT and contagiousness are believed to be mainly driven by the viral load, and the SCANS-RAT used in our study had sensitivity as good as HCW-PCR with low Ct (5). Third, for some children, HCW-RAT and HCW-PCR were not performed on the same day. However, the majority of HCW-PCR were performed in symptomatic children in the first few days after symptom onset. This corresponds to a period when children have high viral loads, with Ct < 30 (26).

[Figure: Rapid antigen test from self-collected anterior nasal swabs (self-test) results according to HCW-PCR viral load.]

In conclusion, the anterior nasal self-collected test used in this study seems reliable and suitable, allowing the detection of infectious children. A repeated screening strategy using SCANS-RAT can play a major role in controlling the pandemic.

Data availability statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Ethics statement

The study protocol was approved by an Ethics Committee (Centre Hospitalier Intercommunal de Créteil, France) and was registered at ClinicalTrials.gov: NCT0441231. Legal guardians and children were informed with a written non-opposition form.
Written informed consent was not required to participate in this study, in accordance with the national legislation and the institutional requirements.

Author contributions

RC, SB, CJ, and CL designed the study. CA, AF, FC-S, OR, AA, CB, and BV acquired the study data. RC, AR, SB, and CL drafted the initial manuscript and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All authors analyzed the data, critically revised the manuscript for important intellectual content, and provided approval for publication of the content. All authors contributed to the article and approved the submitted version.

Funding

This study was self-funded by ACTIV, including the purchase of the tests.
Comparison of Alternative Pulpwood Inventory Strategies and Machine Systems at a Log-Yard Using Simulations

The rising throughput of log-yards imposes new constraints on existing equipment and increases the complexity of delivering an optimal and uninterrupted supply of pulpwood to pulp mills. To find ways of addressing these problems by reducing log cycle times, this work uses a discrete-event model to simulate operations at a log-yard and study the impact of three different log-yard inventory strategies and two alternative machine systems for log transportation between the main log-yard and buffer storage. The yard's existing last-in, first-out inventory strategy limits access to older logs at the main storage site. By allocating space for 89,000 m³ and 99,000 m³ of pulpwood at the buffer storage, it is possible to keep the log cycle time at the main storage to a maximum of 12 and 6 months, respectively. Additionally, the use of an alternative log transportation machine system, comprising a material handler with a trailer, increased work-time capacity utilization relative to the yard's current machine system of two shuttle trucks and a material handler for transporting logs between the main and buffer storage areas. Compared with the currently used last-in, first-out inventory strategy, purposely emptying the main storage area once or twice per year reduced the total work time of both machine systems by 14% and 30%, respectively. Consequently, the volume delivered from the buffer to the log-yard decreased on average by 17% and 37% when emptying the main storage area once and twice per year, respectively. Even with the reduced work time when emptying the main storage area, both machine systems could fulfil the given workload for transporting logs from the buffer storage to the main log-yard without interrupting the log-yard's operations.

Introduction

Sweden is a large pulp producer, accounting for 8.8% of the world's total pulp production in 2016 [1]. Over the last 16 years, the number of operational pulp mills in Sweden has decreased from 45 to 40, while the average production capacity per mill has increased by 27%, reaching 320,000 t of pulp per mill [2]. These developments present significant challenges for supply chain logistics because of the need to ensure uninterrupted production at the mills while maintaining a competitive price for the delivered pulpwood as the supply area increases. Introducing intermediate terminals between the forest and the mill is one way to cope with the increased supply area and uncertainty in inventory keeping at the mill's log-yard [3][4][5]. However, extra material handling steps at terminals increase the overall cost of the delivered wood [5,6]. An alternative way of maintaining a pulpwood inventory level capable of supporting mill operations is to have a large log-yard at the mill's gate that can accommodate the incoming pulpwood volume. The yard studied here experiences a lock-in effect in its biggest storage area. The yard therefore uses a nearby buffer storage area when its main storage capacity is exhausted and to reduce the incidence of lock-in at the main yard. The yard's infrastructure does not allow the existing log-stackers to leave the main log-yard area, so an external machine park is used to move logs from the buffer area to the main yard. The company's interest is to explore alternative inventory-keeping strategies and to find out whether its operations could be improved by introducing a new type of machine to move logs between the buffer area and the main yard.
Therefore, the first objective was to evaluate whether a new machine system can fulfil the workload requirements of moving logs between the buffer area and the main log-yard area. The performance of the existing and new machine systems was evaluated under three different inventory management strategies at the main log-yard. The second objective was to analyse the effect of the three inventory management strategies on the main yard's log cycle time and their effect on the storage capacity required in the buffer area.

Materials and Methods

The analysis presented here is based on data supplied by the company that owns the studied log-yard, Domsjö Fiber AB. These data include detailed information about when trucks, trains and ships arrived at the yard and the volume of logs they carried. Additionally, short time studies were conducted to observe the log-stackers' work cycles and thereby better understand their productivity and work logic. Comparatively little data were available on other aspects of the yard's operations because access was limited and some contractors were unwilling to be observed while working at the yard. Therefore, information on the log-yard's storage capacity and storage areas, the decision-making process at the yard, and the pulp mill's daily volume requirements was obtained via interviews with log-yard and logistics managers.

The base model and the analysis of company data on which this study is based were described by Kons et al. [12]. The company's yearly data were divided into 52 data sets, of which half were used for analysis and model building. The model's parameters were set per week because the company has a weekly pulpwood demand. The simulated results represent the yard's operations over a year, and the simulations focus on the dynamics of the log inventory levels at the yard. The yard receives log deliveries via three modes of transport (trucks, trains, and ships), each of which has a different carrying capacity. Because the delivery volumes of each mode differ greatly, they were normalized against the volume supplied by a single truck [12]. That is to say, the delivery volumes of trains and ships were expressed in terms of the carrying capacity of a truck; each train and ship delivery was represented as 36 and 132 trucks arriving simultaneously, respectively. Normalization was necessary to enable realistic analysis of the work of log-stackers and other machines at the yard, and to allow for work interruptions when working on volumes delivered by train or by ship; unloading a train or ship takes several hours, whereas a single truck can be unloaded in minutes [12].

Stage 1

The processing of logs can be divided into two stages (Figure 1). Stage 1 covers the unloading of logs delivered to the yard by truck, train, and ship. Deliveries may arrive on five days of the week. The company's data indicate that approximately one ship and one train arrive each week. Since there were insufficient data to generate a probability density function (PDF) to model train and ship arrivals, the inter-arrival interval (dt) for all modes of transport was based on the truck data and was modelled using a generalized Pareto probability density function (1), as described by Kons et al. [12]. The company's pulpwood demand rose to 22,600 m³/week in 2018; it was assumed that this would leave the shape of the dt PDF unchanged but shorten the dt interval.
The PDF has the form

$$f(x_1) = \frac{1}{\sigma_1}\left(1 + \frac{k_1 (x_1 - \theta_1)}{\sigma_1}\right)^{-\frac{1}{k_1}-1} \tag{1}$$

where x_1 is the input value, k_1 is the shape parameter (0.3080), σ_1 is the scale parameter (0.00320) and θ_1 is the threshold parameter (0.00361). These values were derived by least squares regression. The company's data indicate that the probabilities of arrival of trucks, trains, and ships are 99.2%, 0.3% and 0.5%, respectively. The model was configured such that if a train or ship arrived during a given week, no further arrivals by the same mode of transport could occur during that week. The quantity dt is based on the yard's working hours and the fact that it is open for five days per week, including nights. Transport units generated during weekends are therefore removed before entering the waiting queue for unloading.

After the volume of one truck load (ca. 38.9 m³) was determined using a weighted PDF based on a normal distribution (with a mean of 45.4302 m³ and a standard deviation of 4.5819 m³) and a uniform distribution (with lower and upper bounds of 1 m³ and 30 m³, respectively), the total normalized delivery volumes for the other modes of transport were determined against those for trucks. The weights of the normal and uniform components were 0.78 and 0.22, respectively [12]. Weighted PDFs were needed to account for split truckloads, in which only part of the load is delivered to the pulp mill; such deliveries accounted for 22% of the volume supplied to the mill. The weighted PDF is given by (2a):

$$V(x) = W_1 V_1^2(x_3) + W_2 V_1^1(x_2) \tag{2a}$$

where W_1 and W_2 are the weights, 22% and 78% according to the data, and V_1^1, V_1^2 are the normal and uniform PDFs defined as (2b) and (2c):

$$V_1^1(x_2) = \frac{1}{\sigma_1\sqrt{2\pi}}\exp\!\left(-\frac{(x_2-\mu_1)^2}{2\sigma_1^2}\right) \tag{2b}$$

$$V_1^2(x_3) = \frac{1}{b_1 - a_1}, \quad a_1 \le x_3 \le b_1 \ \text{(and 0 otherwise)} \tag{2c}$$

where x_2 and x_3 are the input values, µ_1 is the mean value (45.4302), σ_1 is the standard deviation (4.5819), a_1 is the lower end point (minimum 1), and b_1 is the upper end point (maximum 30).

Once a transport unit arrives, it enters a waiting queue for unloading by one of the four log-stackers operating at the log-yard. The number of working log-stackers is decided based on the queue length of waiting transport units. If the waiting queue is shorter than three units, only one log-stacker is active. If the queue is between three and five units, an extra log-stacker is called out. If the queue grows to five to seven units, a third log-stacker is called out. If the queue becomes longer than seven transport units, all four log-stackers work simultaneously. The time taken to unload logs is described by a normally distributed PDF with a mean of 319 s and a standard deviation of 75 s.

The log-yard has four distinct storage areas (A, B, C, and D) whose maximum storage capacities are, respectively, 36,000 m³, 18,000 m³, 5000 m³, and unlimited. Of these four storage areas, A, B and C are considered the main log-yard, while area D is buffer storage scattered around a larger nearby area within the pulp mill territory. All four storage areas operate on the LIFO principle, meaning that the last log load to arrive is the first to leave the storage area. Volume delivered by truck is placed in storage area A, while deliveries by train and ship are placed in storage areas B and C, respectively (Figure 2). Area D provides buffer storage capacity that is used when areas A, B, or C are full. If storage area A is full when a truck delivery arrives but the inventory level at area B is below 16,000 m³, the logs in the delivery will be unloaded at area B. However, if the inventory level at area B is above 16,000 m³, the logs will be unloaded at the buffer storage area (D).
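To make the Stage 1 logic concrete, the sketch below samples inter-arrival times from the generalized Pareto PDF (1), draws truck-load volumes from the normal/uniform mixture (2a)-(2c), and applies the queue-based log-stacker dispatch rule. It is a minimal illustration in SciPy rather than the authors' MATLAB/Simulink implementation; parameter values are taken from the text, and the boundary cases of the dispatch rule follow one reading of the overlapping ranges quoted above.

```python
import numpy as np
from scipy.stats import genpareto, norm, uniform

rng = np.random.default_rng(1)

# Inter-arrival interval dt, Eq. (1): shape k1, scale sigma1, threshold theta1.
k1, sigma1, theta1 = 0.3080, 0.00320, 0.00361
dt = genpareto.rvs(c=k1, loc=theta1, scale=sigma1, size=10, random_state=rng)

# Truck-load volume, Eqs. (2a)-(2c): 78% normal, 22% uniform mixture,
# the uniform component representing split truckloads.
def truck_load():
    if rng.random() < 0.78:
        return norm.rvs(loc=45.4302, scale=4.5819, random_state=rng)
    return uniform.rvs(loc=1, scale=30 - 1, random_state=rng)

# Queue-based dispatch of the four log-stackers.
def active_stackers(queue_len):
    if queue_len < 3:
        return 1
    if queue_len <= 5:
        return 2
    if queue_len <= 7:
        return 3
    return 4

print(dt.round(4), round(truck_load(), 1), active_stackers(6))
```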
Logs that would normally be placed in storage areas B or C are redirected to area D if the inventory levels at those areas are above 18,000 m³ and 5000 m³, respectively. Stage 1 ends when the logs are unloaded in one of the four storage areas.

Stage 2

In stage 2, following [28], to ensure that the log-yard is not empty, the four storage areas are pre-loaded at the start of the simulation in such a way that the total volume of logs stored at the yard is at least 70,000-75,000 m³, in accordance with the company's minimum objectives. Storage areas A, B, C and D are pre-loaded with 900, 400, 120, and 400 normalized truck units, respectively, corresponding to stored volumes of approximately 35,000 m³, 15,500 m³, 4660 m³ and 15,500 m³, respectively. The log-stackers used in Stage 1 are also used in Stage 2 to move logs from the main log-yard to the debarking drum. Each log-stacker's work time is restricted to 18 h per day for four days and 12 h for one day. During weekends, the log-stackers do not work at all. Given these constraints, the time taken for a log-stacker to move one unit of logs at a rate compatible with the pulp mill's weekly pulpwood demand of 22,600 m³ can be described using a normal PDF with a mean of 513 s and a standard deviation of 154 s. Because of infrastructural constraints, log-stackers can only work in storage areas A, B, and C. This gives rise to two questions relating to the management of inventory levels in the four storage areas and the machine systems used to move logs from buffer storage area D back to the main log-yard. For supply security reasons, log-yard managers want to ensure that the total inventory level at all four storage areas does not fall much below 70,000 m³. Therefore, areas A, B and C must be filled to capacity at all times. However, managers also want the stored logs to be reasonably fresh. Unfortunately, the storage of logs in piles forces the yard to operate according to the LIFO principle, which creates a "lock-in" effect whereby the logs that arrive at the yard first are "trapped" at the bottom of the log stacks. To minimize the adverse impact of lock-in on pulpwood quality, the logs can be redistributed to the buffer storage area (D) periodically, allowing the main storage areas to be emptied. This reduces the overall log cycle time.

Inventory Strategies

Storage area A is the biggest and the most important part of the log-yard. Because of its high number of incoming trucks and the fact that the yard operates on the LIFO principle, there is a particularly high risk that logs will be locked in for long periods of time in this area. To address this problem, three different log-yard inventory strategies are compared in order to determine the effect of periodically emptying storage area A on the buffer storage and on the machine system transporting logs back to the log-yard. The log-yard's managers have little long-term control over what is delivered or when it arrives, so the main way they can control inventory levels in the log-yard is by deciding how much volume should be removed and brought to the pulp mill from each storage area. Strategy I is the "Business As Usual" case, which represents the yard's current approach (Table 1). In this case, the log-stackers visit each storage area at a frequency that keeps the inventory levels at each area in relative balance and minimizes the need to take logs to the buffer area. The relative frequencies at which log-stackers visit areas A, B, and C are 57%, 8%, and 35%, respectively.
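The unloading destinations and the Stage 2 outflow described above amount to a handful of threshold rules. The sketch below encodes the delivery routing and the Strategy I stacker visit frequencies as stated in the text; all names are illustrative, and the 513 s service-time PDF is sampled with NumPy.

```python
import numpy as np

rng = np.random.default_rng(7)
CAP = {"A": 36000, "B": 18000, "C": 5000}  # m3; area D is unlimited

def route_truck(level):
    """Destination of a truck delivery given current inventory levels."""
    if level["A"] < CAP["A"]:
        return "A"
    return "B" if level["B"] < 16000 else "D"

def route_train_or_ship(mode, level):
    """Trains go to B and ships to C, overflowing to buffer area D."""
    main = "B" if mode == "train" else "C"
    return main if level[main] < CAP[main] else "D"

def stage2_trip():
    """One Strategy I stacker trip: pick an area, draw a service time."""
    area = rng.choice(["A", "B", "C"], p=[0.57, 0.08, 0.35])
    service_s = max(rng.normal(513, 154), 0.0)  # truncate at zero
    return area, service_s

print(route_truck({"A": 36000, "B": 12000, "C": 800}), stage2_trip())
```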
However, this working pattern creates a high risk of lock-in at the biggest storage area (A) because the continuous arrival of new logs limits access to the older logs in the log stack at the back of the storage. Under Strategy II, lock-in at storage area A is alleviated by emptying area A once per year. During this process, the logs arriving for storage A are temporarily redirected to the buffer storage area (D). While this process is ongoing, 5% of the incoming volume that would normally be delivered to area A is directly redirected to area D in order to reduce the time needed to empty area A. Under this strategy, the relative frequency at which log-stackers visit storage area A is overwritten from day 116 onwards if the inventory level at area A is above 5000 m³. This is maintained until the inventory level in area A falls to 5000 m³, which represents less than one day's demand from the pulp mill. At this point, it is assumed that all of the remaining logs in area A can be removed while new log stacks are built, and so the storage area is emptied. Therefore, when the inventory level in storage area A reaches 5000 m³, the relative frequency at which log-stackers visit area A is restored to the value used in Strategy I, and logs from the buffer area can be returned to the main yard.

Strategy III assumes that log freshness is even more valuable and therefore calls for storage area A to be emptied twice per year. In this case, the relative frequency at which log-stackers visit area A is overwritten (in the same way as in Strategy II) from day 58 onwards if the inventory level at area A is above 5000 m³. The stacker visit frequency is then reset to the Strategy I value once the inventory level at area A has fallen to 5000 m³. During this period, 5% of the volume incoming to area A is immediately redirected to storage area D. This process of emptying storage area A before returning to Strategy I is repeated after an additional 232 days.

Under all three inventory management strategies, logs stored in the buffer area must be returned to the log-yard and fed into the debarking drum. At present, the log-yard's infrastructure restricts the log-stackers' access to the buffer area, so an external contractor must be hired to transport logs from the buffer to the main log-yard. This arrangement is referred to as Machine System I (Table 1). To obtain more flexibility in the planning and control of log flow within the log-yard, its managers are considering buying an alternative machine setup (Machine System II). The performance and constraints of these machine systems were compared under each of the three inventory management strategies outlined above. Machine System I consists of a material handler and two shuttle trucks with trailers, and is used to move logs from buffer storage D to storage area C in the log-yard. All three machines under Machine System I are operated by the contractor, who must be called when needed and can only operate when the conditions permit at least one full shift (8 h) of continuous work. Machine System I has an average productivity of 156 m³/h; in the model, the variation in its productivity is described using a normal PDF for the time taken to serve one normalized truck-load unit, following the form of the PDFs in (2a)-(2c); the mean and standard deviation of this PDF are 900 s and 270 s, respectively.
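The emptying rules for Strategies II and III reduce to a small piece of per-day state logic. The sketch below is consistent with the thresholds quoted above (the day triggers, the 5000 m³ stop level, and the 5% redirect); the function and variable names are illustrative, not the authors' own.

```python
EMPTY_TRIGGER_DAYS = {"II": [116], "III": [58, 58 + 232]}
STOP_LEVEL = 5000      # m3: area A counts as "emptied" below this
REDIRECT_SHARE = 0.05  # share of area-A inflow sent straight to D

def emptying_active(strategy, day, level_a, completed_triggers):
    """True while the 'empty area A' override of the stacker visit
    frequency is in force; reverts to Strategy I behaviour otherwise."""
    if strategy == "I":
        return False
    for trigger in EMPTY_TRIGGER_DAYS[strategy]:
        if day >= trigger and trigger not in completed_triggers:
            if level_a > STOP_LEVEL:
                return True               # keep emptying area A
            completed_triggers.add(trigger)  # A emptied; revert
    return False

done = set()
print(emptying_active("II", 120, 20000, done))  # True: emptying ongoing
print(emptying_active("II", 150, 4800, done))   # False: A has been emptied
```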
When logs are brought back from buffer storage D, they are placed in storage area C, which is otherwise used to store logs delivered by ship and is the storage area closest to the debarking drum's in-feed. Therefore, for the contractor to be called out, the inventory level in storage area C must be 500 m³ or less (to ensure there is sufficient space), and refilling must not raise it above 2300 m³, to leave storage space for incoming ship deliveries. At the same time, the inventory level in buffer storage D must be at least 1300 m³ to ensure there is sufficient volume for a full working shift. The contractor can work for at most 8 h per day, five days a week.

Machine System II consists of a self-loading material handler with a trailer that works the same 84 h per week as the log-stackers at the log-yard. Machine System II is only used to transport logs from the buffer area to storage area C in the log-yard. The average productivity of Machine System II is 62 m³/h, and the variability of the time taken to move one unit with this machine system was modelled using a normal distribution with a mean of 2250 s and a standard deviation of 675 s. We assume that the log-yard owns the machine and can operate it daily under less strict inventory level conditions in storage areas C and D. Therefore, there is no lower limit on the inventory level in buffer storage D when using Machine System II, because it is not necessary to provide sufficient work for a full working shift. Additionally, because Machine System II can work continuously every day, the inventory level at storage area C required to initiate transportation from buffer area D is set to 3000 m³. When the inventory level at storage area C reaches 4000 m³, log transportation is stopped. Under all three inventory management strategies, the transport units and the corresponding log volumes exit the system upon delivery to the debarking drum by a log-stacker.

Analysis and Statistics

All data analyses, modeling, and simulations were performed using MathWorks' MATLAB and Simulink software packages. In total, six scenarios were analysed, representing every possible combination of inventory management strategy and machine system. The log inventory dynamics under the three strategies were then compared. Additionally, the effect of machine systems I and II on inventory dynamics was investigated, along with differences in the machine systems' working patterns. Each scenario was simulated 30 times, giving a total of 180 simulation runs. After each run, data for each storage area and the total inventory levels over time were recorded. Additionally, data from the block representing the two different machine systems were recorded to evaluate machine work time and the volume of transported logs. Finally, the total volume generated and delivered to the debarking drum was recorded. Following Banks et al. [29], to verify that the model accurately represents the log-yard's operations, we compared its inputs and outputs to those reported by the company managers and to the company's data. Additionally, the machine work time and volume carried were verified to be within the maximum permitted working hours and below the machines' capacity, respectively. The statistical significance of differences between scenarios was tested using the two-tailed Mann-Whitney U test (P < 0.05), one-way ANOVA, and the Kruskal-Wallis test with Dunn's correction (P < 0.05).
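For readers reproducing the scenario comparisons outside MATLAB, equivalent tests are available in SciPy. The snippet below is an illustrative sketch on dummy data only; Dunn's post-hoc test is not in SciPy itself and is taken here from the third-party scikit-posthocs package.

```python
import numpy as np
from scipy import stats
import scikit_posthocs as sp  # third-party; provides Dunn's test

rng = np.random.default_rng(0)
# Dummy per-run total inventory levels for three strategies (30 runs each).
s1, s2, s3 = (rng.normal(mu, 2000, 30) for mu in (70000, 65000, 60000))

u, p_mw = stats.mannwhitneyu(s1, s2, alternative="two-sided")
f, p_anova = stats.f_oneway(s1, s2, s3)
h, p_kw = stats.kruskal(s1, s2, s3)
p_dunn = sp.posthoc_dunn([s1, s2, s3], p_adjust="bonferroni")

print(f"Mann-Whitney p={p_mw:.4f}, ANOVA p={p_anova:.4f}, "
      f"Kruskal-Wallis p={p_kw:.4f}")
print(p_dunn)  # pairwise Dunn-corrected p-values
```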
Results

The differences between scenarios with respect to the average volume of pulpwood arriving at the yard were insignificant (0.51%). The differences between scenarios in terms of the average volume delivered to the pulp mill were slightly larger but still insignificant (0.75%; see Table 2). The log-yard's average total inventory level varied by no more than 5248 m³ (Table 3) under any scenario. Additionally, the incoming and delivered volumes were very similar under all scenarios; the average difference between the total incoming and delivered volume was 1967 m³ (Table 2). However, the scenarios did differ significantly (P < 0.05) with respect to the distribution of the total volume between storage areas A, B, C, and D. The average yearly volumes at areas A and B under strategy II were 13% and 9% lower, respectively, than under strategy I, while those under strategy III were 15% and 13% lower, respectively, than under strategy II. Conversely, the average yearly volumes at areas C and D under strategy II were 28% and 13% higher, respectively, than under strategy I, while those under strategy III were 21% and 8% higher, respectively, than under strategy II.

Varying the machine system used to transport logs between areas C and D had no significant effect on the inventory strategies (Table 3). However, there were differences in the machines' work patterns. On average, the total working time of Machine System II was 189% greater than that of Machine System I, due to the lower overall productivity per hour and the more flexible working conditions of Machine System II (Table 4). Additionally, the total working times of both machine systems under strategy II were 14% lower than under strategy I, with a further 16% decrease between strategies II and III. Despite the large differences in total work time between machine systems I and II, the volumes they transported differed less markedly. On average, Machine System II transported 16% more volume per year than Machine System I. In keeping with the work time results, the delivered volume under strategy II was 17% lower than under strategy I, and that for strategy III was 20% lower than under strategy II (Table 4). Over the year, the work time distribution of Machine System II was slightly smoother and more evenly distributed from week to week than that of Machine System I (Figure 3). Moreover, Machine System II could still be used during periods when storage area A was being emptied and storage area C was close to its maximum capacity for receiving logs from the buffer storage.

Figure 4 shows the inventory changes over one year under Strategy I for Machine Systems I and II. The average inventory level is very close to the average maximum and average minimum values because the volume flows in all four storage areas were relatively consistent over the year. Even when looking at the absolute maximum and minimum values under both machine systems over 30 simulations, the inventory level variation at storage areas A and B was relatively small (ca. 6000 and 6300 m³, respectively), and both areas remained close to their maximum capacities (Figure 4). The variation in maximum and minimum inventory levels in storage areas C and D was much more pronounced. Storage area C could go from empty to full (ca. 5000 m³) in a matter of days when a ship was unloaded. Additionally, because the purpose of area D is to accommodate changing volume flows, its average stored volume varied by as much as 67,000 m³.
In Strategy II, storage area A is emptied once per year, which reduces the average inventory levels of storage areas A and B while increasing those of areas C and D (Figure 5). In scenarios S2 M1 and S2 M2, the time of the year at which storage area A reaches its minimum inventory level varied by as much as 63 and 47 days, respectively, depending on the inflow of material. While storage area A was being emptied, machine work due to haulage of logs between areas D and C stopped for two full weeks under scenario S2 M1 and for four weeks under scenario S2 M2. Once the inventory level in storage A reached its minimum value, the machine work and storage levels returned to almost their initial values. Under strategy III, storage area A is emptied twice yearly, and lock-in is avoided by reducing the log turnover time to around six months (Figure 6). With both machine systems, under Strategy III, the volume held at storage area A reaches its first minimum after 55 days of emptying; reaching the second minimum takes 31 days for Machine System I and 21 days for Machine System II. Because storage area A is emptied twice per year, the average inventory levels of the storage areas change even more markedly than under Strategy II. Additionally, while storage area A is being emptied, machine work due to transportation of logs between storage areas D and C is halted for 10 full weeks when using Machine System I, and 3 weeks for Machine System II.

The main drivers of log-yard storage dynamics are the incoming and outgoing volumes. While the arriving and delivered volumes to the mill do not vary greatly (Table 2), they affect the fluctuations in buffer storage, as even a slight increase in incoming traffic can trigger the prerequisite conditions for redirecting excess volumes to the buffer storage. As a result, this also affects the time required to empty area A (Figures 4-6). This modest variation, in combination with inventory strategy and machine use, causes substantial variation in the total inventory level, which ranged from 40,832 to 130,262 m³ under different scenarios.

Discussion

Lock-in effects were avoided at all four storage areas (A, B, C and D) when the total minimum inventory level of the whole log-yard was held below 60,000 m³, as was the case for strategies II and III. However, while the log-yard did not run out of logs in any of the simulation runs, maintaining such low inventory levels would substantially increase the risk of pulpwood shortages. Under Strategy I, maintaining an absolute minimum inventory level of less than 10,000 m³ also creates the possibility of avoiding lock-in in storage areas C and D, assuming that there is space to use a stack layout that provides access to logs in storage area D (Figure 4). However, given the pulp mill's weekly pulpwood demand of ca. 22,600 m³, reducing the yard's inventory level to less than 60,000 m³ creates a major risk of raw material insecurity for the pulp mill. When the log-yard's inventory is so low, an additional supply of pulp chips to the pulp chip buffer storage should be secured to compensate for the lack of pulpwood logs. Additional pulp chip storage helps to even out the demand for the pulpwood supply and makes logistics planning easier. However, pulp chip storage is short-term storage, in order to keep the pulp chip quality acceptable, and it is not included in this study. The results for the average wood flows revealed that only storage area C exhibited no lock-in under any strategy.
Under Strategy I, to maintain the yard's total inventory at the desired level of 75,000-125,000 m³, all three of the yard's main storage areas are held close to their maximum capacity at all times. Under strategies II and III, lock-in is disrupted at storage areas A and C at least once or twice per year (Figures 5 and 6a,b,e,f). This is achieved by redirecting some of the incoming volume to storage area D, which works as a buffer storage. In storage areas B and D, the configuration of the storage area plays a significant role in preventing lock-in (or at least minimizing its occurrence). Storage area B is a long area with two log stacks facing one another. As shown in Figures 5 and 6, this area reaches its minimum average inventory level of ca. 7000-9000 m³ (around half its capacity) once or twice per year. With well-planned machine work, these drops in inventory levels could enable the removal of older logs from this area, thereby avoiding lock-in. One way to plan work in this way would be to manually track each end of the elongated log stacks and empty them from one end at a time. This would require data on log arrival times and placement, which could be used to develop a suitable algorithm using approaches established in studies on the industrial packing problem [30].

Area D is the yard's biggest storage area. In this work, it is assumed to have unlimited capacity and a large area. When storage area A is managed according to Strategy I, the average inventory level of storage area D stabilizes at ca. 40,000 m³ (Table 3). The lowest average volume delivered from area D to area C is 135,830 m³/year (Table 4). Under these circumstances, with a well-planned log stack configuration, the log storage period should be no longer than one year, even at area D. To keep log-yard operations and inventory levels smooth, it is important to have sufficient machine capacity for transporting logs from storage area D to C. Table 3 shows that inventory levels at storage areas C and D are almost independent of the machine system under all three strategies, although Machine System II causes the inventory level at area D to be slightly lower, and that at area C to be slightly higher, than is observed with Machine System I. This difference relates directly to the composition of the two machine systems. Machine System I is like a hot system, in that each of its units is highly dependent on the others. Consequently, careful planning and well-defined conditions are needed to utilize it optimally. Conversely, Machine System II is like a cold system, in that a single machine unit can load/unload and transport smaller volumes at its own pace at almost any time, because it has much less strict operational requirements. Additionally, Machine System II was assumed to have the same weekly working hours as the other log-yard machines (84 h/week). The two machine systems thus have different operational environments. As a result, Machine System II delivers more volume back to the log-yard than Machine System I and has more working hours per year (Table 4 and Figure 3). Both machine systems were assumed to deliver logs from storage area D to storage area C. However, Machine System I is operated by a contractor who is only called in when needed, whereas Machine System II is assumed to belong to the log-yard.
Therefore, the working hours of Machine System II could be increased further, because the log-yard owners could use it in other, more remote parts of the yard when storage area C is too full to take logs from area D.

It is very challenging to model an existing log-yard with enough accuracy to reproduce real-world decision-making and performance outcomes. When using DEM-based models, such accuracy requires large amounts of data representing a long period of time. Unfortunately, this is rarely possible because such data are rarely collected by companies, log-yard workers may be unwilling to be recorded, and safety considerations may prevent observation of certain areas of the yard. The results presented here relied on data representing one year of the studied yard's operations. However, it is impossible to guarantee that a single year of data will be fully representative of the yard's operations over a multi-year period. Additionally, like Robichaud et al. [31], we relied heavily on short time studies and interviews with log-yard managers when modelling the decision-making processes that govern the yard's operations and also when analysing performance-related data. The problem with short-term time studies and decision-making data provided by log-yard managers is that they do not necessarily reflect real yard operations over extended periods; instead, they may simply reflect the operational environment that managers would like to see. The lack of long-term data is particularly problematic when modelling decision-making relating to log flows within the log-yard, because it necessitates the inclusion of many conditional statements, and outcomes often deviate from norms over longer time periods. To further improve the yard's performance and reduce log turnaround times in each storage area, algorithms and potentially even new log-yard layouts could be developed using concepts from packing problem theories [32]. Since the log-yard handles large bulk volumes, even small improvements in log accessibility and storage times could confer noticeable financial and operational benefits [30].

In conclusion, the results presented here show that when the maximum storage capacity at storage area D reaches around 89,000 m³ (a ca. 16,000 m³ increase relative to that used at present), it becomes possible to empty storage area A once per year, even when the inflow of material to the log-yard is at its highest (Figure 5). Storing an additional 10,000 m³ (99,000 m³ in total) at area D makes it possible to empty storage area A twice per year, halving the log turnaround time (Figure 6). Additionally, the yard's existing machine systems for transporting logs from area D to area C are fully capable of handling the volumes of material necessary to keep the log-yard running while implementing these strategies. These results can be used to provide initial decision support for the yard's managers, enabling them to avoid the lock-in of logs at the yard and to evaluate alternative machine systems for log handling.

Funding: This research was part of the BioHub project, financed by the Botnia-Atlantica program under the European Regional Development Fund.
Pulmonary benign metastasizing leiomyoma: A case report

Uterine leiomyoma is the most common benign gynecological tumor. Rarely, it has benign extra-uterine growth patterns, including benign metastasizing leiomyoma (BML), with the lungs being the most common metastatic site. We present a case of a 47-year-old female who, 3 years prior to presentation, underwent abdominal supra-cervical hysterectomy for benign leiomyoma. Approximately 6 months prior to presentation, she was seen for shortness of breath and chest pain. A CT of the chest revealed multiple new non-calcified pulmonary nodules bilaterally. PET/CT demonstrated mild FDG uptake in multiple lung nodules, with no significant extra-thoracic sites of abnormal FDG uptake. A CT-guided lung biopsy showed a low-grade smooth muscle tumor. Immunohistochemical staining was positive for smooth-muscle actin and desmin, estrogen and progesterone receptor, and was negative for CD117, HMB-45, CD34, pan-cytokeratin and EMA. She underwent wedge resection of one of the nodules, which confirmed the above findings. A cytogenetic analysis was also performed, which was consistent with pulmonary BML. She ultimately underwent left lower lobe resection and was started on a daily aromatase inhibitor. BML is a rare disease usually seen in women of reproductive age. The pathogenesis and treatment remain controversial. BML mostly tends to have an indolent course and a favorable outcome.

Case description

We present a case of a 47-year-old, morbidly obese African American female, G6 P4-0-2-4, with a 22 pack-year smoking history, who had quit 5 years prior to presenting. She had an abdominal supracervical hysterectomy 3 years prior to presentation for uterine fibroids. Surgical pathology had confirmed benign leiomyomata with no malignant findings. A CT of the abdomen/pelvis was performed a year later for abdominal pain, which showed a 42 × 47 mm right ovarian cyst and multiple ventral hernias. She did not follow up for the ovarian cyst. Six months prior to presentation, the patient was seen in the hospital for shortness of breath and chest pain. She underwent a CT of the chest with pulmonary embolus (PE) protocol, which showed multiple, noncalcified pulmonary nodules bilaterally, the largest being 14 mm in the left lower lobe (LLL), accompanied by a 10 mm nodule in the right middle lobe, among others, concerning for metastatic disease. No pulmonary embolism was seen (Fig. 1A-C). As part of the initial workup, given her history of ovarian cyst, she underwent a CT of the abdomen and pelvis again, which showed bilateral cystic masses in the adnexa, likely ovarian in nature. She subsequently had a pelvic ultrasound confirming right and left ovarian cysts, which measured 13 × 24 × 20 and 33 × 23 × 27 mm, respectively. No other masses were seen. This prompted a repeat visit to her gynecologic oncologist. Her tumor markers, including CA-125 and CEA, were normal. FSH was normal as well. Based on the nature of the cysts and negative tumor markers, metastatic ovarian cancer to the lungs was deemed highly unlikely, and a referral was made to the lung cancer center. PET/CT was ordered to further assess the nodules and assess for any extra-pulmonary areas of abnormal uptake. It again showed the largest nodule, measuring 13 × 14 mm in the LLL, and a middle lobe nodule measuring 10 × 10 mm, both demonstrating mild FDG uptake (Fig. 2A and B).
When compared with a CT performed four years prior, the LLL nodule was new and the middle lobe nodule had increased in size since that time (Fig. 3). Also, at least 7 other smaller nodules, which were beneath the threshold of reliable characterization with PET FDG imaging, were seen, with no significant extra-thoracic sites of abnormal FDG uptake. Based on these findings, she underwent a CT-guided biopsy of the LLL nodule. Pathology identified a low-grade (2 mitoses/8 HPF), benign-appearing smooth muscle tumor. Immunohistochemical staining was positive for smooth-muscle actin and desmin, but negative for CD117, HMB-45, CD34, pan-cytokeratin and EMA. Estrogen receptor staining showed patchy positivity, and progesterone receptor staining was strongly and diffusely positive. Given her history, these findings were suggestive of benign metastasizing uterine leiomyoma. Due to concerns about possible sampling error missing features of a more aggressive leiomyosarcoma, as well as to obtain fresh tissue for cytogenetic analysis, she underwent a video-assisted thoracoscopic surgery (VATS) wedge resection of the LLL nodule. Pathology again confirmed a cytologically bland-appearing spindle cell neoplasm, with entrapment of bronchiolar epithelium and no significant nuclear atypia or necrosis. Mitotic activity was low (up to 2 mitoses per 10 high power fields), with no margin involvement identified. Properly controlled immunostaining showed the neoplastic cells to lack expression of HMB-45 and CD117 (c-kit) (providing no support for lymphangioleiomyomatosis or gastrointestinal stromal tumor, respectively), as before. A properly controlled DOG-1 (Discovered On GIST) immunostain was positive, a result typically associated with gastrointestinal stromal tumor. However, this result has also been reported for uterine-type retroperitoneal leiomyomas and peritoneal leiomyomatosis [19] (Fig. 4A-F). Cytogenetic analysis was also performed. Karyotyping showed an abnormal result, including loss of chromosomes 19 and 22 and deletion of 1p, in addition to other abnormalities (Fig. 5). This was consistent with the diagnosis of pulmonary benign metastasizing leiomyoma. The patient was started on the daily aromatase inhibitor anastrozole. She tolerated the medication without any side effects. Three-month follow-up CT scans showed a decrease in the size of the pulmonary nodules. Six- and 12-month CT scans have shown stable disease.

Discussion

BML is a rare disease, first described in 1939 by Steiner [11], with about 100 cases described in the literature to date [2]. The pathogenesis and etiology remain controversial, with hematogenous spread of a benign uterine tumor, local smooth muscle tissue proliferation, or metastasis of a low-grade leiomyosarcoma as proposed etiologies [2]. It is usually seen in women of reproductive age with a history of uterine leiomyoma who underwent hysterectomy, favoring hematogenous/iatrogenic spread of the tumor; yet in some cases lung nodules existed even before hysterectomy, as in our case [3,8]. However, cytogenetic studies demonstrating a monoclonal origin of both uterine and lung tumors, along with positive hormone receptors and response to hormonal therapy, support the hypothesis [2,12,13]. Cytogenetic studies have also shown that BML is a genetically distinct and definable entity with 19q and 22q deletions, among others. Such changes are also found in a small, genetically distinctive subset of uterine leiomyomata, supporting a common origin [18].
The average age of patients diagnosed with BML is 48 years, with an interval of about 3 months to 20 years between hysterectomy and lung findings [14,16]. Open/thoracoscopic lung biopsy is the standard diagnostic modality [3]. Pulmonary smooth muscle proliferations can be either primary, including hamartomas, lymphangioleiomyomatosis, leiomyoma and leiomyosarcoma, or metastatic, including metastatic leiomyosarcoma and BML. The low mitotic index (< 5 mitoses per 10 high power fields), lack of nuclear pleomorphism, lack of local invasion and distinctive karyotypic profile help differentiate BML from other possible diagnoses [4,15,18]. Due to the rarity of the disease, there are currently no treatment guidelines for BML. Multiple options have been reported in the literature, including close observation, surgical resection and antiestrogen therapy (e.g., selective estrogen receptor modulators, progesterone, aromatase inhibitors, oophorectomy and gonadotropin-releasing hormone analogues) [5,17]. The preferred treatment is surgical resection if possible, with hormonal therapy as an alternative. BML typically has an indolent course and a favorable outcome, although pulmonary lesions may continue to progress, resulting in pulmonary insufficiency and even death [18].

Conclusion

Despite BML being a rare condition, it should be considered in the differential diagnosis in women of reproductive age with a history of uterine leiomyoma presenting with pulmonary nodules, solitary or multiple. Accurate histopathological analysis, along with immunohistochemical staining and cytogenetic analysis, is necessary to exclude other spindle cell neoplasms.

[Figure 4 caption (fragment): ...with entrapped benign bronchiolar epithelium (epithelial entrapment rather than obliteration being a reflection of indolent rather than aggressive growth of the neoplasm). C: High power view of the lung wedge resection specimen showing the cytologically bland appearance of the spindle cells (lack of any significant nuclear atypia); also, inconspicuous mitoses (no significant mitotic activity), and no necrosis. D: Core biopsy, desmin immunostain (marker of smooth muscle and skeletal muscle differentiation), showing strong and diffuse expression in the neoplastic cells. The appearance of the smooth muscle actin (SMA) immunostain, which was also done on the core, is identical. E: Core biopsy, progesterone receptor immunostain, showing strong and diffuse nuclear expression in the neoplastic cells. The estrogen receptor immunostain showed similar results. F: Wedge resection specimen, DOG-1 immunostain, showing unexpected diffuse expression in the neoplastic cells. The CD117 (c-kit) immunostain was negative. DOG-1 is typically positive in GIST but has also been reported to show expression in some uterine-type retroperitoneal leiomyomas and peritoneal leiomyomatosis.]

Declaration of conflicting interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The authors received no financial support for the research, authorship, and/or publication of this article.
PhishGAN: Data Augmentation and Identification of Homoglyph Attacks

Homoglyph attacks are a common technique used by hackers to conduct phishing. Domain names or links that are visually similar to actual ones are created via punycode to obfuscate the attack, making the victim more susceptible to phishing. For example, victims may mistake "|inkedin.com" for "linkedin.com" and, in the process, divulge personal details to the fake website. Current State of The Art (SOTA) approaches typically make use of string comparison algorithms (e.g. Levenshtein distance), which are computationally heavy. One reason for this is the lack of publicly available datasets, thus hindering the training of more advanced Machine Learning (ML) models. Furthermore, no one font is able to render all types of punycode correctly, posing a significant challenge to the creation of a dataset that is unbiased toward any particular font. This, coupled with the vast number of internet domains, poses a challenge in creating a dataset that can capture all possible variations. Here, we show how a conditional Generative Adversarial Network (GAN), PhishGAN, can be used to generate images of homoglyphs, conditioned on non-homoglyph input text images. Practical changes to current SOTA were required to facilitate the generation of more varied homoglyph text-based images. We also demonstrate a workflow of how PhishGAN, together with a Homoglyph Identifier (HI) model, can be used to identify the domain a homoglyph was trying to imitate. Furthermore, we demonstrate how PhishGAN's ability to generate datasets on the fly facilitates the quick adaptation of cybersecurity systems to detect new threats as they emerge.

INTRODUCTION

A common type of phishing attack involves permuting alphabets of the same Latin character family. These are also commonly known as look-alike domains, or typosquatting. In a study by Dhamija et al. [1], researchers fooled 90.9% of their participants by hosting a website at "www.bankofthevvest[.]com", with two "v"s instead of a "w" in the domain name, showcasing the effectiveness of this strategy. Based on look-alike and typosquatted attack types, the current industry-accepted approach is to calculate the edit distance between strings. For example, a spoofed domain only 1 Levenshtein edit distance away from "facebook.com" would score as highly similar; in this method, a lower Levenshtein value indicates more similar domains, increasing the confidence of a phishing attempt.

As most modern browsers support the display of Internationalized Domain Names (IDN), domain names with digits and other special characters can all be registered. IDNs are converted to their Latin-character equivalent in the form of punycode. Though extremely useful in facilitating domain names in various languages, this opens up the possibility of cyber attacks, particularly homoglyph attacks. The homoglyph attack vector comes into play when there is a mixture of characters that look similar to their Latin counterparts. As shown in Table 1, it is not easy to differentiate the homoglyphs from their original domains.

Table 1

  Original       Replaced     Punycode             Visualized
  facebook.com   "a" to "á"   xn--fcebookhwa.com   fácebook.com
  google.com     "l" to "ł"   xn--googe-n7a.com    googłe.com
  imda.gov.sg    "i" to "ı"   xn--mdaiua.gov.sg    ımda.gov.sg

As homoglyph attacks have been on the rise since 2000, many techniques have been proposed in the literature to detect such attacks. Suzuki et al. [2] studied the similarity between single characters and evaluated their pairwise similarity based on mean squared error.
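To make the edit-distance baseline concrete, below is a minimal dynamic-programming implementation of Levenshtein distance. It is an illustrative sketch, not the implementation used by any particular vendor, and the sample spoofed domain is hypothetical.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

# A hypothetical look-alike domain, one substitution from the original.
print(levenshtein("facebook.com", "faceb0ok.com"))  # -> 1
```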
They then tuned their algorithm by getting humans to evaluate the similarity of character pairs. A major drawback of this is that the string as a whole wasn't taken into account, as similarity comparisons were done at the character level. Furthermore, the authors opined that a combination of homoglyphs could affect the confusability of homoglyph strings. In this work, the lack of a large dataset was a major problem, hence the need for human labellers. Woodbridge et al. [3] showed that a Siamese CNN was able to detect and classify homoglyph attacks. They also contributed a dataset containing pairs of real and spoofed domains renderable in Arial. Though extremely useful for the purpose of training ML algorithms, the major drawback is that it is inherently biased towards the Arial font. This means that punycode that could be rendered by other fonts is not taken into account. Deep learning models trained on such a dataset would have a bias towards the Arial font. Creating a curated dataset for multiple fonts would be extremely tedious and may not be efficient, as it would again be biased towards those fonts. Thus, we propose to make use of state-of-the-art GAN algorithms to extend Woodbridge et al.'s dataset to produce potentially infinite possibilities of homoglyphs. Though there may not be a way to render these GAN-generated homoglyphs in punycode, we expect that GAN-generated data would provide significantly more variability to the dataset, such that ML algorithms would not be constrained or limited to variations of any particular font.

We highlight our 3 main contributions. Our major contribution is PhishGAN, which can generate realistic text images of homoglyphs. We show that PhishGAN is able to produce a more varied set of homoglyph images than naively applying SOTA algorithms like Pix2Pix and CycleGAN. Although there may not be a valid punycode to produce PhishGAN's output images, it is extremely useful in serving the purpose of dataset augmentation for data-intensive deep learning algorithms to train on. Our second contribution is an extensive validation of PhishGAN's output via a Homoglyph Identifier (HI), which is intended to detect and classify which domain a homoglyph was trying to mimic. We aim to show that a model would be able to identify and classify real-life homoglyphs after being trained on just data generated by PhishGAN. Other than using triplet loss instead of paired loss, the HI is largely similar to Woodbridge et al.'s work. Our final contribution is realistic scenario testing, showing how PhishGAN's ability to generate datasets on the fly can help cybersecurity systems adapt quickly to new emerging threats.

PRELIMINARIES AND PROBLEM SETUP

In this section, we briefly review the current SOTA conditional GANs, particularly Pix2Pix and CycleGAN, and introduce the mathematical formulation of our problem. Pix2Pix and CycleGAN are the closest related works to PhishGAN, and they belong to the family of conditional GANs, whose outputs are conditioned on an input. These 2 models were experimented on extensively to determine the vital components needed to produce visually realistic homoglyphs. Pix2Pix is a conditional GAN algorithm developed by Isola et al. [4] with the aim of translating input images into realistic output images. For example, given outline sketches of objects, it is able to include colours and produce realistic-looking images, as shown in Figure 1. Isola et al. made use of paired images; their dataset contains input sketch-like images and desired photo-realistic ground truth images.
As the images are paired, one could simply minimise the L1 loss between the ground truth and the network output, described in (2):

    L_L1(G) = E_{x,y,z} [ ||y − G(x, z)||₁ ] .   (2)

In (2), x is the input sketch-like image, z is the noise vector added to allow for variation, and G is the generator function that generates an output image given x and z as inputs. Finally, y is the target output image. The generator function, G, is typically realised via a UNet CNN architecture as shown in Figure 2. An input image x is first passed through CNN layers to reduce its dimensions to a low-dimensional space. At the smallest dimension, a noise tensor is concatenated channel-wise, and it is then upsampled via convolution transpose layers to produce a tensor of the same size and shape as the input x. At each upsampling stage, the corresponding tensor at the downsampling stage is concatenated channel-wise to the upsampled tensor. Like ResNet architectures [5], UNets facilitate gradient backpropagation through neural networks.

Isola et al. showed that with the above architecture and simply minimising the L1 loss function, blurry images could be produced (see Figure 1). Isola et al. then added on a discriminator, trained to classify whether images are generated or real. This is easily achieved by training a CNN in a supervised way (i.e. with a discriminator function, D, that tags "real" images as "0" and "fake" images generated by the generator as "1"). The generator's objective function is then augmented with a loss that describes its ability to fool the discriminator,

    L_GAN(G) = E_{x,z} [ log D(x, G(x, z)) ] .   (3)

The discriminator objective function is as follows:

    L(D) = −E_{x,y} [ log(1 − D(x, y)) ] − E_{x,z} [ log D(x, G(x, z)) ] .   (4)

Zhu et al. [6] relaxed the need for paired images in CycleGAN; it only requires samples of x's and y's. It is essentially a combination of 2 Pix2Pix networks, as shown in Figure 3. In particular, G and D_G are as previously described for Pix2Pix, where G aims to generate y with input x, while D_G aims to determine whether the y's are generated by G or not. There is, however, another set of generator (F) and discriminator (D_F). The function F aims to generate x from y, and D_F aims to determine if x's are generated by F or not. The objective functions for the discriminators are the same as before, while the generators have 2 loss terms in place of L1. Cycle-consistency losses:

    L_cyc = E_x [ ||F(G(x, z_G), z_F) − x||₁ ] + E_y [ ||G(F(y, z_F), z_G) − y||₁ ] ,

where z_G and z_F refer to the noise tensors added to the generator functions G and F respectively. This loss puts a constraint such that an input image x, after being processed by G and F sequentially, should be x. Similarly, an input image y, after being processed by F and G, should be y. This loss acts as a regularisation loss to limit the search space while optimizing G and F. It is added to the loss functions of both G and F. Identity losses: these constrain that passing an image y to G should yield y, as G aims to produce images that are from the y's dataset, and vice versa for F:

    L_I,G = E_y [ ||G(y, z_G) − y||₁ ] ,   L_I,F = E_x [ ||F(x, z_F) − x||₁ ] .

L_I,G is added to the loss function of G, while L_I,F is added to the loss function of F. Zhu et al. [6] showed that CycleGAN was able to "morph" images of horses (x) to zebras (y) without paired images.

Both Pix2Pix and CycleGAN have been tested on photo images. However, to the best of our knowledge, there is no GAN in the literature that looks at morphing text-based images. In this work, we investigate the applicability of both Pix2Pix and CycleGAN to our use case of generating "glyphed" text images from real images. The ability to do this would essentially allow us to generate an infinite-sized homoglyph dataset.
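To make the loss bookkeeping concrete, here is a minimal PyTorch-style sketch of the cycle-consistency and identity losses described above. The generator signatures G(x, z) and F(y, z) follow the paper's notation; the helper name and the assumption that G and F are callables on batched tensors are illustrative, not from the paper.

import torch

def cycle_and_identity_losses(G, F, x, y, z_G, z_F):
    """Cycle-consistency and identity losses as sketched above.

    G maps x-domain images to the y-domain; F maps y-domain images back.
    z_G, z_F are the noise tensors fed to each generator.
    """
    l1 = torch.nn.functional.l1_loss
    # Cycle consistency: x -> G -> F should reconstruct x, and vice versa.
    loss_cyc = l1(F(G(x, z_G), z_F), x) + l1(G(F(y, z_F), z_G), y)
    # Identity: feeding a y-domain image to G should leave it unchanged.
    loss_id_G = l1(G(y, z_G), y)
    loss_id_F = l1(F(x, z_F), x)
    return loss_cyc, loss_id_G, loss_id_F

Keeping these terms separate makes it easy to add loss_cyc to both generators' objectives while attaching each identity term only to its own generator, as the text prescribes.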
Related to our second and third contributions is Woodbridge et al.'s [3] work on using a Siamese CNN to detect homoglyph strings by checking whether the L2 distance of their encodings versus the encodings of a checking list of strings is below a certain threshold (if so, a homoglyph detection is flagged and it is classified as the checking-list item with the smallest L2 distance). A Siamese CNN aims to extract features from images into a single vector. This is represented as follows: S(i) = e, where S is the CNN function that encodes an image i into encoding e. The network S is then trained on the pair loss, which is the L2 distance between 2 encodings. For pairs of strings that have been labelled as similar, we minimise the L2 distance, while for pairs of strings that are dissimilar, we maximise the L2 distance. This loss function is also known as the contrastive loss. A drawback is that it doesn't place an upper bound on how far to segregate 2 differing points. Thus, it isn't optimal if 2 points, which are already at the centres of their respective clusters, are pushed further away and out of their clusters. The triplet loss has been shown to produce better encodings than the contrastive loss over a variety of use cases [7]. Thus, in this work, we make use of the triplet loss to train S (as mentioned earlier, we will not be comparing the advantages of the triplet loss over the contrastive loss in this work; instead, we will use it to train a HI to show possible use cases of PhishGAN and also to validate its output).

PhishGAN

PhishGAN aims to generate homoglyphs given any Latin-based input text string. To overcome the bias a manually created dataset may have toward any one font, PhishGAN should also be able to accept strings of multiple fonts and output homoglyphs corresponding to such fonts. We make use of Woodbridge et al.'s contributed domain dataset, which contains pairs of domains and possible homoglyphs renderable in Arial font. Our workflow for PhishGAN is shown in Figure 4.

Fig. 4. Workflow for PhishGAN

A domain string is first rendered into a greyscale image, of shape 40 × 400 × 1, using a certain font via the Python Pillow package. We also perform data augmentation by randomly shifting the text around the 40 × 400 × 1 image. Next, the rendered image, x, together with a noise tensor (z), is fed to a generator function, based on the UNet architecture, to produce the image G(x, z). Our UNet architecture is shown in Table 2. Batch normalisation is done between every convolutional layer. The input to the UNet is the 40 × 400 × 1 greyscale image, normalized to between −1 and 1. 512 channels of randomly generated Gaussian noise were concatenated channel-wise to the tensor output of the final 2D convolution layer. As such, 528 channels will be fed to the first convolution transpose layer to eventually reconstruct a 40 × 400 × 1 image tensor. Leaky ReLU was used as the activation function between each layer, except the final layer, where a Tanh activation was used. Next, a discriminator was trained to identify whether the input image is real or fake, as shown in Figure 5. Table 3 shows the details of the discriminator network architecture. Again, the Leaky ReLU activation function was used for all layers except the final layer, which was activated via a Sigmoid function. No dropout was used. We also find that the addition of batch normalisation here significantly degrades final performance.
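As a concrete illustration of the rendering step, the sketch below rasterizes a domain string into a 40 × 400 × 1 grayscale array with a random positional shift, using Pillow as the text describes. The font path, font size, and shift range are illustrative assumptions, not values specified by the paper.

import random
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def render_domain(domain: str, font_path: str = "arial.ttf",
                  font_size: int = 24, max_shift: int = 8) -> np.ndarray:
    """Render a domain string to a (40, 400, 1) array normalized to [-1, 1]."""
    img = Image.new("L", (400, 40), color=255)       # white background
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, font_size)  # assumed font file
    # Data augmentation: randomly shift the text around the canvas.
    x0 = random.randint(0, max_shift)
    y0 = random.randint(0, max_shift)
    draw.text((x0, y0), domain, font=font, fill=0)   # black text
    arr = np.asarray(img, dtype=np.float32)
    arr = arr / 127.5 - 1.0                          # normalize to [-1, 1]
    return arr[..., np.newaxis]                      # add channel axis

x = render_domain("facebook.com")
print(x.shape)  # (40, 400, 1)

Swapping font_path lets the same pipeline produce training images in multiple fonts, which is how PhishGAN can be exposed to more than one rendering style.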
Instance normalisation was used instead. The objective function for the generator is similar to Pix2Pix, but instead of using the L1 loss, we introduce a dot product loss, in particular:

    L_dot = flat(G(x, z)) · flat(y) .   (5)

The flat() function in (5) reshapes the image tensors to vectors in order to calculate the dot product. It has been found previously that such a loss function is especially useful in preserving the style of an image [8], and it is widely used in neural style transfer algorithms. For the case of homoglyph generation, we are trying to preserve the style of the target image, y, while still allowing additional variations, so that a model trained with these augmented data would be less biased toward what any particular font can render. The generator objective function is thus as follows:

    L(G) = E_{x,z} [ log D(G(x, z)) ] − L_dot .   (6)

For the discriminator, we find that removing the network's dependency on x as a condition gives better results. Thus the objective function for the discriminator is as follows:

    L(D) = −E_y [ log(1 − D(y)) ] − E_{x,z} [ log D(G(x, z)) ] .   (7)

A batch size of 64 was used and the network was trained over 25k steps. We also tried a CycleGAN approach; however, the images produced were not as varied as those from the above approach, which is based on Pix2Pix.

Homoglyph Identifier (HI)

The HI works by first encoding the input homoglyph image into a low-dimensional vector. This is done through the use of a separately trained CNN, called the Encoder. We then apply the Encoder to a list of domains (referred to as the checking list) that we would like to protect against homoglyph attacks, to get reference vectors for each domain. Next, we calculate the Euclidean distance between the suspect homoglyph's encoding and each of the reference vectors. If the Euclidean distance is less than a threshold (T), we classify the suspect image as a homoglyph, and we identify the domain that it was trying to mimic as the one with the smallest Euclidean distance.

We employ the triplet loss to train this network. Figure 6 illustrates the triplet loss. In our formulation, we use the outputs of PhishGAN as the anchor. The positive sample is the example the anchor was trying to mimic in the checking list, while the negative sample is sampled randomly from the checking list, with probabilities inversely proportional to the encoding's Euclidean distance from the anchor's encoding. Thus, an item in the checking list closer to the anchor's encoding (E(A)) in terms of Euclidean distance would have a higher probability of being sampled. It is important to note that only the output of PhishGAN is used to train the Encoder; no hand-crafted dataset is used. The Encoder is then trained via the loss function [9]

    L_triplet = max( ||E(A) − E(P)||² − ||E(A) − E(N)||² + M, 0 ) .   (8)

It is clear from the equation that minimising L_triplet is equivalent to ensuring that the distance between E(A) and E(P) is smaller than the distance between E(A) and E(N) by at least a margin of M. For the sake of brevity, we will not delve into the details of how this is an improvement over the pair loss in Siamese Networks, as they are well documented in the literature [10][7]. In our experiments, we set M to 1 arbitrarily; thus the threshold, T, was also set to 1. The network used for the Encoder is a CNN identical to the one used by PhishGAN's discriminator, except that 3 dense layers of 128 neurons were used instead of the 4 stated in Table 3. The final activation of this CNN is a linear one, which we subsequently pass through an L2 normalisation function. This ensures that the L2 norm of the encodings is always 1.
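Putting the two pieces together, the sketch below implements the triplet loss of (8) and the distance-threshold detection/classification rule described above. The margin and threshold M = T = 1 follow the text; the encoder is assumed to be any callable mapping image batches to L2-normalized vectors, and all other names are illustrative assumptions.

import torch

def triplet_loss(e_a, e_p, e_n, margin=1.0):
    """Triplet loss of (8): pull the anchor toward the positive,
    push it away from the negative, up to the margin."""
    d_pos = (e_a - e_p).pow(2).sum(dim=1)
    d_neg = (e_a - e_n).pow(2).sum(dim=1)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

def identify(encoder, suspect_img, checking_list_imgs, names, T=1.0):
    """HI rule: flag a homoglyph if the nearest checking-list encoding is
    within Euclidean distance T, and return the domain it mimics."""
    e = encoder(suspect_img.unsqueeze(0))    # (1, d) encoding of suspect
    refs = encoder(checking_list_imgs)       # (n, d) reference vectors
    dists = torch.cdist(e, refs).squeeze(0)  # (n,) Euclidean distances
    i = torch.argmin(dists).item()
    if dists[i] < T:
        return names[i]                      # detected and classified
    return None                              # not flagged as a homoglyph

Because the encodings are L2-normalized, all pairwise distances are bounded, which is what makes a fixed threshold such as T = 1 meaningful across domains.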
Instance normalisation was used between network layers in this network. To analyse PhishGAN's impact on the HI, we consider the following scenarios:

• Scenario 1: Given a checking list of 10 domains, we train a HI to identify homoglyphs of these domains and showcase the results on a testing dataset created via dnstwist.

• Scenario 2: Next, we add an additional unseen domain, "covid19info.live", to the checking list. This is to observe the changes in performance due to the inclusion of a new domain that is likely to be used by hackers for phishing during the coronavirus pandemic in 2020.

• Scenario 3: Finally, we train the HI again, using PhishGAN, with "covid19info.live" added to the checking list, and observe the performance.

RESULTS AND DISCUSSION

We first showcase sample outputs of PhishGAN for items in the checking list that this paper uses, and compare them with the current SOTA. As can be seen in Figure 7, PhishGAN, although similar in structure to Pix2Pix, is able to provide much more variation for training homoglyph networks. The inability of CycleGAN to work shows that the variation between the x and y domains is very small, and it may be difficult for an algorithm to automatically pick out these subtle differences. Pix2Pix was able to show some variation, as can be seen from the small number of glyphs added to certain strings like "google.com", "apple.com", etc. It is obvious that PhishGAN produces the most varied set of glyphs.

Fig. 7. Sample outputs of PhishGAN compared to other methods

For "yahoo.com", it is interesting that it was able to produce "yaHoo.com". It was also able to morph "microsoft.com" quite significantly to something visually similar to "mlcrosoft.com". Finally, we showcase PhishGAN's ability to morph multiple fonts. It is interesting to see the example of "google.com", which was morphed to "gooale.com", "youtube.com" being morphed to "younibe.com", and "linkedin.com" to "iinkeain.com".

We next use PhishGAN to train the HI. As mentioned in the previous section, we show the performance for the 3 scenarios. We will use 2 evaluation metrics: first, we determine the HI's ability to detect homoglyphs; next, we determine its ability to classify the detected homoglyphs to those in the checking list. It is important to note that the same model was used in both Scenario 1 and Scenario 2, while Scenario 3 pertains to retraining a new model, taking into account the additional domain in the checking list. All models were trained to convergence in terms of training loss (i.e. triplet loss). As can be seen in Table 4, there is a degradation in accuracy when moving from Scenario 1 to Scenario 2. This could be due to the fact that the model was not trained on "covid19info.live" in the checking list; thus, it is less aware of the type of glyphs one may expect from such a domain. There is largely an increase in the F1 score because, out of those that were identified as homoglyphs, more were classified correctly on average. Next, we retrained the model in Scenario 3, and we see that we are able to regain the accuracy performance and also increase the F1 score. This could be due to the larger training dataset, as the model now also sees glyphs from one other example, which may help in detecting glyphs. Finally, we showcase the HI's ability to produce meaningful encodings of the input text-based images.
As can be seen in Figure 8, the Encoder is able to encode the different homoglyph images produced by PhishGAN into clearly discernible clusters, with each cluster corresponding to a particular domain name. The variation within each cluster also showcases PhishGAN's ability to generate a variety of homoglyphs for each string. This shows that PhishGAN is not suffering from mode collapse, a widely known problem for GANs. On this note, it should be added that the original Pix2Pix architecture did not include the addition of noise tensors, as the authors 12 found that they did not cause significant variations in the output given a particular input condition, x. In our case, however, the additional noise tensors were important, as they allowed varied outputs, as evidenced in Figure 8, where clusters aren't just a single point. Finally, Figure 8 also shows why there could be misclassifications. Looking at the "linkedin.com" and "covid19info.live" clusters, we observe that the distance between them can be quite small, indicating the possibility of classification errors.

CONCLUSION

We have shown how PhishGAN outperforms the current SOTA in its ability to generate homoglyphs. This was achieved by making practical modifications to the Pix2Pix architecture: 1. Replacing the L1 loss with L_dot. 2. Including noise tensors in the generator. 3. Removing the condition input, x, from the discriminator. 4. Using instance normalisation for the discriminator. We also show that we are able to get reasonable performance using just PhishGAN-generated data to train a HI. We then showcase how adding an additional domain into the checking list may degrade the HI's accuracy, and how PhishGAN can be used to retrain the HI to regain its accuracy and classification performance. The HI's ability to encode the text-based images into discernible clusters was also verified by applying t-SNE to the encodings, validating that the triplet loss is indeed an appropriate loss for this problem setup. Finally, the variations within each cluster are also proof that PhishGAN doesn't suffer from the mode collapse problems that plague GANs in general. The work here has significant impact for future research into homoglyph detection, as it shows that deep neural networks can also be used to generate homoglyphs, which can in turn be used to train homoglyph detection models and update them on the fly as new threats emerge. We believe the work done here provides a significant alternative to handcrafting homoglyph datasets and can contribute significantly to the prevention of homoglyph phishing attacks. To the best of our knowledge, this is also the first piece of work that applies GANs to text-based images, opening up a new research direction for the application of GANs.

FUTURE WORK

Future work could extend PhishGAN to other languages. Relaxing the need for paired training data should also be explored.

12 pytorch-CycleGAN-and-pix2pix/issues/152
2020-06-25T01:01:30.828Z
2020-06-24T00:00:00.000
{ "year": 2020, "sha1": "313f9823b5c30ca067ad8b65463ec5dfff7e7a10", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2006.13742", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "313f9823b5c30ca067ad8b65463ec5dfff7e7a10", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
14558253
pes2o/s2orc
v3-fos-license
Inflation from Warped Space

A long period of inflation can be triggered when the inflaton is held up on the top of a steep potential by the infrared end of a warped space. We first study the field theory description of such a model. We then embed it in the flux-stabilized string compactification. Some special effects in the throat reheating process by relativistic branes are discussed. We put all these ingredients into a multi-throat brane inflationary scenario. The resulting cosmic string tension and a multi-throat slow-roll model are also discussed.

I. INTRODUCTION

Inflation [1][2][3] provides a natural mechanism for creating the homogeneity and flatness of our observable universe. It also gives an elegant way of generating the perturbations [4][5][6][7][8][9][10] which seed the structure formation. In order for the inflation to last sufficiently long and then successfully exit to reheat the universe, the inflaton has to be held up on a potential for a sufficiently long time. Such a mechanism is achieved most commonly by a potential which is very flat on the top. The required flatness is summarized by the slow-roll conditions. A central problem in inflation has been to find a natural realization of such a flat potential in a fundamental theory. Many years of research in supergravity and string theory indicate that, while such flat potentials may arise on many occasions, they are not generic. Therefore it is important to ask whether inflation can happen given a steep potential, while still generating a scale-invariant spectrum. In this paper we study such a model by making use of warped space.

Recently warped space has shown its importance in both field and string theory. It has been proposed as one of the few possible explanations of the hierarchy problem [11]. In this paper, we study an interesting use of warped space in the context of inflation. Since the infrared (IR) end of a warped space generally has a small warp factor, to a bulk observer at the ultraviolet (UV) end of this warped space, anything trapped in the IR side moves very slowly in the extra dimension. This is because the speed of light traveling in the extra dimension is small in the IR end. In particular this applies to a D3-brane. To a four-dimensional observer, the extra dimension is the internal space and the position of the D3-brane in the extra dimension is a scalar field. Therefore this provides a new mechanism for the scalar field to move very slowly [17,24,28]. In terms of the gravitationally coupled scalar field theory, we will be interested in a scalar field with a relativistic kinetic term rolling down from the top of a steep potential in a warped internal space. Causality restricts the scalar to roll slowly, and we will show that this is quite robust against the steepness of the potential. The appearance of the resulting inflation is superficially similar to slow-roll inflation: the inflaton stays at the top of the potential for a long time before it falls down through fast rolling. However, the detailed nature of the two scenarios is different: for example, in slow-roll inflation the potential is flat and the inflaton is non-relativistic, while here the potential is steep and the inflaton is ultra-relativistic during inflation. This mechanism is especially interesting in situations where warped space becomes necessary and generic, but flat potentials are not. We will call this type of inflationary model DBI inflation.
It is important to realize effective field theories of any inflation model in a unified fundamental theory such as string theory. In this paper we are interested in the idea of brane inflation [33][34][35][36][37][38] in the flux-stabilized string compactification [13,14,16]. The setup is the orientifold compactification of type IIB string theory. Besides stabilizing the complex and dilaton moduli, the NS-NS and R-R fluxes also induce warped spaces (throats) around various conifold singularities. Such warped spaces carry D3-charges. They attract anti-D3-branes in the bulk and then annihilate them. Because the D3-charge of a throat is discrete in multiples of some integers, D3-branes will generally be created at the IR end of the throat after this annihilation [39,22]. We will be particularly interested in those throats with large flux numbers. The flux-antibrane annihilation process in such a throat proceeds through quantum tunneling [39,22]. From the four-dimensional spacetime point of view, the annihilation creates a bubble in the false vacuum. Generally the interior of the bubble is still in a false vacuum, because there may be a moduli potential for the resulting D3-branes, or anti-D3-branes in other places of the compact manifold waiting to be annihilated. If such a moduli potential is flat enough, slow-roll inflation can happen within the bubble. Under a steep repulsive moduli potential, normally these D3-branes will quickly roll out and our universe cannot live in such a bubble. However, since here we have a situation where the D3-branes are trapped in an IR warped space, such a rolling is subject to a causality constraint; namely, the DBI inflation can happen.

A multi-throat brane inflation model [28] will be studied in more detail. In this model, branes generated as above roll out of the brane (B) throat, triggering the DBI inflation. They reheat and settle down in the Standard-Model (S) throat. We show that such a model can generate the right density perturbations with a direct reheating and a Randall-Sundrum (RS) warp factor. Subtleties of the relativistic brane reheating [40], and its effects on the density perturbations and on the large flux number, are studied. Other possible cases, for example adding an antibrane (A) throat, and a multi-throat slow-roll model, are also discussed.

The multi-throat configuration provides a unique opportunity to observe signals of string theory. It gives a hierarchical range of scales. For a low string scale, such as in the RS setup, we may have a chance to observe strings in colliders. For throats with a high string scale, brane inflation can create cosmic strings [41][42][43]. They may give observable signals in addition to the density perturbations and the spectral index [44][45][46][47][48][49][50][51]. We will discuss the corresponding string tension in our cases. We also discuss another way strings are produced during the dS epoch, which is more general but gives lower tension.

This paper is organized as follows. In Sec. II, we describe the effective field theory of the DBI inflation. This includes the zero-mode inflation and the density perturbations. In Sec. III, we embed the field theory in the setup of the flux compactification, where the inflaton dynamics is described by the DBI action of D3-branes in warped extra dimensions. Various constraints coming from the validity of the DBI action and an interesting stringy suppression mechanism on density perturbations are discussed. In Sec. IV, we turn to the reheating process.
We emphasize two important processes that arise quite often for the throat reheating in a multi-throat setup: the relativistic brane collision and the cosmological rescaling. We put all these ingredients into a multi-throat model in Sec. V. In Appendix A, we discuss branes rolling into a throat, which is used in Sec. IV, and briefly review another DBI inflation model. Appendix B studies how the DBI inflation and slow-roll inflation are joined in the case of a flat potential. This leads to a multi-throat slow-roll inflation model. Cosmic strings produced in different cases are discussed accordingly in Sec. V and Appendix B.

II. FIELD THEORY OF DBI INFLATION

Although the DBI inflation scenario is motivated by string theory, the field theory description of the main process during the inflationary period can be extracted out independently, and is interesting in its own right. This describes a scalar rolling down from a steep potential in a warped internal space. In this section, we will study how inflation arises in this setup and the resulting density perturbations. When appropriate, we will mention its connections to the string model that will be discussed later. A similar type of model was studied in Refs. [17,24], where an important difference is to start the inflaton from the UV side. The resulting inflationary scenario has some qualitative differences and will be compared at the end of Appendix A.

For later convenience, we will denote the scalar field as r, which is related to the usual scalar field φ through r(t, x) ≡ φ(t, x)/√T_3 (the constant T_3 is the brane tension). The scalar moves in an internal warped space with a characteristic length scale R,

    ds² = h²(r) ds_4² + h⁻²(r) dr² ,   h(r) = r/R .   (2.1)

Here ds_4² = g_μν dx^μ dx^ν is the four-dimensional space-time metric. The space is highly warped near r ∼ 0. The scalar field r(t, x) can be thought of as a 4-d hypersurface embedded in the 5-d space (2.1). The action which governs the gravitationally coupled scalar is given by

    S = ∫ d⁴x √(−g) [ (M_Pl²/2) ℛ − T_3 h⁴(r) √(1 + h⁻⁴ g^μν ∂_μ r ∂_ν r) + T_3 h⁴(r) − T_3 V(r) ] .   (2.2)

Note that in this paper V has been made dimensionless by pulling out a factor of T_3. The kinetic term in (2.2) may be understood as a generalization of the kinetic term for a homogeneous scalar in flat four-dimensional space-time,

    S_kin = −T_3 ∫ d⁴x h⁴(r) √(1 − h⁻⁴ ṙ²) ,   (2.3)

whose integrand is proportional to the proper length of a relativistic particle traveling in the warped space. Another familiar limit is the non-relativistic limit, where |h⁻⁴ g^μν ∂_μ r ∂_ν r| ≪ 1. The action then reduces to the minimal case

    S ≈ ∫ d⁴x √(−g) [ −(T_3/2) g^μν ∂_μ r ∂_ν r − T_3 V(r) ] .   (2.4)

As we will see, in terms of a D3-brane moving in extra dimensions, the action (2.2) comes from the DBI and Chern-Simons action describing the low-energy effective world-volume field theory of a probe brane in the AdS and R-R field background. We assume that the potential V(r) has a maximum at r = 0 and falls as r > 0. For a generic non-flat potential, in the familiar case of (2.4), the scalar will undergo a fast roll and make the inflation impossible. Here the highly warped space near r ∼ 0 plays an important role. The idea is that the scalar velocity is restricted by the speed of light in the internal space, ṙ ≤ h². Therefore the requirement of slow rolling translates into the requirement of a small warp factor. This is interesting since an exponentially large warping is not difficult to find. In fact, it turns out that there are more stringent constraints coming from, for example, the strength of the background which supports the warp factor against the inflaton back-reaction, and the infrared closed string creation of the dS back-reaction. These constraints will be discussed in Sec. II A and Sec. III A.
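As a quick check of the non-relativistic limit quoted in (2.4), one can expand the square root in (2.2); a minimal sketch in the conventions above, writing X ≡ g^μν ∂_μ r ∂_ν r (the cancellation of the h⁴ potential terms between the DBI and Chern-Simons pieces is the point):

\sqrt{1 + h^{-4}X} \;\simeq\; 1 + \tfrac{1}{2} h^{-4} X + \mathcal{O}\!\left(h^{-8}X^2\right)
\;\;\Longrightarrow\;\;
-T_3 h^4 \sqrt{1 + h^{-4}X} + T_3 h^4 \;\simeq\; -\tfrac{T_3}{2}\, X ,

so for |h⁻⁴X| ≪ 1 the warp factor drops out of the kinetic term entirely, and the field behaves like a minimally coupled scalar with potential T_3 V(r).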
A. The inflation

We first study the zero-mode dynamics of the scalar inflaton, which drives the inflation. We ignore the spatial inhomogeneities of the scalar field, so that it is only a function of time, r(t). The four-dimensional metric g_μν is taken to be diag(−1, a²(t), a²(t), a²(t)), where a(t) is the scale factor. The action (2.2) then becomes

    S = ∫ dt d³x a³ T_3 [ −h⁴ √(1 − h⁻⁴ ṙ²) + h⁴ − V(r) ] .   (2.5)

The corresponding equations of motion are the Friedmann equation,

    H² = (T_3 / 3M_Pl²) [ h⁴/√(1 − h⁻⁴ ṙ²) − h⁴ + V ] ,   (2.7)

and the equation of motion (2.8) for r, which follows from varying (2.5). We will only need the information of the potential near r ∼ 0 and expand V(r) as

    V(r) = V(0) − (1/2) β H² r² + ⋯ .   (2.9)

We will start the scalar inflaton deep in the warped space, from r_0 ∼ 0. A realization of such an initial condition will be provided in Sec. III. For clarity, we make two approximations, to be verified at the end of this subsection. First, we approximate that during inflation the potential V(r) stays at the constant V(0) ≡ V_i. As we will see, this is because the inflaton moves over only a very short distance during the inflation. Second, the kinetic energy of the scalar field, namely the first two terms on the right-hand side of Eq. (2.7), is negligible compared to the potential V. This is because these two terms are red-shifted by the warp factor h⁴. Both assumptions hold because during inflation the inflaton is held inside the IR region for a sufficiently long time. This will translate into a not-very-restrictive upper bound on V_i. Eq. (2.7) is then significantly simplified. It gives a dS space with a Hubble constant

    H² ≈ T_3 V_i / 3M_Pl² .   (2.10)

From now on we will denote m² ≡ βH², as long as H is a constant. In the non-relativistic limit ṙ ≪ h², the equation of motion (2.8) for r reduces to the familiar form

    r̈ + 3H ṙ − βH² r = 0 .   (2.11)

If β ≪ 1, the potential (2.9) satisfies the slow-roll conditions and Eq. (2.11) determines the slow-roll velocity. It is also interesting to see how the warp factor will affect such dynamics, and we study it in Appendix B. Here we concentrate on the more general situation where β ≳ 1. In this case, the inflaton will be accelerated quickly to become relativistic if h² is small enough. We thus expand the inflaton evolution around the speed of light, ṙ ≈ h², as in (2.12)-(2.14). The subleading terms are suppressed at least by a factor of 1/Ht and are neglected if the condition (2.15) holds. The parameters p and λ in (2.12) are determined by matching (2.13) and (2.14). We get the asymptotic behavior

    r(t) ≈ −(R²/t) [1 + O(1/Ht)] ,   t < 0 ,   (2.16)

where the condition (2.17) is required for such an expansion. For the case that we are mostly interested in, β ≳ 1, the condition (2.15) is stronger than (2.17).

As emphasized in Refs. [17,40], the back-reaction of the relativistic inflaton can have a significant impact on the DBI action. The condition that such a back-reaction can be neglected can be estimated as follows. The warping scale caused by the energy density of the inflaton field in the internal space is characterized by R′⁴ ∼ γ/T_3, where γ = h²/√(h⁴ − ṙ²) is the Lorentz contraction factor. This scale has to be much smaller than that of the background, R⁴. Or equivalently, as we will discuss in Sec. III, the background warped space with R⁴ ∼ N/T_3 can be thought of as being created by N source D3-branes. The energy density of the relativistic probe D3-brane should be much smaller than that of the source for the back-reaction to be ignored. Using (2.16), this condition, γ ≪ N, becomes

    r ≫ βR²H/3N .   (2.18)

Let us now summarize the dynamics of the inflaton inside the throat. Starting from the place r ≫ βR²H/3N, where the back-reaction can be ignored, the inflaton travels ultra-relativistically toward the UV side of the warped space under the acceleration of the potential (2.9). The coordinate velocity is bounded by the causality constraint and is very small.
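A quick way to see the leading-order trajectory quoted in (2.16) is to saturate the causality bound; a sketch, assuming the AdS warp factor h = r/R of (2.1):

\dot r \simeq h^2 = \frac{r^2}{R^2}
\;\;\Longrightarrow\;\;
r(t) \simeq -\frac{R^2}{t} \quad (t<0),
\qquad
N_e \simeq -Ht \;\;\Longrightarrow\;\; r \simeq \frac{R^2 H}{N_e} ,

so the inflaton indeed becomes non-relativistic around r ∼ R²H (i.e. N_e ∼ 1), and it is the subleading corrections to (2.16) that give the finite Lorentz factor γ ≈ βN_e/3 used below.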
During this period, the inflaton is held up at the top of the potential for a sufficiently long time to trigger the inflation. The Lorentz contraction factor of the inflaton decreases in this process. Around r ∼ R²H, the inflaton starts to become non-relativistic due to the increased warp factor, but the coordinate velocity is in fact much larger. Inflation is ended and the inflaton undergoes a fast roll down to the bottom of the potential. During the whole inflationary period, the inflaton is relativistic. This period lasts for Δt ∼ Nβ⁻¹H⁻¹, so the total number of inflationary e-folds is

    N_tot ∼ N/β .   (2.19)

To have a large N, we need R to be bigger than T_3^{-1/4}; for example, for R ∼ 10 T_3^{-1/4} we have N ∼ 10⁴. In terms of string theory flux compactification, such a value is not difficult to find. In fact, as we will see from a more detailed model in Sec. V, a sufficient amount of inflation proceeds even if β is considerably larger than one. Within the range (2.15) and (2.18), the inflaton position r is related to the latest e-folds N_e by

    r ≈ R²H/N_e .   (2.20)

This expression can be turned around and viewed as a requirement on h in order to have N_e e-folds of inflation. This is easy to satisfy, since the warp factor is usually exponentially small. Therefore we will consider the constraint from the back-reaction (2.19) to be stronger.

We have a few comments here. Besides the lower bound (2.18) coming from the back-reaction, we will also have corrections related to the initial starting point r_0 at −t_0, if we assume the inflaton starts there with zero velocity. This gives the asymptotic behavior (2.16) a correction of order R²/t_0.¹ Nonetheless, as mentioned, because the main constraint from the back-reaction on the total number of inflationary e-folds is usually much stronger than the requirement of having a relatively large warping, we will always assume that the inflaton starts from a small enough r, so that the above-mentioned correction can be ignored.

¹ If −t_0 ≪ −β²N_e³/H, or r_0 ≪ R²H/β²N_e³, this correction does not affect the first two leading terms in (2.16), and therefore does not change our analyses. If −β²N_e³/H < −t_0 ≪ −N_e/H, or R²H/β²N_e³ < r_0 ≪ R²H/N_e, the second term in (2.16) is affected. But this will only lower the velocity and make the back-reaction smaller. Having a larger −t_0 will then decrease the total number of e-folds.

The motion of the inflaton within the region where the back-reaction cannot be ignored is under less precise control so far. A qualitative description is discussed in [40]. The time scale is expected to be roughly of order Nβ⁻¹H⁻¹ if we think of this region as having an effective warp factor similar to the lower bound (2.18). This will make the inflationary period last even longer. Since the total number of e-folds (2.19) is already very large, in this paper we assume this period to be outside of the observable universe. More importantly, there are other more stringent constraints coming from the back-reaction of multiple D3-branes and infrared closed string creation. We will describe these in terms of strings and branes in Sec. III A.

Interestingly, the DBI inflation persists even if β ≲ 1. What happens is that, as we decrease β, a growing period of slow-roll inflation smoothly matches onto the end of a long period of DBI inflation. We will study this in Appendix B.

We now check the consistency requirement for the two approximations made at the beginning of this subsection. First, the distance Δr ∼ R²H that the inflaton moves over during the inflation lowers the potential by ∼ βH⁴R⁴.
Second, in Eq. (2.7), the kinetic energy terms are red-shifted by the warp factor and remain much smaller than V, which is very easy to satisfy; normally having HR ≪ 1 is enough.

B. Density perturbations

In the previous subsection we have studied the zero-mode evolution of the inflaton field and the gravitational background. In this subsection we will study perturbations around it. In Ref. [54], Garriga and Mukhanov developed a general formalism to calculate the density perturbations for their kinetic-energy-driven inflation model [55]. Their analyses are very general and we can directly adapt them here as well. A similar application can be found in [24].

Before we start the rigorous derivation, we would like to present an intuitive approach [28] which gives a more explicit interpretation of the underlying physics in our case. As we have seen, a special property of the inflaton in our case is that it travels relativistically and the corresponding Lorentz contraction factor γ is decreasing. If we choose at each moment an instantaneous frame which moves at the same speed as the inflaton, the zero-mode velocity of the inflaton vanishes to this observer. (This is a good approximation for large N_e, because Δγ/γ ≈ ΔN_e/N_e, so the relative change in γ is negligible over a duration of several e-folds.) Because of the time dilation, the Hubble constant is increased by a factor of γ to this moving observer. We can then use the result for the scalar fluctuations in the minimally coupled (non-relativistic) case, namely δr ≈ γH/(2π√T_3). This amplitude is essentially determined by applying the uncertainty principle to the inflaton momentum generated within a Hubble horizon of size H⁻¹γ⁻¹. After they are stretched outside of the Hubble horizon, their amplitudes get frozen because they are no longer in causal contact. We then switch to the lab observer. The horizon size remains the same, since it is in the direction orthogonal to the velocity, but the frozen scalar amplitude will be reduced by a factor of γ⁻¹ because of the relativistic Lorentz contraction. So we get δr ≈ H/(2π√T_3), which is the same as the slow-roll case, except that the horizon size is now reduced by a factor of γ⁻¹. This horizon is also called the sound horizon.

Because of these scalar inhomogeneities, different spatial parts of the universe will end the inflation at different times [6,7] (in a gauge where we set the unperturbed slice synchronous). For small perturbations δr ≪ r, the time difference is

    δt ≈ (δr/ṙ)_* ≈ HR²/(2π√T_3 r_*²) ≈ N_e²/(2π√T_3 H R²) .   (2.21)

In the third step, Eq. (2.20) is used. The subscript * means that the variable is evaluated at the time of horizon crossing, when the corresponding mode is frozen. This time delay seeds the large-scale structure formation [6,7,56,57]. On the scale of the Cosmic Microwave Background (CMB), the resulting density perturbation is given by

    δ_H = (2/5) H_r δt_r ≡ (2/5) ε_r H δt .   (2.23)

In the simplest case, ε_r = 1. But for later purposes, we define ε_r ≡ H_r δt_r/(H δt). Notice here that we have denoted the Hubble constant H_r during the reheating differently from the Hubble constant H during the inflation. In the usual field theory we normally assume that they are approximately equal. But applied to the multi-throat string compactification, they may be very different, because the reheating can happen in a throat not responsible for the inflation. Independent warp factors cause the subtlety of a possible period of cosmological rescaling process in such a throat. This cannot be described by an effective single scalar field theory and may be imposed as a boundary condition.
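Chaining the intuitive estimates gives a compact order-of-magnitude formula; a sketch, assuming the leading-order relations δr ≈ H/(2π√T_3), ṙ ≈ r²/R², Eq. (2.20), and ε_r = 1 (numerical factors such as the 2/5 in (2.23) are dropped):

\delta_H \;\sim\; H\,\delta t \;=\; H\,\frac{\delta r}{\dot r}
\;\sim\; \frac{H^2}{2\pi\sqrt{T_3}}\,\frac{R^2}{r_*^2}
\;\sim\; \frac{N_e^2}{2\pi\sqrt{T_3 R^4}}
\;\sim\; \frac{N_e^2}{2\pi\sqrt{N}} ,

where the last step uses R⁴ ∼ N/T_3. This tracks only the parametric dependence; the full expression (3.6) contains additional numerical and n_B-dependent factors, but the 1/√N scaling is why fitting the observed δ_H in Sec. V pushes N_B to very large values.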
It also has the effect of shifting the wave-number and rescaling δt → δt_r by a related factor. We leave these details, specific to string models, to Secs. IV & V.

Let us now start to apply the formalism from [54]. The fluctuations around the zero-mode evolution (2.5) and (2.16) can be parameterized in the following way [10],

    ds² = −(1 + 2Φ) dt² + a²(t)(1 − 2Φ) δ_ij dx^i dx^j ,   r = r_0(t) + δr(t, x) ,   (2.24)

where we have added the subscript 0 to denote the zero-mode evolution. Following the notation in [54], we denote the pressure and energy density as p and ε. The equations of motion for the perturbations follow from the Einstein equations and take the form of Eqs. (2.27) and (2.28), where the sound speed c_s is defined as

    c_s² ≡ p,_X / ε,_X ,   X ≡ ṙ² .   (2.29)

In (2.27) and (2.28), ε, p and c_s are all evaluated on the zero-mode solutions, which are

    ε = h⁴/√(1 − h⁻⁴ ṙ_0²) − h⁴ + V ,   p = −h⁴ √(1 − h⁻⁴ ṙ_0²) + h⁴ − V ,   c_s = √(1 − h⁻⁴ ṙ_0²) = γ⁻¹ .   (2.30)

Using the new variables ξ and ζ, with

    ζ = H δr/ṙ_0 + Φ ,   (2.32)

and the relation Ḣ = −(T_3/2M_Pl²)(ε + p), we can rewrite the equations of motion as Eqs. (2.33) and (2.34). Defining v = zζ, with z ∝ a√(ε + p)/(c_s H) (2.35), we can simplify Eqs. (2.33) and (2.34) as

    v′′ + (c_s² k² − z′′/z) v = 0 ,   (2.36)

where the prime denotes the derivative with respect to the conformal time η, defined by dη = dt/a(t). Another equation is of first order and becomes auxiliary. To evaluate z′′/z we use (2.35) and (2.30). The leading contribution comes from the scale factor a, which has the strongest time dependence. The next order comes from ε, p and c_s, which all vary more slowly. The time dependence of H is neglected. So we get z′′/z ≈ 2/η² (2.37). Hence for large N_e, Eq. (2.36) has the quantum-fluctuation solution (2.38), v_k ≈ e^{−ic_s kη}/√(2c_s k), deep inside the sound horizon. For k/a ≪ H/c_s, it is also easy to get the solution (2.39), in which ζ is frozen; the coefficient of z is obtained (up to a constant phase) by matching to (2.38) at the horizon crossing c_s* k = a* H* and using the explicit form of z. Hence we see the well-known phenomenon that, in terms of ζ, the quantum fluctuations (2.38) evolve into the frozen classical perturbations (2.39). We also see that the horizon size is c_s* H⁻¹, in agreement with the previous intuitive argument.

Under the assumption of instant and efficient reheating, the perturbations of the scalar field are transformed into density perturbations. The corresponding spectral density is

    P_R(k) = (k³/2π²) |ζ_k|² ,   (2.40)

where ζ_k is the Fourier mode of ζ defined in (2.32). The density perturbation δ_H is related to the spectral density P_R(k) by δ_H² = (4/25) P_R(k). So we recover the result (2.23) (except for the difference between H_r δt_r and H δt, which we discuss below).

To compare with the previously mentioned physical interpretation, we obtain the relation between ζ and Φ using (2.28) and (2.32). The second term is smaller than the first by a factor of 1/Ht ≈ −1/N_e. Since V ≫ h⁴γ, we have ζ ≫ Φ. This means that the first term in (2.32) dominates. The physical interpretation of this term fits our previous intuitive arguments in the convenient gauge choice. So a possible jump of the Hubble constant from H to H_r, and of the time delay from δt to δt_r, imposed as an approximate boundary condition at the reheating, is translated into a jump in ζ by a factor of ε_r. So the density perturbation will have an additional factor ε_r (as long as ε_r ζ ≫ Φ). Such a mechanism is provided when we discuss more reheating details in Sec. IV, and it arises quite generally in some string models in Sec. V and Appendix B.

III. DBI INFLATION IN STRING THEORY

It is important to ask how the field theory described in the previous section may be embedded in string theory. One natural place to realize it is to use the mobile D3-branes in the flux-stabilized string compactification. This was described in a multi-throat brane inflation scenario [28]. In this setup, the position of the branes in the extra dimensions is the inflaton, as in brane inflation [33], and the warped extra dimensions correspond to the warped internal space.
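For orientation, the frozen amplitude that this procedure yields can be written compactly; a sketch in common conventions (not the paper's own equation numbering), using c_s = γ⁻¹ and ε + p = γ ṙ_0² (in T_3 units) from (2.30):

P_{\mathcal{R}}^{1/2}(k)
\;=\; \left.\frac{H^2}{2\pi\sqrt{T_3}\,\dot r_0}\right|_{c_{s*}k = a_* H_*} ,

note that the explicit γ factors cancel between the sound speed and ε + p, so the result is the slow-roll formula with the background velocity evaluated at sound-horizon crossing, reproducing the intuitive time-delay estimate ζ ≈ H δr/ṙ_0 in (2.32).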
Giddings, Kachru and Polchinski (GKP) [13,12] show that, near a conifold singularity in a type IIB string compactification on a Calabi-Yau manifold, the presence of NS-NS and R-R three-form fluxes on two dual cycles induces gravitational and R-R charges similar to those of transverse D3-branes. The equivalent D3-charge is

    N = KM ,   (3.1)

where K and M are the numbers of NS-NS and R-R fluxes respectively, and the characteristic length scale R of the resulting warped space is given by

    R⁴ ∼ g_s N α′² .   (3.2)

In addition, this warped space has a minimum warp factor at the IR end,

    h_min ∼ exp(−2πK/3g_s M) .   (3.3)

The fluxes generally fix the complex moduli and the axion-dilaton. A non-perturbative superpotential is used to stabilize the Kähler moduli, and antibranes are introduced to lift the vacuum to dS space [14] (alternatives are studied in [52,53]). A multi-throat configuration is a generalization of such a setup, which contains many throats of different warp factors in different places in the extra dimensions. It will be interesting to construct it explicitly, but in this paper we assume its existence. We add the D3-branes whose moduli in the throats are the candidate inflatons. The volume stabilization of the extra dimensions and the interactions between the D3 and D7-branes will generate potentials for these D3-brane moduli. Details of such interactions are quite complicated and still under active study.

Specifically, in this paper we will be interested in the following situation. Consider the case where the D3-brane moduli receive quadratic potentials with mass-squared of O(H²) or larger. This is actually a generic situation, as we have seen in the eta-problem of slow-roll inflation model building. In order to have slow-roll inflation, these contributions have to cancel each other to a certain precision. The tuning involved depends on the mass range of the contributing terms and the adjustable parameters. Here we do not address the origin of these mass contributions. (Studies can be found in [16,19,25,29,30].) But we do not require the above-mentioned fine-tuned cancellations. In the multi-throat setup, we assume some throats have negative mass-squared, and some have positive mass-squared. We note that these potentials are repulsive or attractive for the D3-brane moduli, but not for the (fixed) positions of the throats. The potential that we considered in (2.9) corresponds to the repulsive ones.

An immediate question is then how the D3-branes can start from the IR end of a repulsive throat. This can be done by considering the dynamics of anti-D3-branes in this setup. Like D3-branes, throats will attract and annihilate anti-D3-branes. This process proceeds through the flux-antibrane annihilation [39,22]. However, there are two important differences between the flux-antibrane annihilation and the brane-antibrane annihilation. First, if the number p of the anti-D3-branes is much smaller than the R-R flux number M, this annihilation proceeds through quantum tunneling. So the anti-D3-branes in these throats can have different lifetimes. This is necessary if antibranes are used to lift the AdS vacuum and provide the inflationary energy [14,16]. Second, and more important to our current discussion, a number of D3-branes will generally be created in the flux-antibrane annihilation. The reason is that, when the anti-D3-branes annihilate against the NS-NS fluxes, the total D3-charge of the throat can only change in steps of M, according to (3.1). Unless p is a multiple of M, D3-branes will be generated at the end of the annihilation to conserve the D3-charge.
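To illustrate the size of the hierarchy (3.3) can generate, a sample evaluation with illustrative flux numbers (not values chosen by the paper):

g_s M = 12,\;\; K = 33
\;\;\Longrightarrow\;\;
h_{\min} \sim e^{-2\pi K / 3 g_s M} = e^{-2\pi \cdot 33/36} \approx e^{-5.8} \approx 3\times 10^{-3} ,

and warp factors as small as the e^{−30} quoted in Sec. V follow from modestly larger ratios K/g_s M.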
The moduli of these D3-branes become the inflaton required in our DBI inflation.

A. DBI action and its validity

The low-energy world-volume dynamics of a probe D3-brane in a warped space such as (2.1) is described by the Dirac-Born-Infeld (DBI) and Chern-Simons action [59],

    S = −T_3 ∫ d⁴ξ √(−g) h⁴(r) √(1 + h⁻⁴ g^μν ∂_μ r ∂_ν r) + T_3 ∫ d⁴ξ √(−g) h⁴(r) ,   (3.4)

where T_3 is the D3-brane tension and ξ^a (a = 0, 1, 2, 3) are the D3-brane world-volume coordinates. (More explicitly, the four-form potential sourcing the Chern-Simons term is C_{μ0···μ3} = ε_{ν0···ν3} g^{ν0}_{μ0} ··· g^{ν3}_{μ3} (1/√(−g)) h⁴(r).) The validity of the DBI action requires that the energy involved in the effective field theory be much smaller than the mass of the massive W-bosons stretching between the probe brane and the horizon [59,17]. From (2.16) this requirement, i.e. ṙ/r ≪ r/α′, becomes R² ≫ α′, which is in the region where we trust the supergravity background. More importantly, the probe dynamics is guaranteed only when the back-reaction of the D3-brane is small [17,40]. This is the main constraint that we used in Sec. II A. Now we can understand it more easily in this context. The warped space is the same as the near-horizon region of a stack of N source D3-branes. The relativistic effect increases the proper energy density of the probe D3-brane by a Lorentz contraction factor γ. If we treat the gravitational field strength of such a relativistic brane as being increased by roughly the same amount, we need γ ≪ N in order to neglect such a back-reaction.

There are other effects that are more restrictive than the lower bound (2.18). First, the number of D3-branes created by the flux-antibrane annihilation is M − p. If all these branes stick together and exit the throat, the back-reaction will be increased by a factor of M for p ≪ M. So the total number of inflationary e-folds is reduced to

    N_tot ∼ (N/M) β⁻¹ ,   (3.5)

which we approximate as √N β⁻¹ (taking M ∼ √N). Second, because the string scale is red-shifting toward the IR end, the Hubble expansion will be able to create closed strings some place in the throat. This is possible when the proper Hubble energy becomes comparable to the string scale, i.e. h⁻⁴H⁴ ≈ α′⁻². But in this subsection we are more interested in its effect on the background metric, which is responsible for the brane speed limit. Such an effect only gets significant when the energy density of the closed string gas/network becomes comparable to the source. It will then smear out the background metric, and the effective warp factor will no longer decrease. Such a critical warp factor h_c can be estimated by h_c⁻⁴H⁴ ∼ NT_3, where the left-hand side is the proper energy density of the closed string gas/network smeared out in r < r_c, and the right-hand side is the proper energy density of the source branes (or the equivalent fluxes). Using R⁴ ∼ N/T_3, we get r_c ∼ R²H/√N. So the total number of e-folds is reduced to √N. So, for a stack (M) of branes, we will estimate N_tot as √N β⁻¹ from (3.5), while for a single brane, N_tot can be estimated as √N.

B. Stringy quantum fluctuations on D3-branes

According to Sec. II B, the field quantum fluctuations on the D3-branes generate the density perturbations, given in (3.6), where we have used (2.23), (3.2), T_3 = (2π)⁻³ g_s⁻¹ α′⁻², and considered n_B mobile D3-branes. The corresponding spectral index, (3.7), is red and running negatively. The validity of this analysis requires the bound (3.8), which roughly means that the Hubble energy of the dS space has to be smaller than the red-shifted string scale, the valid region for field theories.
We note that the zero-mode field theory analyses should still remain valid, although the perturbation analyses break down beyond (3.8). As long as the background can be trusted under the conditions that we discussed in Sec. III A, the only fact used for the zero mode is the relativistic speed-limit constraint. We can also rewrite (3.8) in terms of the latest e-folds N_e, using γ ≈ βN_e/3 [from (2.16)] and (3.6), as the critical e-folding (3.10). Hence, comparing to the naive extension of (3.6) beyond (3.10), the bound (3.8) offers a suppression mechanism for larger scales. It is interesting that such a mechanism is built in without adding any extra features to the model. Let us here simply suppose that for modes beyond (3.10) the bound is saturated, and study some of its properties. The density perturbation is then given by (3.11). The spectral index, (3.12), is now blue and running positively. Also, (3.12) has to be smoothly connected to (3.7) through a transition region. Of course, here we have only studied the bound, and a full stringy treatment will be desirable to give a more accurate account. We then have an interesting possibility of observing stringy effects: branes, coming from an extremely infrared region (B-throat), imprint stringy information on their world-volume in terms of quantum fluctuations and bring it to our world (S or A-throat).

IV. THROAT REHEATING BY RELATIVISTIC BRANES

Reheating after inflation is important to populate the universe. In brane inflation, this is achieved by brane collision and annihilation in the S (or A) throat. In our model, this is sometimes caused by ultra-relativistic branes. In this section, we discuss two important processes for such a reheating [40], namely the relativistic collision and the cosmological rescaling.

A. Annihilation versus collision

We first discuss the direct string production in brane annihilation. The string dynamics in brane-antibrane annihilation is described by Sen's boundary conformal field theory of the rolling tachyon [60][61][62]. Ref. [63] has studied the one-point function on the disk diagram in this rolling tachyon background and shown that it is capable of releasing all the brane energy to closed strings. What happens to the D3-anti-D3-branes is that the initial inhomogeneities on the brane world-volume will grow and eventually make them disconnected D0-anti-D0-branes, which then emit all the energy to a non-relativistic coherent state of heavy closed strings. Since the Standard Model will have to live on some surviving D3-branes or anti-D3-branes, open string creation on such residual branes will be important for the Big Bang. Loop diagrams with one end on the rolling tachyon and another on the residual branes [64] then become interesting (see Fig. 1 (B)). This is because the exponentially growing oscillator modes [65] in Sen's boundary state will create virtual closed strings and contribute to the loop diagrams. Due to their rapid time dependence, these are candidate competing diagrams against the disk, partially releasing brane energy to open strings. However, there are other loop diagrams with both ends on the rolling tachyon (see Fig. 1 (C)). They only create closed strings. Although only a limited amount of information is known about such diagrams, it is not difficult to see that the evolution of (C) is much faster than that of (B), since both ends of (C) are time-dependent while only one end of (B) is [64]. So again closed strings are dominantly produced in this process (we assume that the difference between the closed and open string couplings is not big). (It is possible that subsequently the heavy closed strings decay to both massless closed and open strings.
This cosmological consequence deserves further study [66].)

The annihilation process is important when the colliding branes and antibranes are non-relativistic. For example, in KKLMMT, if we assume that the slow-roll conditions hold all the way from the UV entrance to the IR end, the brane velocity will remain far below the speed limit. However, in our case there may not be a direct relation between the inflationary energy scale, which can come from a steep moduli potential, and the total warping of the S-throat. Hence the velocity of the D3-branes may be much faster, and there may exist a region in the S-throat where the branes move relativistically. Such fast-rolling D3-branes will cause interesting effects on the reheating details.

The first feature is that the probe branes can become ultra-relativistic, and the maximum value of their Lorentz contraction factor is determined by the D3-charge of the background throat. To illustrate, let us consider a quadratic attractive potential in the S-throat with a positive m_S² [see (A8)],

    V(r) = (1/2) m_S² r² .   (4.1)

Consider n D3-branes that roll out of the B-throat and enter this S-throat directly. The total inflationary potential V_i in (2.9) is a net contribution of the repulsive potential (2.9) of the B-throat and the attractive potential (4.1) of the S-throat. After inflation, this potential is converted to the D3-brane kinetic energy when they are in the S-throat (but still away from the IR end). This provides the initial velocity for the D3-branes. We denote this velocity as v_0; it is given in (4.2). A detailed dynamics of such D3-branes can be solved using the DBI action, and we can find the corresponding place where the probe back-reaction becomes important. We discuss this in more detail in Appendix A. Here let us summarize the relevant main results. It turns out that, as long as the initial velocity v_0 satisfies the condition (4.3), the gravitational coupling of these probe branes can be ignored. The resulting dynamics then becomes very simple. It is determined by the conserved energy density

    E/T_3 = h⁴/√(1 − h⁻⁴ ṙ²) − h⁴ + V(r) .   (4.4)

The D3-branes go through three different phases after the inflation. In the first stage they are non-relativistic and accelerated by the potentials (2.9) and (4.1) (mainly in the UV sides of the B and S-throats) to reach a velocity v_0. Such a velocity reaches the speed limit at h ≈ √v_0 in the S-throat, and the branes enter the second, relativistic phase. During this phase the energy density (4.4) is dominated by the first term, i.e. the kinetic energy. The proper spatial volume of the branes shrinks, and the conserved coordinate energy density is converted from the brane tension to the relativistic kinetic energy. [This does not happen in the non-relativistic phase, although the proper spatial volume is also shrinking there, because of the cancellation from the R-R field, which is the second term in (4.4).] The Lorentz contraction factor increases as the branes fall, until nγ becomes O(N_S) at the warp factor h_r defined by

    n γ(h_r) ∼ N_S ,   (4.5)

and the probe back-reaction becomes important. The D3-branes then enter a non-comoving phase. We will have more to say about this phase in the next subsection.

As long as the reheating happens after the first phase, the energy transfer is dominated by relativistic collision rather than annihilation. In terms of direct open string creation, this process does not have the above-mentioned problem associated with the brane annihilation. Namely, in Fig.
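One way to make the onset of the non-comoving phase quantitative, assuming the coordinate energy density per brane stays at its initial kinetic value ½T_3 v_0² throughout the relativistic phase; a sketch, not the paper's explicit formula:

h^4 \gamma \;\approx\; \tfrac{1}{2} v_0^2
\;\;\Longrightarrow\;\;
\gamma(h) \;\approx\; \frac{v_0^2}{2h^4},
\qquad
n\,\gamma(h_r) \;\sim\; N_S
\;\;\Longrightarrow\;\;
h_r \;\sim\; \left(\frac{n\,v_0^2}{2N_S}\right)^{1/4} ,

so γ grows like h⁻⁴ as the branes fall down the throat, and a larger brane number n or a smaller background charge N_S moves the breakdown point h_r further up the throat.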
1, regarding both branes as colliding ones, the interaction between the colliding branes is only in terms of diagrams like (B). We estimate the energy density of the created open strings to be of the same order of magnitude as the collision energy density. Some interesting properties of the relativistic brane collision are studied in [67].

B. Cosmological rescaling

We now discuss a second effect closely related to the same process. If the reheating happens during the second phase discussed above, the reheating energy density is still approximately the same as the inflationary energy density, as in the non-relativistic annihilation case. It is only the way of the energy transfer that has been changed, from annihilation to ultra-relativistic collision (which is good in terms of direct open string creation). However, this is no longer true if the reheating happens in the third, non-comoving phase. We will argue that such a phase will introduce effects not captured in an effective field theory, for example a jump in the Hubble constant.

Although the precise mathematical description of the brane dynamics when the back-reaction is significant is unavailable, we can think of an analogy of two identical stacks of branes approaching each other. Because their energy densities are similar, one will not feel the space being exponentially warped by the other. Therefore the longitudinal scale of the brane does not significantly contract. A similar phenomenon should also happen for the relativistic branes. This is taken to happen at the h_r given in (4.5), where the energy density of the relativistic probe branes is comparable to that of the source branes (or the equivalent fluxes). Starting from h_r, the warped background becomes negligible to those probe branes, and their proper volume is no longer contracting significantly. Once these branes collide with other branes at the IR end, they will oscillate and expand. Their energy density decreases through expansion or radiation. Meanwhile, the background is restoring. In fact, it does not take too much expansion to reduce the energy density of these heated branes, say to one tenth of the original value. After that, they can again be treated as a probe of the background.

Since we want this process to be connected to the standard Big Bang, we will be interested in the Poincaré observer on the D3-branes. To this observer, at the end of the restoration process, the Planck mass takes the usual value in the sense of Randall and Sundrum. This coordinate choice of such an IR Poincaré observer is important; the scale of such a choice is indicated in Fig. 2 by a dashed brane. (The proper energy is independent of such a choice.) To this observer, the space-time inhomogeneity scale on the probe D3-branes has changed. This is illustrated in Fig. 2. These inhomogeneities have been geometrically rescaled by a factor of g_r = h_r/h_S, where h_S is the total warping of the S-throat; in the previous example this gives (4.6). To obtain an order-of-magnitude estimate, we will ignore the fast restoration process and simply apply the rescaling factor g_r to the corresponding length scale, time duration, or energy scale with respect to their values at h_r. These effects, not described in a scalar field theory, are then approximated as imposing effective boundary conditions at the beginning of the reheating. For example, the time difference δt of the inflation ending is geometrically increased by a factor of g_r; the Hubble constant is reduced by a factor of g_r⁻², because the energy density is geometrically decreased by a factor of g_r⁻⁴. Such rescaling effects can reduce the ζ in (2.32), and therefore the density perturbations, as we will see in an example in the next section.
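The net effect of these boundary conditions on the perturbation amplitude can be packaged into the ε_r defined below (2.23); a sketch using only the scaling rules just stated (δt_r = g_r δt and H_r = g_r⁻² H), under the simplifying assumption that nothing else changes:

\varepsilon_r \;\equiv\; \frac{H_r\,\delta t_r}{H\,\delta t}
\;=\; \frac{\left(g_r^{-2}H\right)\left(g_r\,\delta t\right)}{H\,\delta t}
\;=\; g_r^{-1} \;=\; \frac{h_S}{h_r} \;<\; 1 ,

so a deep rescaling (h_r ≫ h_S) suppresses the density perturbations, which is the mechanism Case B of the next section uses to reduce the required flux number N_B.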
V. A MULTI-THROAT MODEL

A multi-throat brane inflationary scenario has been described in [28], as well as in the Introduction and Sec. III, so here we only briefly summarize the main points. We start by looking at the anti-D3-branes in the multi-throat configuration. They are attracted toward the throats, where they either annihilate against the fluxes through a classical process or stay inside in a quasi-stable state and annihilate through quantum tunneling. The end products are generally some D3-branes. For those throats (B-throats) having potentials like (2.9) for the D3-brane moduli, the D3-branes will exit. The DBI inflation discussed in Secs. II and III then takes place. These branes eventually settle down in throats with attractive D3-brane moduli potentials (S or A), or in the bulk.

The purpose of this section is to make this model more quantitative and to improve the calculations by taking into account the aspects described in Secs. III B and IV. We also discuss the tension of the cosmic strings created at the end of inflation from brane annihilation/relativistic collision, and during the Hagedorn transition of the dS epoch.

We first consider the two-throat case with only B- and S-throats, where the S-throat is defined to have a RS warp factor. The Hubble constant is simply given by the inflationary potential, $H^2 \approx V_i T_3/(3 M_{\rm Pl}^2)$. The inflationary potential can be dominantly provided by the antibranes in the S-throat, or by a moduli potential; these will be discussed as Case A and Case B, respectively. The initial velocity $v_0$ of the $n$ D3-branes entering the S-throat is determined by the moduli potential; its Case B form, Eq. (5.3), will be used below. In the latter case, the velocity does not have to be as small as in Case A, because the height of the moduli potential is not related to the S-throat warp factor; the rescaling will then generally happen in a deep throat. In Case C, we consider the addition of an A-throat, where the antibranes are the main source of $V_i$.

Case A: If the reheating process involving brane collision and/or annihilation happens before the DBI action breaks down in the S-throat, we have the usual relation $H \approx H_r$ (assuming efficient reheating to open strings). Such a situation arises when the initial velocity of the branes is small, so that the condition (5.4) holds, where $h_r$ is given in (4.5). For example, if the inflationary energy is dominated by $n_S$ antibranes at the end of the S-throat, we have $V_i \approx 2 n_S h_S^4$. The D3-brane kinetic energy density ($\sim \tfrac12 n v_0^2$) has to be smaller than $V_i$, since the moduli potential is not dominant; hence the condition (5.4) is satisfied. In such a case, we notice that the density perturbation is independent of the warp factor (and $V$), so $h_S$ can take, e.g., the value $e^{-30}$ to incorporate the RS model. However, at the same time, fitting the observed $\delta_H \approx 1.9 \times 10^{-5}$ requires a very large $N_B$ [31,28] (similar to [24]). We take $N_e \approx 32$, because the inflation is driven by the electroweak scale of the S-throat, and get $N_B \sim 2.4 \times 10^9$ (estimating $n_B \sim \sqrt{N_B}$). The total number of inflationary e-folds is $5 \times 10^4$ for $\beta \approx 1$. If we require that the stringy suppression of $\delta_H$ discussed in Sec. III B happens near the largest observable scales, e.g. $(N_e)_c \approx 29$, we need $\beta \approx 15$; the total number of e-folds then becomes $3 \times 10^3$.
This is our simplest case, but it remains to be seen how naturally such a large $N_B$ can be obtained. (Getting a large $N_B$ through orbifolding is discussed in [24].) It is interesting to note that $N_B$ can be significantly reduced if the density perturbations are seeded by (3.11), as we shall discuss further in the later comments. Next, we discuss the case where the possible cosmological rescaling helps to reduce the density perturbations, and therefore $N_B$.

Case B: As we emphasized, in our scenario the inflationary energy does not have to be correlated with the warp factor of the S-throat; it can also be sourced by a steep moduli potential. Then the reheating Hubble constant $H_r$ differs from $H$ if the reheating happens in the non-comoving region of the S-throat. It is determined by the energy density $V_r$ on the reheated branes after the background restoration. This can be estimated following the description of Sec. IV B, with the result (5.6), a product of three factors. For the first factor we take $\alpha \sim 0.1$; that is, we assume that the D3-branes can be treated again as a probe once their energy density is reduced to one tenth of that of the source. Reasonable variation of $\alpha$ will not significantly affect our later estimates. The second factor is the conserved coordinate energy density of the relativistic branes in the comoving region. The last factor is the rescaling factor. The result can also be understood simply as follows: once the branes enter the non-comoving region, the final Lorentz contraction factor (set by $n\gamma \sim O(N_S)$) is determined by the strength of the background and is independent of the initial brane velocity or of the place where the DBI action breaks down.

The reheating Hubble constant is now determined by $V_r$ through the Friedmann relation, $H_r^2 \approx V_r T_3/(3 M_{\rm Pl}^2)$. The reheating time delay after the rescaling process then follows, where Eq. (3.6) is used. The density perturbation can be estimated as in (5.10); in the last step, Eqs. (4.6) and (5.3) are used. The first factor in (5.10) is the effect of the rescaling: at $h_S \sim h_r$ (and $\alpha \sim 1$) it smoothly goes to one and we recover (5.5). We can turn Eq. (5.10) around and use the measured density perturbation at the corresponding e-fold to determine the responsible inflationary potential, Eq. (5.11).

As we discussed in Sec. III B, there is a natural suppression mechanism for the density perturbations at long wavelengths. This happens at the critical e-folding $(N_e)_c \approx (3/\beta)\,\tilde N^{1/4}$ (5.12). If this is responsible for the observed CMB suppression near the IR end, we can determine $\tilde N$. This is the strategy that we will use in the following to determine the values of $\tilde N$ and $N_e$. To do this we first estimate the total number of e-folds needed to account for the observable universe. We focus on the largest scale $R_0 \approx 10^{42}\,{\rm GeV}^{-1} \approx 10^4\,{\rm Mpc}$ near the IR end of the CMB. The corresponding scale $R_r$ at the time of reheating can be estimated from the redshift relation $R_r \approx R_0\, T_0/T_r$ (5.13), where $T_0 \approx 2.7\,$K is the current temperature and the reheating temperature is $T_r \approx V_r^{1/4}$. On the other hand, the Hubble length after rescaling is given by (5.14). Here the factor $g_r$ is the rescaling effect discussed in Sec. IV B, and the factor $\gamma^{-1}$, also known as the sound speed, is the relativistic effect discussed in Sec. II B; $\gamma$ can be calculated using (2.16), $\gamma \approx \beta N_e/3$. Equations (5.11), (5.13) and (5.14) then tell us the e-fold $N_e$ corresponding to the IR end of the CMB, Eq. (5.15). $\tilde N$ and $N_e$ can be determined by requiring that the $(N_e)_c$ in (5.12) be several (e.g., three) e-folds below the $N_e$ in (5.15) (setting $\beta \approx 1$ here). Therefore the inflationary energy scale follows from (5.11); in this estimation we used $\alpha \sim 0.1$, $\beta \sim 1$ and $N_S \sim 10^4$. Among the 48 e-folds of horizon stretching required to account for the homogeneity and flatness of our observable universe, 43 e-folds are given by the inflation, and the last five e-folds are given by the rescaling.
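As an arithmetic cross-check of the Case A numbers quoted above, the total number of DBI e-folds obeys $N_{\rm tot} \approx \sqrt{N_B}/\beta$ (see Appendix B). The short Python sketch below reproduces the quoted totals from $N_B \sim 2.4\times 10^9$; the relation and the inputs are from the text, and the script is only a consistency check:

```python
import math

N_B = 2.4e9  # flux number fitted from delta_H ~ 1.9e-5 in Case A

for beta in (1.0, 15.0):
    N_tot = math.sqrt(N_B) / beta
    print(f"beta = {beta:4.1f} -> N_tot ~ {N_tot:.1e}")
# beta =  1.0 -> ~4.9e4, matching the quoted 5 x 10^4 total e-folds
# beta = 15.0 -> ~3.3e3, matching the quoted 3 x 10^3
```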
Case C: There are other possibilities. Let us consider adding an A-throat, with inflation ending by brane-antibrane annihilation in this throat. One option is to assume that the hierarchy problem is not, or is only partially, solved by the RS mechanism, e.g. we live in an A-throat. Then, for example in Case A (replacing the subscript S with A), $h_A T_3^{1/4}$ need not be at the TeV scale. From Eq. (2.20), $N_e$ e-folds of inflation happen as long as $h_A$ and $h_B$ satisfy the relative relation given in [28]. Note that this includes the case where the only throat required is the B-throat, namely $h_A = 1$. The upper bound on the inflation scale comes from the current experimental tensor-mode bound [68,69] on $P_h/P_R$, which translates into $V_i T_3/M_{\rm Pl}^4 < 1.7\times 10^{-8}$ (5.19). Another interesting option is to further add an S-throat. Then the reheating may happen either because some branes coming out of the B-throat enter both the A- and S-throats [28,31], or because the KK modes of the decay products in the A-throat are transferred to the S-throat [66]. In either case, all branes in the A-throat have to be annihilated, and the fate of the closed strings in the A-throat, as well as how large a density perturbation can be transferred from A to S, deserve further investigation [66].

We have a few additional comments on various aspects of this model.

Parameter dependence: There are some uncertainties in the estimations: for example, in Case B, the detailed rescaling and background-restoration process parameterized by $\alpha$ in (5.6), and the steepness of the D3-brane moduli potential parameterized by $\beta$. The former only weakly affects (5.11) and (5.15). The variation of $\beta$ changes the total number of e-folds; more importantly, it changes the $(N_e)_c$ in (5.12). The spectral index (3.7) [and (3.12)] does not depend on the overall variation of $\delta_H$ discussed above.

Tensor modes: In our model the condition for inflation to happen is not restricted by the inflationary energy scale, so the tensor-mode bound is very easy to satisfy. For example, in Case B, $H/M_{\rm Pl} \sim 10^{-25}$; Case C is more flexible. We leave the non-Gaussianity features for future study.

Large $N_B$: For example, in Case B, $N_B$ is $8 \times 10^6$. This requires the NS-NS and R-R flux numbers to be a few thousands. In GKP compactification, the total D3 charge of all throats and (anti)branes equals $\chi/24$, where $\chi$ is the Euler number of the corresponding fourfold in F-theory. The largest value we know of is $\chi/24 = 75852$ [70,71]. It is so far not clear whether there is an actual maximum value, but it seems that the model described in this section works if the Euler number is on the larger side. We should emphasize that the large $N_B$ in Case B is no longer due to the fitting of $\delta_H$; after taking into account the non-comoving rescaling, that degree of freedom goes into the factor $V_i$ in Eq. (5.10). $N_B$ is large because we want to obtain most of the density perturbations of our observable universe from the conventional field theory, and only invoke the open-string quantum fluctuations of the early epoch as a mechanism to suppress the IR end of the CMB; the relation (5.12) then typically gives a large $N_B$. However, it remains an open possibility that a considerable part of the universe is seeded by stringy quantum fluctuations. If this is the case, the only constraint is that the total number of e-folds exceeds the required $N_e$: $\sqrt{N_B}/\beta > N_e$ for $\sqrt{N_B}$ number of branes, or $\sqrt{N_B} > N_e$ for a single brane. Using (3.11), the density perturbation $\delta_H \approx 1.9 \times 10^{-5}$ is fit around $N_e \approx 55$ by choosing $\beta \approx 8$ ($\epsilon_r = 1$). We then only require $N_B \gtrsim 2 \times 10^5$ (or $N_B \gtrsim 3 \times 10^3$ for a single brane), which amounts to a few hundreds (or tens) of flux numbers.
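The flux-number estimates above follow from the standard relation $N_B = KM$ for a KS-type throat, with NS-NS and R-R flux numbers $K \sim M \sim \sqrt{N_B}$ when the two are comparable. (The text quotes only the products; taking $K \approx M$ is our simplification.) A quick check in Python:

```python
import math

cases = {"Case B": 8e6,
         "stringy-seeded, ~sqrt(N_B) branes": 2e5,
         "stringy-seeded, single brane": 3e3}

for label, N_B in cases.items():
    print(f"{label}: K ~ M ~ {math.sqrt(N_B):,.0f}")
# -> ~2,828 ("a few thousands"), ~447 ("a few hundreds"), ~55 ("tens")
```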
String statistics: There is another interesting angle on the large-$N_B$ aspect. If the underlying parameters that determine the total number of inflationary e-folds are not directly correlated with the degrees of freedom used for the string statistics [71], the two subjects can be independent of each other. However, in our model we have seen a clear relation between the total e-folds and the flux number, $N_{\rm tot} \sim \sqrt{N_B}$ for $\beta \approx 1$. Such a large factor adds a significant weight to the statistics and may push the selection rule to favor a compactification with a large Euler number. In fact, we expect the same qualitative argument to apply to the slow-roll case as well, since more fluxes provide more degrees of freedom to adjust the shape of the potential, although the relation is much less explicit.

Cosmic strings: Brane inflation provides an interesting mechanism for producing cosmic strings after inflation [41-43]. The D- or F-strings are produced during brane-antibrane annihilation through the Higgs mechanism or confinement; their properties are further studied in [44-51]. The situation for the relativistic brane collision is similar, because right after the collision the gauge symmetry is also restored by the high energy density transferred from the relativistic branes. In the subsequent cosmological evolution, this string network approaches the attractor scaling solution, and can be observed if its tension is large enough. For strings in the A-throat, the tension is given by (5.20) (e.g., for F-strings).

Because strings will be generated whenever the temperature reaches the Hagedorn temperature [72,43], there is another interesting way in which strings can be produced in any inflationary model in the multi-throat configuration. As we discussed at the end of Sec. III A, the Hubble energy can exceed the string scale in a deep throat. We take this relation to be Eq. (5.22), where the left-hand side is the Gibbons-Hawking temperature and the right-hand side is the red-shifted Hagedorn temperature. Therefore any throat whose warp factor satisfies (5.22) is filled with strings. After inflation, strings will stay inside each of these throats by gravitational attraction and evolve; hence each throat contributes independently to the string network. The corresponding string tension is given by (5.23), which is typically lower than (5.20). If observable, this gives a very clear signal of the multi-throat compactification: an isolated spectrum at the highest string tension (5.20), whose existence depends on the existence of the A-throat, followed by a dense spectrum of lower tensions (5.23). The string tension is associated with the inflation scale, so in Cases A and B the strings are too weak to be observed. In Case C, the bound (5.19) sets the A-throat string tension. Although the tension of the dense, lower part of the spectrum is much weaker, it can be enhanced by effects of the multiple throats (e.g., more signals). Such a spectrum also arises in the multi-throat slow-roll model that we discuss in Appendix B, where we will also obtain a significant lower bound on $G\mu_A$.

VI. CONCLUDING REMARKS

In this paper we have studied a DBI inflation model, in both field theory and string theory, as an alternative to slow-roll inflation.
The inflation in such a model is achieved by an inflaton field held at the top of a steep potential by the IR end of a warped space. This model is realized in the multi-throat string compactification setup. We demonstrate that, at least at the level of orders of magnitude, this model can simultaneously produce many interesting features: a large number of inflationary e-folds; density perturbations of the right order of magnitude while incorporating the Randall-Sundrum model with direct reheating; a scale-invariant spectrum with interesting features; and a possible mechanism for the infrared suppression of the CMB.

Many issues remain to be studied in more detail. These include, for example, more detailed information on the moduli potential profiles of the multi-throat GKP compactification with D3-branes; the stringy quantum fluctuations at the early stage; a detailed analysis of the non-Gaussian features (in some cases properly taking into account the rescaling or stringy effects); various back-reactions, such as the closed-string creation in the IR B-throat due to the dS back-reaction, and the rescaling process during the reheating due to the probe-brane back-reaction; and some global aspects of this scenario, such as its genericity and eternity.

APPENDIX A: D3-BRANE DYNAMICS IN THE S-THROAT

In this appendix we are interested in how $n$ D3-branes roll in the S-throat according to the DBI action. Most of the results can be found in Refs. [17,24,40]; the main results have been used in Sec. IV. This dynamics is described by the equations of motion (2.7) and (2.8), but with the potential $V(r)$ replaced by (A1). Here $m_S \gtrsim H$, so (A1) gives a generic steep attractive potential. The region that we will be interested in is well within the S-throat.

We start with the initial velocity $-v_0$. This velocity is picked up after the D3-branes fall down the potential (2.9) and most of (A1). We first check that this velocity is not changed much by the Hubble friction term. The Hubble constant is mainly given by the kinetic energy of the non-relativistic branes after they fall down the potential, Eq. (A2). The velocity change due to the second, Hubble-friction term in (2.11) can be estimated directly; since the relevant combination, suppressed by $M_{\rm Pl}$, is typically not much bigger than one, the brane velocity remains of the same order of magnitude. Because of the decreasing warp factor, this velocity reaches the speed limit around $h \sim \sqrt{v_0}$.

We then return to Eq. (2.8). The factor of $H$ in the second term is still given by the conserved (mostly kinetic) energy (A2) if this term is negligible compared to the first term. This amounts to a comparison between $H$ and $d/dt \sim 1/t$: if $t \ll M_{\rm Pl}/(\sqrt{nT_3}\, v_0)$, the gravitational coupling in (2.8) can indeed be ignored. We then have a conserved coordinate energy density (4.4) with $E \approx n v_0^2/2$ ($V_{\rm net}$ is approximately zero on the IR side of the S-throat), and the D3-branes become relativistic, following the behavior (A4) (with the time coordinate now chosen to be positive). If $t \gg M_{\rm Pl}/(\sqrt{nT_3}\, v_0)$, we need to solve (2.7) and (2.8); the solution is (A5). Let us first consider the case where its parameters take simplified values. In this case the slow expansion of the scale factor $a(t)$ is driven by the kinetic energy of the D3-branes, and one can check that the second terms of (A4) and (A5) match each other at the transition. To summarize the D3-brane dynamics following the DBI action: for $h \gg \sqrt{v_0}$, the branes move non-relativistically, and below this the motion becomes relativistic.

We now consider the case of a large mass-squared [see (A8)]. In this case, Refs. [17,24] show that inflation is possible.
Now the spatial expansion is driven by the potential energy of the branes, and (A5) takes the corresponding form. The probe back-reaction restricts the region in which this description is valid; therefore, in this setup the IR space below this region does not play an important role for inflation, because the brane velocity cannot decrease further. Accordingly, for inflation to happen, we need a large mass-squared in the moduli potential to give a high inflationary energy. This requires (A10), because the total number of e-folds is given by an integral over the region $r_i > r > r_f$ in which the behavior (A5) is valid, with $r_i/r_f \sim N_S^{1/2}/n^{1/2}$. To have $N_e$ e-folds of inflation, we need $m_S \gtrsim N_e M_{\rm Pl}/\sqrt{N_S\, n}$; such a large $m_S$ may rely on moduli potentials. (It is easy to check that the constant vacuum energy provided by antibranes sitting at the IR end is negligible for the inflation.) The Hubble constant is time-dependent in this case, $H \approx p/t$, and the inflation is power-law, $a(t) \propto t^p$. It may be interesting to apply the rescaling to the region $r < r_f$.

APPENDIX B: A MULTI-THROAT SLOW-ROLL MODEL

In this appendix we study the case $\beta \lesssim 1$ in a repulsive B-throat. We show that it is generally a combination of DBI and slow-roll inflation, and we study the resulting multi-throat slow-roll model. This appendix has some overlap with an independent paper that recently appeared [31].

We start with the slow-roll case. The scalar velocity is determined by the non-relativistic equation of motion (2.11) with the first term neglected. This procedure is valid only when the slow-roll condition is satisfied, $\beta \ll 3$. This velocity reaches the speed limit $h^2$ at the warp factor given in (B2). Within this slow-roll region the number of inflationary e-folds is given by integrating this velocity up to $r_m$, where $r_m$ denotes the end of the flat potential and we approximate it as $R_B$, the extension of the throat. Taking the lower limit (B2), we get the total number of slow-roll inflationary e-folds
$(N_{\rm tot})_{\rm sr} \approx -3\beta^{-1} \ln(\beta H R_B)$. (B4)
In the last step, we have neglected all terms other than $V_i$ for simplicity.

Within the relativistic region, the inflaton behaves as (2.16), but now the condition (2.17) is stronger than (2.15); this condition just matches (B2). Taking the strongest lower bound from Sec. III A, the DBI inflation then happens within the remaining range of warp factors, and the total number of DBI inflationary e-folds is $\sqrt{N_B}$.

We can state the overall results by varying the parameter $\beta$. For $1 \lesssim \beta \ll \sqrt{N_B}$, we only have DBI inflation, and it lasts for $\sqrt{N_B}/\beta$ e-folds. For $0.1 \lesssim \beta \lesssim 1$, the DBI inflation still proceeds, but its end starts to deform toward slow roll, and the observable universe is a mixture of both. For $0 < \beta \lesssim 0.1$, a total of $6\beta^{-1} |\ln V_i^{1/4}|$ e-folds of slow-roll inflation is smoothly added to the end of a total of $\sqrt{N_B}$ e-folds of DBI inflation.

Following the above discussion, we can construct the following multi-throat slow-roll model. Consider $n_B$ D3-branes rolling out of a B-throat, this time with a flat potential, $\beta \lesssim 0.1$. (For slow roll, all the other directions have to be lifted, so that the branes do not roll toward them.) The inflationary energy is provided by $n_A$ anti-D3-branes in the A-throat. The density perturbations due to the slow-roll period can be calculated using the standard formula (B9). [Footnote 9: Even if $\beta$ is not very small, as long as it satisfies the slow-roll condition, for example $\beta \approx 0.3$, a long period of slow-roll inflation can be achieved because of the factor $|\ln V_i^{1/4}|$. For example, if $V_i$ is supplied by an anti-D3-brane in an A-throat, $|\ln V_i^{1/4}| \approx |\ln h_A|$, which can be $\sim 10$.]
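The e-fold counting in footnote 9 is easy to reproduce. A minimal Python sketch, using the $6\beta^{-1}|\ln V_i^{1/4}|$ slow-roll count quoted above with $|\ln V_i^{1/4}| \approx |\ln h_A| \sim 10$; the values of $\beta$ other than 0.3 are illustrative choices of ours:

```python
def slowroll_efolds(beta: float, log_hA: float = 10.0) -> float:
    # Slow-roll contribution ~ (6/beta) * |ln V_i^(1/4)|, with
    # |ln V_i^(1/4)| ~ |ln h_A| when V_i comes from an anti-D3 in an A-throat.
    return 6.0 / beta * log_hA

for beta in (0.05, 0.1, 0.3):
    print(f"beta = {beta:4.2f} -> ~{slowroll_efolds(beta):.0f} slow-roll e-folds")
# beta = 0.3 already gives ~200 e-folds, the "long period" of footnote 9
```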
But such a period of inflation cannot be responsible for the CMB, since the density perturbations that it generates are not scale invariant: the spectral index is $n_s - 1 \sim O(\beta)$. The density perturbations and spectral indices of the slow-roll inflation and the preceding DBI inflation should be smoothly connected to each other in the transition region. This can be seen by evaluating (B9) at $N_e \sim (N_{\rm tot})_{\rm sr}$ given in (B4): in terms of the time delay (2.22), both have the same $\delta r_*$, while $\dot r_*$ transits through (B2). As $N_e$ increases, the density perturbation changes from $\sim e^{\beta N_e/3}$ to $\sim N_e^2$, so it keeps growing, and is then suppressed at (5.12).

From Eq. (B9), we can see that the warp factor of the A-throat cannot be as small as the RS ratio $\sim e^{-30}$, because $V_i^{1/2} = \sqrt{2 n_A}\, h_A^2$ while $\delta_H \approx 1.9 \times 10^{-5}$. The rescaling mechanism does not help here either (though it can happen): for example, suppose $V_i$ is dominated by a kink in the bulk moduli potential and the branes gain too much kinetic energy, so that rescaling happens in the A-throat; this introduces an extra factor of $\alpha^{1/2}$ times a power of the background charge.

The tension $G\mu_F$ of the F-strings produced at the end of inflation, Eq. (B12), can take a wide range of values. The upper bound comes from the experimental tensor-mode bound on the inflationary energy $V_i T_3 = 2 n_A h_A^4 T_3$, which we take to be $V_i T_3/M_{\rm Pl}^4 < 1.7 \times 10^{-8}$ as in (5.19). From (B12),
$G\mu_F < 9 \times 10^{-6}\, g_s/n_A$. (B14)
The lower bound comes from setting $h_A \sim 1$ in (B13). This range is within observational reach. The strings in the various throats satisfying (5.22), left over from the Hagedorn transition of the dS epoch, have the tension (5.25).
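To get a feel for the range allowed by (B14), one can evaluate the upper bound for sample parameter values; the choices of $g_s$ and $n_A$ below are ours, purely for illustration:

```python
# Upper bound (B14) on the F-string tension: G*mu_F < 9e-6 * g_s / n_A.
for g_s in (0.1, 0.5):
    for n_A in (1, 10):
        bound = 9e-6 * g_s / n_A
        print(f"g_s = {g_s}, n_A = {n_A:2d} -> G*mu_F < {bound:.1e}")
```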
Learning service through college student organization as a political awareness on higher education

This study investigated learning through service in student organisations as a means of building political awareness and engagement among students in tertiary institutions. Qualitative interviews were conducted with 25 students who were active in student organisations, such as the College Student Organization, from different campuses across the Special Region of Yogyakarta, Indonesia. The results identify several perspectives on the involvement of student organisations in providing political awareness and engagement, which can be divided into two sides: positive views and negative views.

Introduction

Political awareness is an exciting theme in political discourse around the world, including in Indonesia. It has been found that information intake affects individual political awareness: only well-informed individuals respond to the polarisation of the political elite [1]. Meanwhile, in the last few years the neo-republican perspective has entered political theory, promoting a concept of republican democracy based on the neo-Roman conception of freedom [2]. In light of this phenomenon, it is time to consider efforts to provide political education so that citizens can oversee political developments. Research on political awareness shows that differences in political awareness produce heterogeneity among citizens [3]. Research on political awareness and engagement in a democratic country reveals many media that constitute ideal conditions for the realisation of democracy. One such medium is the contribution of citizens, including students as educated people.

In Indonesia, during the New Order era under President Soeharto, Minister of Education and Culture Daoed Josoef issued Decree No. 0156/U/1978 concerning Normalisasi Kehidupan Kampus (NKK, Campus Life Normalisation), followed by Decree No. 0230/U/J/1980 on the general guidelines for the organisation and membership of the Badan Koordinasi Kemahasiswaan (BKK), both of which generated much discussion. The NKK was openly presented as a natural way to realise the higher education system within the framework of Pancasila democracy [4]. At that time, Daoed Josoef argued that the NKK was a manifestation of returning students to their function as an intellectual force: student institutions must meet the needs of students, namely student welfare, interests, and thinking (Josoef, 1979). Indonesian political life proceeded according to the ruling regime of the time, and the NKK became an instrument for insulating students from politics [5], until the power of the New Order regime finally crumbled, with students involved in the downfall of the regime. Various problems, such as unemployment and increases in fuel prices, were continually being voiced [6].

Now, almost 21 years after the reform era began, Indonesia has experienced a leap into the atmosphere of the fourth industrial revolution. Sketches of the fourth industrial revolution emphasise technological change as a transformation of industry and society [7], and the fourth industrial revolution has intensified globalisation [8]. The existence of the internet brings the creation of added value for organisations and communities [9].
Within the scope of the Association of Southeast Asian Nations (ASEAN), the fourth industrial revolution has had a bona fide impact on economic development [10]. Still in the atmosphere of the fourth industrial revolution, a virtual movement has emerged in the field of politics that has shifted the paradigm of political involvement. In Ferguson, Missouri, when there were many protests against the shooting of an African-American youth, the #Ferguson movement arose as a form of commentary on the tragedy [11]. Digital media become a means of expression when policies are not in harmony with the expectations and needs of citizens [12].

A study of political awareness among young immigrants in America found that secondary school experience shaped their political behaviour as adults [13]. A subsequent study, which analysed the effectiveness of the US voting campaign for students in Arizona, Colorado, and Florida in the 2002 US elections, found that agenda setting shapes the views of adolescents in deciding which issues they consider essential [14]. In Indonesia, research on the resistance of the student movement shows that several extra-campus organisations respond to the political and social conditions occurring in their immediate environment, especially the campus. This resistance manifests itself in consolidation between organisations, discussions about the campus situation, distribution of leaflets, and demonstrations [15]. These research results underline the urgency of growing students' political awareness and engagement in tertiary institutions, and of identifying how learning patterns and tools in tertiary institutions should be maintained amid the metamorphosis of students' political awareness and engagement in Indonesia.

Method

This research was qualitative research that attempted to uncover the phenomenon. The qualitative method is a process of investigating phenomena by posing questions about what is to be explored [16]. The primary data in this study concern the motivation of students for their involvement in student organisations, the significance of that involvement, its effects, and how well colleges have prepared their graduates as a 21st-century society. The total population in this study was 140 students spread across the Special Region of Yogyakarta. In determining the research sample, the researchers used a purposive sampling technique, selecting campuses accredited A as institutions by the Indonesian National Accreditation Board for Higher Education. The researchers sent research letters to six campuses in the Special Region of Yogyakarta, namely Yogyakarta State University, Sanata Dharma University, Duta Wacana Christian University, Ahmad Dahlan University Yogyakarta, the Islamic University of Indonesia, and Atmajaya University, and then went to the secretariat of the Student Executive Board of each campus to determine the schedule of visits. The participants therefore consisted of 25 students from six campuses in the Special Region of Yogyakarta, both public and private. Data were collected through interviews, observations, and documentation. Interviews were conducted with 25 students involved in the Student Executive Board on different campuses in the Special Region of Yogyakarta.
The interview guidelines contained a list of interview questions developed on a theoretical basis. Three experts assessed the interview instrument, which the researchers then revised based on their input. The scope of the research questions includes:
• What is the motivation of students for their involvement in student organisations with regard to the political awareness and engagement of students in tertiary institutions?
• How do students interpret their involvement in student organisations with regard to the political awareness and engagement of students in tertiary institutions?
• What is the effect of student involvement in student organisations on student political awareness and engagement in tertiary institutions?
• How well does the reconstruction of tertiary institutions prepare their graduates as a 21st-century society with regard to the political awareness and engagement of students in tertiary institutions?
The researchers conducted the interviews in different places according to agreements with the informants, preparing recording equipment, cameras, and stationery to capture the information. Observations were made using observation sheets that had been reviewed by experts, and were conducted before, during, and after BEM activities, both on and off campus. Documentation was carried out by gathering information from each BEM work program, reputable international journals, and current books containing content on the research theme. After the data were collected, the researchers analysed them using the Miles and Huberman model (data collection, data reduction, data presentation, and drawing conclusions or verification) (Miles & Huberman, 2007), which the researchers then used in the discussion and conclusions.

Results and Discussion

The findings fall into two major categories of points of view on the involvement of student organisations in providing political awareness and engagement: positive views and negative views. A positive view is an optimistic view based on experience gained in student organisations in providing political awareness and engagement, while a negative view holds that involvement in student organisations is meaningless. The results of the data processing are then broken down into more detailed units concerning the importance, impact, and nature of student involvement in student organisations with regard to political awareness and engagement in tertiary institutions.

Positive point of view: student motivation for involvement in student organisations with regard to the political awareness and engagement of students in tertiary institutions

The interview results show that students in the positive category acknowledge a motivation for their participation in student organisations: to arouse their political awareness and political engagement. Interview responses from students involved in the Badan Eksekutif Mahasiswa (BEM) are as follows:
• Interviewer: What is your motivation for involvement in student organisations with regard to political awareness and political engagement in higher education?
• ASPS, a Psychology Department student at Sanata Dharma University Yogyakarta who is also a member of BEM Sanata Dharma Yogyakarta, revealed that: "...
because I already have an interest in political issues, including the ones trending right now, I am involved in BEM Sanata Dharma Yogyakarta as my training ground in managing aspects related to politics, for example the importance of participating in elections on a national scale and, on a campus scale, the importance of making leadership choices at BEM Sanata Dharma Yogyakarta ..." (ASPS, 2019)
• Like ASPS, JR also has a motivation for involvement in the BEM student organisation. This shows that motivation is an essential factor in students' involvement in BEM. This finding shows that students recognise that what motivates them in BEM student organisations is related to political awareness and political engagement. Some students, such as IAP from the Management Department of the Islamic University of Indonesia in Yogyakarta, acknowledge that involvement in BEM is a form of awareness and of political socialisation at the tertiary level.
• Interviewer: What is your motivation for involvement in student organisations with regard to political awareness and political engagement in higher education?
• IAP: "as a student, I certainly have an intellectual responsibility, even in politics. My involvement in BEM is one of my ways to raise awareness and to act as political socialisation in the campus environment" (IAP, 2019)
The response from IAP indicates a positive outlook in the motivation for involvement in student organisations with regard to political awareness and engagement.

Understanding involvement in student organisations in relation to political awareness and political engagement

The meaning of involvement in student organisations is, for students, a concept that can motivate them to carry out useful activities in their lives. Typical sources of meaning are social relationships, personal growth, the pursuit of achievements, and religion [17]. Students think there is a fundamental value that underlies their activities in student organisations. According to LA, a student of the Department of Islamic Education, Ahmad Dahlan University, Yogyakarta, involvement in student organisations can provide meaningful learning as a useful experience in transferring knowledge about politics. LA stressed that activities in student organisations can increase knowledge about the positive and negative sides of politics.
• "As long as I am involved in student organisations, I feel I gain much experience that can strengthen my knowledge about politics, and in the activities of BEM Ahmad Dahlan University Yogyakarta I feel I have gained much experience during my activities" (LA, 2019).

The effect of student involvement in student organisations on the political awareness and engagement of students in tertiary institutions
• The involvement of students in student organisations affects the academic life of students on campus. The majority of students acknowledged that, in addition to improving their social skills, their involvement helped them understand the rights and obligations of a citizen and strengthened democratic values.
• Interviewer: What is the effect of involvement in student organisations on political awareness and political engagement?
• SAS: "... of course there is an effect from my involvement in the Atmajaya University student organisation; one of them is that I can find out the rights and obligations of a student, in a narrow sense, and, broadly, of a citizen.
At Atmajaya University, there was an election for the chairman of the Student Executive Board, at which moment the students exercised their right to vote. Here there is political learning, namely that citizens have rights and obligations ..." (SAS, 2019)

Reconstruction of tertiary institutions in preparing their graduates as a 21st-century society with regard to the political awareness and engagement of students in tertiary institutions

Students' political awareness and engagement are important for the sustainability of democracy. In the current modern era, youth participation is vital for society [18]. In preparing graduates who are part of 21st-century society, tertiary institutions have a scheme for reconstructing their graduates. The involvement of students in student organisations is one of the meeting points between expectations and outcomes for higher education; it is also one of the credit requirements that students must fulfil before graduation.
• Interviewer: Is there any part of your involvement in student organisations that relates to being a 21st-century society?
• SAS: "... Regarding involvement in student organisations in relation to political awareness and engagement in the 21st century, my view from student organisations is that several moments require us to be able to think critically, solve problems, and be creative. I take the example of the moment of electing the chairman of Atmajaya University BEM. Given the demographic conditions of Yogyakarta's Atmajaya University, spread over several different areas, the voting process for the election of the head of BEM must, of course, be packaged as efficiently as possible for the students of Atmajaya University of Yogyakarta, through socialisation that is not one-way" (SAS, 2019).
• Just like SAS, AGSS also offered the opinion that student organisations are one way in which higher education reconstructs itself to prepare graduates for 21st-century society:
• "... As a 21st-century society, the demands are hard skills and soft skills, and involvement in student organisations is an alternative way of 'preparing' us for 21st-century society. Concerning political awareness and engagement, in my opinion, at BEM Sanata Dharma University I received several hidden lessons. For example, when meeting with BEM friends, there will be a discussion, and not infrequently the content of the discussion is a problem that is currently trending. Even in politics, we also had many discussions about global political conditions and the global economy, although these were not regular discussions. However, at the least, meeting with BEM friends can increase one's level of social sensitivity ..." (AGSS, 2019)

The 21st century presents both a challenge and an opportunity for tertiary institutions to produce capable graduates. The demand on educational programs in Indonesia is to improve the quality of human resources [19]. This finding suggests that, in preparing global citizens with functional political literacy, the contribution of activities outside of academics can be an alternative solution. It has long been known that the student council is a capable vehicle for supporting the expression of wise and active citizenship [20].

Conclusions

Based on the results of the study, four conclusions can be drawn: 1) motivation is an important factor in determining the essence of learning through student organisations for student political awareness and engagement in tertiary institutions.
In line with the results of this study, there is a variety of motivations that can affect students' enthusiasm in student organisations; thus, students' internal factors determine how inclined they are toward activities in student organisations. 2) Students who report positive ownership of their participation in student organisations can play a role in this miniature of the state, gaining experience that grows political awareness and engagement in tertiary institutions. This finding confirms the claim that there are other "officers" of political education within formal educational institutions. Meanwhile, students who reported negative ownership of their participation in student organisations stated that the activities in student organisations were not sufficient to provide political awareness and engagement. In relation to these findings, providing political awareness and engagement requires additional instruments, in the form of political education based on both pedagogic and non-pedagogic principles. 3) There is an impact of the habits of students involved in student organisations on student political awareness and engagement in tertiary institutions. This finding is a manifestation of the affirmation that the activities of student organisations contain a hidden curriculum for students. 4) There is a reconstruction of learning in higher education to provide experiences that can make students agents of change in the 21st century with regard to the political awareness and engagement of students in higher education. The findings indicate that higher education already has a scheme in preparation for 21st-century society. In the context of Indonesia, this is a breath of fresh air in determining the future of democracy and state administration.

Acknowledgement

Yayuk Hidayah is a doctoral candidate at the Department of Citizenship Education at the Indonesia University of Education in Bandung, Indonesia. She is also a lecturer at Ahmad Dahlan University in Yogyakarta, Indonesia. This article is part of a dissertation for a doctoral degree in Citizenship Education at the Indonesian University of Education, Bandung. Thanks to Yogyakarta State University, Sanata Dharma University Yogyakarta, Duta Wacana Christian University Yogyakarta, Atma Jaya University Yogyakarta, the Islamic University of Indonesia Yogyakarta, and Ahmad Dahlan University Yogyakarta for permitting the researchers to conduct this research. Thank you to all members of the Student Executive Board (BEM) who agreed to be interviewed.
Electrochemical Copper Metallization of Glass Substrates Mediated by Solution-Phase Deposition of Adhesion-Promoting Layers

Metal-to-glass interfaces commonly encountered in electronics and surface finishing applications are prone to failure due to intrinsically weak interfacial adhesion. In the present work, an 'all-wet' process (utilizing solution-phase process steps) is developed for depositing nucleation- and adhesion-promoting layers that enhance the interfacial adhesion between glass substrates and electrochemically-deposited copper (Cu) films. Adhesion between thick (>10 μm) Cu films and the underlying glass substrates is facilitated by an interfacial Pd-TiO₂ layer deposited using solution-phase processes. Additionally, the proposed interfacial engineering utilizes self-assembled monolayers to functionalize the glass substrate, thereby improving surface wettability during Pd-TiO₂ deposition. The resulting Pd-TiO₂ deposits catalyze direct electroless plating of thin Cu seed layers, which enable subsequent electrodeposition of thick (>10 μm) Cu coatings. The present work provides a viable route for high-throughput, cost-effective metallization of glass and ceramic surfaces for electronics and surface finishing applications.

The interfacial adhesion between metallic films and insulating substrates, e.g., glass, is intrinsically poor. This is a major roadblock in numerous electronics applications, particularly the manufacturing of printed circuit boards (PCBs) 1 and integrated circuits (ICs). 2,3 The interfacial adhesion of metallic thin films deposited on glass substrates can be improved using functionalized polymers 4 or self-assembled monolayers (SAMs). 2,5 SAMs provide interfacial 'anchoring' by chemically bonding to the substrate (SiO₂) as well as to the deposited metal film. While SAMs have been shown to enable deposition of sub-micron-scale, adherent Cu films on SiO₂, 5 they do not provide adequate interfacial strength to enable thick (>10 μm) Cu coatings on glass. The Cu-to-SiO₂ interfacial adhesion may also be improved by utilizing metallic adhesion layers 6 or mixed-metal oxides. 7 Metallic adhesion promoters, e.g., titanium or titanium nitride, are deposited using 'dry' methods such as physical vapor deposition (PVD). 'Dry' techniques are not desirable for high-volume manufacturing given their low throughput and high cost of ownership. On the other hand, mixed-metal oxides may be deposited using sol-gel methods in which a metal alkoxide is co-deposited with a catalytic metal salt in a polar organic solvent, resulting in a mixed-metal oxide adhesion layer with catalytic activity for electroless plating. 7 The methods by which the sol-gel catalyst may be deposited include dip-coating, printing, spin-coating, or brushing.

In the present work, we demonstrate an 'all-wet' process for depositing adhesion-promoting layers that enhance the interfacial adhesion between glass substrates and electroless-deposited copper (Cu) films. Adhesion between electrochemically-deposited thick (>10 μm) Cu coatings and the underlying glass substrates is facilitated by an interfacial Pd-TiO₂ layer fabricated using solution-phase processing. Our interfacial engineering approach utilizes self-assembled monolayers (SAMs) to functionalize the glass substrates before applying the Pd-TiO₂ coating. The SAM layer improves surface wettability during Pd-TiO₂ deposition, as shown below.
Pd-TiO₂ deposits effectively catalyze electroless plating of thin Cu seed layers, which enable subsequent electrodeposition of thick (>10 μm) Cu coatings.

Experimental

This section describes the materials and methods used for cleaning the glass slides, functionalizing them with SAMs, depositing Pd-TiO₂ adhesion-promoting layers, depositing electroless and electroplated Cu films, and characterizing their adhesion.

Glass slide cleaning.-Eagle XG glass slides (alkaline earth boroaluminosilicate) manufactured by Corning were employed as substrates. Glass slides were cleaned using a protocol described by Cras. 8 First, glass slides were rinsed with deionized (DI) water and then dried using a stream of nitrogen. Next, the glass slides were submerged in a 1:1 volume ratio of concentrated hydrochloric acid and methanol for 30 min, followed by a DI rinse. The glass slides were then submerged in concentrated sulfuric acid for 30 min, followed by a thorough DI rinse and drying under nitrogen.

Silanization of cleaned glass surfaces.-After cleaning, glass slides were immersed in a 5 mM solution of APTES (3-aminopropyltriethoxysilane, Acros Organics) in toluene solvent at 25 °C for 60 min. Slides were then rinsed with toluene, then with ethanol, and finally with DI water before drying under nitrogen.

Deposition of Pd-TiO₂ ink.-An ink solution containing 1.1 mM titanium (IV) butoxide (Acros) and 1.1 mM PdCl₂ in n-butanol solvent was prepared. The ink was dropped onto the glass surface via a transfer pipette; the ink volume was approximately 0.1 mL per cm² of glass surface area. The ink was then dried in air at 130 °C for 15 min in a Thermo Scientific Heratherm OGS 60 oven. Next, the dried ink was sintered in air at 450 °C in a Hoskins electric tube furnace for 30 min. After cool-down, samples were immersed in a 2 M sulfuric acid solution for 2 min, followed by a DI rinse. Finally, samples were immersed in a reductant solution of 0.5 M dimethylamine borane (DMAB) for 2 min and then rinsed with DI water.

We note that our stepwise procedure for applying the Pd-TiO₂ adhesion-promoting layer differs from the sol-gel technique described elsewhere. 7 First, our procedure does not utilize chemical hydrolysis of the titanium butoxide in solution but rather relies on its reaction with moisture in air to induce oxidation (shown below). The DMAB reductant treatment then facilitates chemical reduction of the oxidized Pd to metallic Pd catalyst. Furthermore, we utilize dilute inks that enable thinner Pd-TiO₂ adhesion layers to be formed. Finally, our ink application procedure hinges on utilizing a SAM-terminated glass surface, which improves surface wettability and enables uniform Pd-TiO₂ deposition, as also discussed below.

Electroless Cu deposition.-Proprietary electroless Cu plating chemistry from Atotech, USA was employed. This alkaline D631 electroless plating solution consisted of the following components: copper sulfate, a tartrate-based complexing agent, and a formaldehyde-based reducing agent. The electroless Cu deposition process was operated at 35 °C. After electroless plating, samples were rinsed with DI water and then annealed in a Hoskins electric tube furnace under flowing argon at 400 °C for 60 min.

Cu electroplating.-Following electroless seed layer deposition, samples were electroplated with Cu using a high-throw plating solution described by Dini and Snyder. 9 The composition of the bath was 100 g/L CuSO₄·5H₂O (Fisher Scientific), 270 g/L H₂SO₄ (Fisher Scientific), and 0.1 g/L NaCl. A large-area (70 cm²) Cu foil counter electrode was used, placed at a distance of 10 cm from the working electrode. Samples were electroplated at a current density of 10 mA/cm².
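For orientation, Faraday's law gives a rough plating-rate estimate for this electrodeposition step. The sketch below is our back-of-the-envelope calculation, not a value reported in this work, and assumes ~100% current efficiency, a reasonable approximation for additive-lean acid copper sulfate baths:

```python
M_CU = 63.55       # g/mol, molar mass of Cu
RHO_CU = 8.96      # g/cm^3, density of Cu
N_E = 2            # electrons per Cu(2+) ion reduced
FARADAY = 96485.0  # C/mol

def plating_rate_um_per_h(j_mA_cm2: float) -> float:
    """Thickness growth rate (um/h) at current density j (mA/cm^2),
    assuming 100% cathodic current efficiency."""
    j = j_mA_cm2 / 1000.0                              # A/cm^2
    rate_cm_per_s = j * M_CU / (N_E * FARADAY * RHO_CU)
    return rate_cm_per_s * 1e4 * 3600.0                # cm/s -> um/h

rate = plating_rate_um_per_h(10.0)  # 10 mA/cm^2, as used here
print(f"rate ~ {rate:.1f} um/h")            # ~13 um/h
print(f"15 um in ~ {15 / rate:.1f} h")      # ~1.1 h
print(f"100 um in ~ {100 / rate:.1f} h")    # ~7.6 h
```

At 10 mA/cm², the ∼100 μm coatings discussed below therefore correspond to several hours of plating time.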
Materials characterization.-The Scotch tape test was used to qualitatively assess the adhesion of electrochemically-deposited Cu on glass with or without the Pd-TiO₂ adhesion promoter layer. Scotch Matte Finish Magic tape from 3M was used for all tape tests. During tape testing, the applied tape was removed at roughly a 90° angle relative to the sample. In some cases, prior to applying the tape, the Cu was scratched in perpendicular directions using a diamond-tip scribe. For quantitative adhesion testing, a 90° peel strength test was performed at a peel rate of 50 mm/min, and the force required to peel a unit width of the plated Cu was recorded. X-ray photoelectron spectroscopy (XPS) was performed using a PHI Versaprobe 5000 scanning X-ray photoelectron spectrometer with a monochromatic Al-Kα anode X-ray source. The surface roughness of the deposited Pd-TiO₂ layer was measured using a profilometer (KLA-Tencor P-6 Stylus), and the surface morphology was imaged using a high-resolution optical microscope (Leica DM2500M).

Results and Discussion

Figure 1 is a process flow diagram detailing the sequence of glass surface preparation, nucleation and adhesion promoter deposition, and Cu metallization used in the present work. Fig. 1a shows a cleaned, hydroxyl-terminated glass surface, which promotes the deposition of a 3-aminopropyltriethoxysilane (APTES) self-assembled monolayer (SAM) using the procedure described above. APTES self-assembly is mediated by a condensation reaction 10 that provides amine-terminated glass, as shown in Fig. 1b. APTES deposition plays an important role in improving the surface wettability, as discussed below.

Ink containing titanium butoxide and palladium chloride dissolved in n-butanol is then applied to the amine-terminated glass surface, following the procedure described in the Experimental section above. Upon drying at 130 °C for 15 min, the ink leaves a composite mixture of PdCl₂-TiO₂ on the glass surface (Fig. 1c). A prolonged (30 min), high-temperature (450 °C) sintering step in air converts the PdCl₂ embedded in the TiO₂ matrix to PdO (Fig. 1d); this is supported by the XPS data discussed below. Finally, the PdO is reduced to metallic Pd via reaction with 0.5 M dimethylamine borane (DMAB) for 2 min (Fig. 1e). The Pd particles in the Pd-TiO₂ layer formed on the glass surface serve as catalytic sites for electroless Cu deposition. In our work, electroless deposition (using the procedure outlined in the Experimental section above) provided a ∼400 nm Cu seed layer on top of the Pd-TiO₂ adhesion-promoting layer (Fig. 1f). Subsequently, Cu electrodeposition was performed on the electroless-seeded surface, and thick (>10 μm), adherent Cu electrodeposits were obtained (Fig. 1g). The glass/SAM/Pd-TiO₂/Cu stack fabricated using the process flow shown in Fig. 1 was finally subjected to adhesion testing.

Wettability of APTES-modified glass surfaces.-Fig. 2 demonstrates the improved wettability of the butanol ink on an APTES-modified glass slide. A 10 μL volume of ink (composition described in the Experimental section above) was dropped onto the surface of the APTES-modified glass slide.
The droplet spread, which is a qualitative measure of the surface wettability, was measured and compared to that on a glass slide without APTES termination. In the case of APTES-modified glass (Fig. 2a), the ink droplet spreads out, nearly covering the entire surface of the glass, indicating a low contact angle and good wettability. In the absence of APTES (Fig. 2b), the ink droplet shows a higher contact angle and thus poor surface wetting. The improved wettability of the APTES-terminated glass surface is critical for the subsequent uniform deposition of the Pd-TiO₂ adhesion-promoting layer.

Pd-TiO₂ deposition mechanism.-To better understand the formation of the Pd-TiO₂ adhesion-promoting layer on the SAM-terminated glass surface, we performed XPS at various stages of ink application, drying, sintering, and reduction. Fig. 3 shows the observed XPS spectra at each of these stages. Fig. 3a is the XPS spectrum of an APTES-modified glass slide; the nitrogen 1s peak observed at around 400 eV indicates the presence of APTES on the glass substrate. This peak is absent in the baseline XPS spectrum of a glass slide without APTES termination. Fig. 3 also shows XPS spectra of the APTES-terminated glass substrate after ink deposition and drying (Fig. 3b), ink sintering (Fig. 3c), and reduction (Fig. 3d), respectively. Titanium 2p1/2 and 2p3/2 peaks, observed at 464 eV and 458 eV respectively, confirm the presence of titanium on the surface after the ink drying, sintering, and reduction steps. The presence of palladium is likewise confirmed by the Pd 3d3/2 and 3d5/2 peaks observed around 341 eV and 336 eV, respectively. After the initial drying step (Fig. 3b), a strong chlorine 2s peak is observed at 269 eV, indicating that Pd is present as its chloride in the dried ink. Upon sintering at elevated temperature (Fig. 3c), the intensity of the chlorine peak is significantly reduced, and after ink reduction (Fig. 3d) the chlorine peak is absent.

Further insight into the transitions occurring during the drying, sintering, and reduction steps can be gained by examining the Pd 3d5/2 XPS spectra (Fig. 4). Figs. 4a, 4b, and 4c show the XPS spectra after the drying, sintering, and reduction steps, respectively. Fig. 4a shows the highest binding energy, at 338.0 eV. This binding energy, 11 together with the chlorine peak observed in Fig. 3b, suggests the presence of PdCl₂ in the dried deposit. After sintering, the palladium 3d5/2 peak in Fig. 4b shifts to 336 eV, consistent with the reported binding energy for PdO. 12 This binding energy shift, along with the attenuation of the Cl 2s peak (Fig. 3c), suggests that the PdCl₂ from the dried ink transitions mostly to PdO during high-temperature sintering. When the sintered PdO-TiO₂ is further reduced by the chemical reductant DMAB, the palladium 3d5/2 peak shifts to approximately 335 eV (Fig. 4c). This shift indicates reduction of PdO to metallic Pd 12,13 embedded in a TiO₂ matrix. This Pd-TiO₂ interfacial layer provides the adhesion enhancement shown below.

It is worthwhile to note that a weak nitrogen 1s peak is still observed after PdCl₂-TiO₂ ink deposition and drying (Fig. 3b), which indicates the presence of some APTES molecules on the glass surface. This peak is not evident after sintering (Fig. 3c), suggesting perhaps that the APTES monolayer is volatilized or decomposed at the higher sintering temperature. 14
Surface characteristics of Pd-TiO2 deposits.-To characterize the surface during Pd-TiO2 deposition, we collected surface profilometer scans at the various stages of deposition, i.e., for the untreated glass substrate (Fig. 5a), after drying the PdCl2-TiO2 ink (Fig. 5b), after sintering to form PdO-TiO2 (Fig. 5c), and finally after reduction to form the Pd-TiO2 adhesion layer (Fig. 5d). The profilometer scans indicate that the underlying glass substrate is fairly smooth, with an RMS roughness of ∼7 nm. However, after ink application, sintering and reduction, sub-micron-sized roughness elements are detected on the glass substrate, as highlighted by arrows in Figs. 5b-5d. To observe the surface structure after Pd-TiO2 formation, we used high-resolution optical microscopy (Fig. 5e), which confirmed the presence of sub-micron-sized particles uniformly distributed on the glass substrate. While high-resolution SEM imaging and EDS compositional mapping were attempted, reliable results could not be obtained because of charging on account of the poorly conducting glass surface. Nonetheless, XPS analysis (reported above - Figs. 3 and 4) provided valuable compositional information. The presence of Pd-TiO2 particles provides anchoring sites that enhance interfacial adhesion while catalyzing the subsequent electroless deposition, as discussed below. We believe that the Pd-TiO2 particle density may be modulated via the ink application, drying and sintering process conditions; however, further studies are required to understand how the particle density can be precisely controlled.
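For readers reproducing the profilometry analysis, the RMS figure quoted above is straightforward to compute from a raw trace. A minimal sketch follows, with synthetic data standing in for a real KLA-Tencor scan; the leveling step and the bump geometry are our illustrative choices.

```python
# Compute RMS (Rq) and average (Ra) roughness from a 1-D profilometer
# trace, after removing a best-fit line (simple leveling). Synthetic data
# stand in for a real scan.
import numpy as np

def roughness(x_um: np.ndarray, z_nm: np.ndarray) -> tuple[float, float]:
    """Return (Rq, Ra) in nm for a leveled height profile."""
    slope, intercept = np.polyfit(x_um, z_nm, 1)   # least-squares leveling
    dev = z_nm - (slope * x_um + intercept)        # deviations from mean line
    rq = float(np.sqrt(np.mean(dev ** 2)))         # RMS roughness
    ra = float(np.mean(np.abs(dev)))               # arithmetic-mean roughness
    return rq, ra

# Synthetic 1 mm scan: smooth glass-like baseline (~7 nm RMS noise)
# plus a few sub-micron bumps mimicking Pd-TiO2 particles.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1000.0, 5000)                 # position, um
z = rng.normal(0.0, 7.0, x.size)                   # nm-scale baseline noise
for center in (150.0, 420.0, 800.0):               # particle positions, um
    z += 120.0 * np.exp(-((x - center) / 0.4) ** 2)

print("Rq = %.1f nm, Ra = %.1f nm" % roughness(x, z))
```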
Adhesion characterization of Cu films.-The Pd-TiO2 adhesion-promoting layers formed above served as catalysts for electroless seed-layer deposition followed by electrodeposition of Cu. Electroless Cu seed layers were favored over other metals such as Ni and Co because of the high electrical conductivity of Cu, which is essential for promoting a uniform current distribution during the subsequent Cu electrodeposition step. Fig. 6 shows the results of tape tests for Cu films deposited on a glass substrate modified with SAM and Pd-TiO2 following the process outlined above (see the Fig. 1 schematic). Tape tests were conducted using the procedure described above. Figs. 6a and 6b are images after tape testing of an electroless Cu plated glass substrate and the tape fragment, respectively. In this case, the ∼440 nm thick electroless Cu film remained attached to the substrate without interfacial failure. Figs. 6c and 6d are images after tape testing of a glass substrate modified with SAM and Pd-TiO2 after electroless Cu seeding and thick (∼100 μm) Cu electroplating. Additive-free Cu electroplating baths were employed (which contained ppm levels of chloride) because these baths are known to minimize stress in electroplated films. 9 Additive-containing electrolytes may be used after optimization of the plating conditions to provide minimal stress in thick deposits. The tape test results, even after aggressive cross-hatching, confirm the superior adhesion of thick (∼100 μm) electroplated Cu films to glass. The above results demonstrate the adhesion enhancement provided by interfacial SAM/Pd-TiO2 adhesion-promoting layers in enabling Cu metallization of glass substrates. In the absence of the SAM layer, reliable Pd-TiO2 adhesion layers could not be formed, and the resulting Cu films peeled off either during electrodeposition or during subsequent tape testing. It is also worthwhile to note that SAM/Pd layers fabricated without the TiO2 matrix provided a reasonable interfacial adhesion enhancement for thin (∼100 nm) Cu layers; however, they could not enable thick (>10 μm) Cu coatings.

To quantify the adhesion of electrochemically deposited Cu on glass, samples were prepared using the sequence outlined in Fig. 1, with a ∼400 nm electroless Cu seed layer and 15 μm of electroplated Cu. Fig. 7 shows the results of a 90° peel strength test. The average observed peel strength was 1.4 N/cm and the maximum peel strength was 1.8 N/cm. These peel strength values are comparable to those reported by Shen and Dow for adherent Cu films on AlN substrates enabled using chemical grafting. 15 While the above adhesion testing establishes the basic feasibility of an 'all-wet' route for depositing thick, adherent Cu films on glass substrates, further process optimization and characterization are needed to implement the process on large-area substrates and on patterned surfaces.

Conclusions

An 'all-wet' process for Cu metallization of glass substrates has been demonstrated. In this process, interfacial adhesion is promoted through the use of a SAM/Pd-TiO2 adhesion layer that also catalyzes electroless deposition of a Cu seed layer. The mechanism of Pd-TiO2 adhesion layer formation involves the stepwise reduction of Pd2+ to metallic Pd embedded in a TiO2 matrix. Electrodeposition of Cu onto the electroless Cu seed layer enables thick (>10 μm), adherent Cu films. The proposed 'all-wet' strategy provides a significant adhesion enhancement, as observed in tape tests and 90° peel strength tests. Owing to its low cost and ease of integration, the 'all-wet' process offers promise in numerous metallization applications where insulator-metal interfaces are encountered.
Relevance of Caspase-1 and Nlrp3 Inflammasome on Inflammatory Bone Resorption in a Murine Model of Periodontitis

This study investigates the role of the NLRP3 inflammasome and its main effector Caspase-1 in the inflammation and alveolar bone resorption associated with periodontitis. Heat-killed Aggregatibacter actinomycetemcomitans (Aa) was injected 3x/week (4 weeks) into the gingival tissues of wild-type (WT), Nlrp3-KO and Caspase1-KO mice. Bone resorption was measured by µCT, and osteoclast number was determined by tartrate-resistant acid phosphatase (TRAP) staining. Inflammation was assessed histologically (H/E staining and immunofluorescence of CD45 and Ly6G). In vitro studies determined the influence of Nlrp3 and Caspase-1 on Rankl-induced osteoclast differentiation and activity and on LPS-induced expression of inflammation-associated genes. Bone resorption was significantly reduced in Casp1-KO but not in Nlrp3-KO mice. Casp1-KO mice had increased osteoclast numbers, whereas the inflammatory infiltrate and gene expression were similar to those of WT and Nlrp3-KO mice. Strikingly, osteoclasts differentiated from Nlrp3-deficient macrophages had increased resorbing activity in vitro. LPS-induced expression of Il-10, Il-12 and Tnf-α was significantly reduced in Nlrp3- and Casp1-deficient macrophages. As an inceptive study, these results suggest that the Nlrp3 inflammasome does not play a significant role in inflammation and bone resorption in vivo and that Caspase-1 has a pro-resorptive role in experimental periodontal disease.

Inflammasomes are cytosolic multiprotein complexes activated in response to various stimuli, including both microbial-associated molecular patterns (MAMPs) and damage-associated molecular patterns (DAMPs). A major biological function is the final processing of various inflammation-associated cytokines, including IL-1β, IL-18 and IL-33, into their biologically active forms 1 . The relevance of inflammasomes to the immune response is demonstrated by the association between mutations in the genes encoding their protein components and autoimmune inflammatory conditions, or dysregulation of the immune response 2,3 . NLRP3 is the most studied inflammasome and has been associated with various diseases and conditions characterized by chronic inflammation, including gout, cancer, type 2 diabetes and rheumatoid arthritis, besides periodontal diseases 1,4-6 . Periodontal disease is a chronic inflammatory condition induced by microbial insult derived from a highly complex dental biofilm and is the most prevalent lytic lesion of bone in humans 7,8 . This condition represents an excellent model to study the role of inflammasomes due to the abundance of MAMPs and DAMPs and the elevated proportion of macrophages in the tissue microenvironment. Interestingly, there is a relative scarcity of information on the biological roles of inflammasomes derived from clinical or pre-clinical studies in periodontal disease models. Increased expression of IL-1β and IL-18 in the gingival tissues and gingival crevicular fluid of patients with various forms of periodontal disease 9-11 correlates positively with increased expression of NLRP3 mRNA in this microenvironment 12 , suggesting the participation of this inflammasome in the pathogenesis of periodontal disease. The possible involvement of the NLRP3 inflammasome is further suggested by the increased expression of NLRP3 in the oral epithelium 13 and in the saliva of periodontal disease patients 14 .
The canonical effector protein activated downstream of most inflammasomes is the protease Caspase-1, which cleaves the pro-forms of the inflammatory cytokines IL-1β, IL-18 and IL-33, generating their mature secreted forms. Caspase-1 can also induce cell death by pyroptosis 15 . Osteoblasts express inflammasome components, including the core protein of the NLRP3 inflammasome, NALP3 16 . Activation of Caspase-1 leads to cell death by pyroptosis of osteoprogenitor cells 17 . Infection of osteoblastic cells with Aa induced production of IL-1β and IL-18 and apoptosis, both events mediated by the activation of the NLRP3 inflammasome 18 . These events may affect bone turnover and inflammatory bone resorption in vivo. Cytokines processed by the effector Caspase-1, particularly IL-1β, may modulate osteoclast differentiation and activity by direct effects on osteoclasts 19 or by indirectly modulating the expression of RANKL (ligand for receptor activator of nuclear factor-kappa-B) by other cell types 20 . There is only one study assessing the role of the NLRP3 inflammasome in a Pg-colonization model of experimental periodontitis, which reported a reduction in pro-inflammatory cytokine production and resorption of alveolar bone 21 ; however, we did not find any studies assessing the relevance of Caspase-1 in this disease model. This inceptive study intends to contribute to bridging this gap of knowledge by providing insights into the roles of the NLRP3 inflammasome and of Caspase-1 in inflammation and alveolar bone resorption in a murine model of bacterial-induced experimental periodontal disease.

Materials and Methods

Periodontal disease model. A total of 36 C57BL/6 male adult mice (aged between 6 and 8 weeks) were used, including 12 wild-type (WT) mice, 12 mice genetically deficient (knockout) for Nalp3 (Nlrp3-KO), the central NLR protein in the NLRP3 inflammasome, and 12 mice genetically deficient for Caspase-1 (Casp1-KO). All mice were obtained from the Center for genetically modified and transgenic mice, School of Medicine at Ribeirao Preto-University of Sao Paulo (USP). Disruption of the targeted genes was verified at the mRNA and protein levels (Supplemental Fig. 1). This study was carried out in accordance with the principles stated by the Brazilian College of Animal Experimentation and was approved by the Ethical Committee on Animal Experimentation (protocol number 06/2014) of the School of Dentistry at Araraquara, UNESP-State University of Sao Paulo, Araraquara, SP, Brazil. Experimental periodontal disease was induced in 18 mice, including 6 animals of each genotype (WT, Nlrp3-KO and Casp1-KO), by direct bilateral injections of a 3 µL PBS suspension of heat-killed Aggregatibacter actinomycetemcomitans (Aa, JP1 serotype) at 1×10⁹ CFU/mL 22,23 . Eighteen non-disease control mice (n = 6 for each genotype) received bilateral injections of the same volume of the PBS vehicle. These injections were performed under mild general anesthesia with isoflurane (Baxter Healthcare, Deerfield, IL) using a Hamilton-type microsyringe (33 gauge needle) three times/week for 4 weeks, directly into the gingival tissues at the palatal aspect between the first and second upper molars. All animals were euthanized by cervical dislocation 4 weeks after the first injection. The maxillary bones were hemisected and submitted to microcomputed tomographic analysis of alveolar bone resorption.
After scanning, the specimens were submitted to routine EDTA decalcification and processing to obtain paraffin-embedded tissue blocks for the histological and immunofluorescence analyses.

In vitro studies. Primary M-csf-differentiated macrophages were derived from cells obtained from the marrow of long bones (femur and tibia) of WT, Nlrp3-KO and Casp1-KO mice, as previously described 24 . These cells were plated in regular tissue culture-treated and calcium phosphate-coated (Osteologic, Corning-Costar, Corning, NY, USA) 96-well plates (1×10⁴ cells/well) and, after 18 h, stimulated with 50 ng/mL of murine recombinant Rankl and 20 ng/mL of murine recombinant M-csf (Peprotech Inc, Rocky Hill, NJ, USA). The medium was changed and these stimuli re-applied at 72 h. Cultures were kept for an additional 48 h (a total of 5 days of osteoclastic differentiation). Cells grown on regular tissue culture-treated plastic were fixed with paraformaldehyde, permeabilized in saponin-containing buffer (BD Cytofix/Cytoperm, BD Biosciences, San Jose, CA, USA) and stained with AlexaFluor 488-conjugated phalloidin (Molecular Probes, ThermoFisher Scientific, Waltham, MA, USA) for 40 minutes, followed by DNA staining with DAPI (Sigma-Aldrich Co., St. Louis, MO, USA) for 5 minutes for the identification of actin ring formation. Total RNA was also isolated in parallel experiments for RT-qPCR. Both M-csf-differentiated macrophages (20 ng/mL for 2 days) and Rankl/M-csf-differentiated osteoclasts grown in regular 96-well tissue culture plates (1×10⁵ macrophages/well, 1×10⁴ bone marrow cells/well for osteoclasts) were lysed for RNA isolation. Macrophages were stimulated with 100 ng/mL of LPS (E. coli LPS, Sigma-Aldrich Co., St Louis, MO, USA) or with the same volume of PBS vehicle for 18 h. Cells grown on calcium phosphate-coated 96-well plates were lysed by incubation in 1% sodium hypochlorite for 15 min. Three digital images from each well (covering >80% of the well surface) of the phalloidin/DAPI-stained and calcium phosphate-coated plates were obtained at 40X magnification on an inverted digital fluorescence microscope (Evos fl, AMG Micro, ThermoFisher Scientific, Waltham, MA, USA). A trained examiner blind to the experimental conditions counted the number of osteoclasts (cells with evidence of actin ring formation and containing three or more nuclei) and measured the perimeter of the osteoclasts in the merged green/blue channel fluorescent images. In the images from calcium phosphate-coated wells, the area of exposed plastic was measured as indicative of resorbing activity. A trained examiner not aware of the experimental conditions performed these measurements using ImageJ software (v. 1.51s, National Institutes of Health, USA - http://imagej.nih.gov/ij).
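The pit-assay readout described above (area of exposed plastic) was measured in ImageJ; conceptually it is a single thresholding step. A minimal numpy sketch follows, using a synthetic image; the threshold value and pit geometry are our illustrative choices, not parameters from the study.

```python
# Mirror of the pit-assay quantification: estimate the fraction of exposed
# plastic (resorbed area) in an image of a calcium phosphate-coated well by
# simple intensity thresholding.
import numpy as np

def resorbed_area_fraction(gray: np.ndarray, thresh: float) -> float:
    """gray: 2-D array of pixel intensities; resorbed pits appear brighter
    (clear plastic) than the intact mineral coating."""
    pit_mask = gray > thresh
    return float(pit_mask.mean())              # fraction of pixels in pits

# Toy image: dark coating (~0.3) with three bright circular pits (~0.9).
rng = np.random.default_rng(1)
img = rng.normal(0.3, 0.05, (512, 512))
yy, xx = np.mgrid[0:512, 0:512]
for cy, cx, r in ((100, 150, 40), (300, 300, 60), (420, 90, 30)):
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = 0.9

print(f"resorbed fraction: {100 * resorbed_area_fraction(img, 0.6):.1f}%")
```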
Microcomputed tomography analysis (µCT scanning). The hemimaxillae were initially fixed in 4% buffered formalin for 24 h and transferred to 70% alcohol until scanning at 56 kV and 300 mA with a 0.5 mm aluminum attenuation filter, with the resolution of the slices set to 18 µm, using a µCT system (Skyscan, Aartselaar, Belgium). Tridimensional images were reconstructed, and the resulting images were oriented in three planes (sagittal, coronal and frontal) in a standardized manner using anatomical landmarks with the NRecon and DataViewer software (Skyscan, Aartselaar, Belgium). A standardized 5.4 mm³ region of interest (ROI) was set with dimensions of 1.5×4.0×0.9 mm (vertical or cervico-apical × horizontal or mesio-distal × lateral or buccal-palatal). This cuboidal ROI was positioned on the central sagittal section (identified by the diameter of the root canal in the distal root of the first molar) using the following references: 1. the cervical/coronal reference was the roof of the furcation area between the mesial and distal roots of the upper first molar; 2. mesially, we used the distal aspect of the mesial root of the first molar. The thickness of the ROI was set to 50 slices (900 µm) counted from this central section towards the palatal/medial direction on the sagittal plane. For the analysis, a standardized grey-level threshold was set to distinguish between non-mineralized and mineralized tissues. Considering that variations in the size and mineralization of the tooth structures included in the ROI were irrelevant among the different animals, the analysis assessed the percentage of mineralized tissue (MT) within the total volume (TV) of the ROI, presented as a ratio (MT/TV). A decrease in this ratio is interpreted as indicative of bone resorption.

Histological analysis. The hemimaxillae with intact surrounding soft tissues were fixed in 4% buffered formalin for 48 h, decalcified in EDTA (0.5 M, pH 8.0) for 45 days at room temperature, and embedded in paraffin. Semi-serial sections of 4 µm thickness were obtained in the buccal-lingual (frontal plane) direction and stained with hematoxylin and eosin (H/E).

Immunohistochemical staining of TRAP. Nine unstained semi-serial sections from each paraffin-embedded hemimaxilla, spanning 1000 µm in the antero-posterior direction (sagittal plane, n = 6 animals/experimental condition and genotype), were used for detection of tartrate-resistant acid phosphatase (TRAP) expression. Briefly, the sections were deparaffinized in xylene and rehydrated in decreasing concentrations of ethanol. Endogenous peroxidase was blocked using 3% peroxide in methanol (5 min, RT), followed by antigen retrieval by heating (95-98 °C) the sections in Tris/EDTA buffer (pH 9.0) for 15 min and blocking (1 h, RT) of non-specific binding with 2% BSA. The primary antibody for TRAP (cat# ab191406, Abcam, Cambridge, MA, USA) was diluted (1:100) in background-reducing solution (Dako-Agilent, Santa Clara, CA, USA) and incubated overnight at 4 °C. The detection reaction was developed using an HRP-DAB visualization system (LSAB2, Dako-Agilent, Santa Clara, CA, USA). Osteoclasts were identified as large TRAP-positive cells containing three or more nuclei located in the vicinity of the alveolar bone. A single trained examiner who was blind to the coding identifying the experimental groups and genotypes counted the osteoclasts (at 40X magnification) located from the apical portion of the palatal root of the first molar along the periodontal ligament upwards to the alveolar bone crest and towards the center of the palate, adjacent to the depression in the palatal bone associated with the major palatine artery and nerve.

Immunofluorescence analysis of the inflammatory infiltrate. Unstained semi-serial sections of 4 µm thickness (9 sections per animal and experimental group, spanning 900 µm in the sagittal plane) were deparaffinized in two changes of xylenes for 15 and 5 minutes, and then dehydrated in 100% ethanol for 2 minutes. The slides were then rehydrated through a graded ethanol series (95% and 70%) for 2 minutes each, followed by a wash in distilled water for 1 minute, and then placed in Trilogy buffer (Cell Marque, Hot Springs, AR, USA) at 96.5 °C for 25 minutes.
Slides were cooled in a distilled water bath for 5 minutes and rinsed in 1x TBS (Tris-Buffered Saline). Sections were permeabilized with 0.1% Triton X-100 (ThermoFisher Scientific, Waltham, MA, USA) and blocked in 1X PBS-T (0.5% Tween 20) containing 10% normal goat serum (Life Technologies, ThermoFisher Scientific, Waltham, MA, USA) at room temperature for 30 minutes. Slides were washed with 1x TBS and incubated with primary antibodies diluted in 1x TBS for 24 h at 4 °C. The primary antibodies and dilutions used were as follows: CD45 at 1:200 (rat IgG, purified anti-mouse CD45, Biolegend, San Diego, CA, USA) for 'general' leukocyte infiltration, and Ly6G at 1:100 (rat IgG, purified anti-mouse Ly6G, Biolegend, San Diego, CA, USA) for specific neutrophil (PMN) staining. The negative control included an irrelevant rat IgG at 1:100 dilution. Tissues were washed three times for 5 minutes each in 1x TBS and incubated with a goat anti-rat secondary antibody conjugated with AlexaFluor 594 (ThermoFisher Scientific, Waltham, MA, USA) at 1:650 and 1:200, respectively, for 2 hours at room temperature. The DNA dye 4',6-diamidino-2-phenylindole (DAPI) was used to counterstain cell nuclei. Images were obtained on an EVOS fl digital inverted microscope (ThermoFisher Scientific, Waltham, MA, USA), and the median fluorescence intensity (MFI) in the red channel was determined using ImageJ software (v. 1.51s, National Institutes of Health, USA - http://imagej.nih.gov/ij).

Quantitative Reverse-Transcription Real-Time PCR. Total RNA was extracted from bone marrow-differentiated macrophages and Rankl/M-csf-differentiated osteoclasts (n = 7 samples from independent experiments, assessed in triplicate) using affinity columns (RNAqueous-4PCR, Ambion Inc, Invitrogen Corp., Foster City, CA, USA) according to the manufacturer's protocol. The quantity and purity of the total RNA were determined by UV spectrophotometry and by the 260/280 nm ratio, respectively. 500 ng of total RNA was converted into cDNA using random hexamer primers and Moloney leukemia virus reverse transcriptase in a reaction volume of 20 µL (High Capacity cDNA Synthesis kit, Applied Biosystems, Invitrogen Corp., Foster City, CA, USA). The qPCR reactions were performed in a 20 µL total reaction volume, including TaqMan qPCR mastermix (TaqMan Fast Advanced, Applied Biosystems, Invitrogen Corp., Foster City, CA, USA), cDNA template, deionized water, and mouse-specific pre-designed and optimized sets of primers and probe (TaqMan gene expression assays, Applied Biosystems, Invitrogen Corp., Foster City, CA, USA; Supplemental Table 1). Cycling conditions were pre-optimized by the supplier of the primer/probe sets and master mix, and 40 cycles were run on a StepOne Plus qPCR thermocycler (Applied Biosystems, Invitrogen Corp., Foster City, CA, USA). Relative levels of gene expression were determined by the ∆(∆Ct) method using the thermocycler's software and automated detection of the Ct. Expression of 18S RNA in the same samples was used to normalize the results of the target genes.
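The ∆(∆Ct) normalization described above can be written out explicitly. A minimal sketch follows, with toy Ct values (not data from this study) and 18S as the normalizer, as in the paper.

```python
# Minimal sketch of the delta-delta-Ct calculation described above. Ct
# values are illustrative, not data from the study; 18S rRNA is the
# normalizer, as in the paper.

def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Fold change of a target gene vs. the control condition (2^-ddCt)."""
    d_ct_sample = ct_target - ct_ref              # normalize to 18S, sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize to 18S, control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: Tnf-a in LPS-stimulated vs. PBS-treated macrophages.
fold = relative_expression(ct_target=24.0, ct_ref=10.2,            # LPS well
                           ct_target_ctrl=27.5, ct_ref_ctrl=10.0)  # PBS well
print(f"Tnf-a fold change: {fold:.1f}x")   # ~13x with these toy numbers
```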
Statistical analysis. The statistical analysis was performed using Prism 8.3 (GraphPad Software LLC, San Diego, CA, USA). Central tendency and dispersion measures were calculated from the different experiments. Comparisons between the experimental conditions (control/vehicle versus disease/stimulated) within each genotype background (WT, Nlrp3-KO, Casp1-KO) were performed using unpaired t-tests with Welch's correction. Comparisons among the different genotypes (WT, Nlrp3-KO, Casp1-KO) in each experimental condition (control/vehicle versus disease/stimulated) were performed using Brown-Forsythe and Welch's ANOVA followed by a post-hoc test for pairwise comparisons. The significance level was set at 95% (p < 0.05) for all analyses.
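For readers without Prism, the two comparisons map onto standard routines. A sketch with placeholder measurements follows; scipy's classical one-way ANOVA stands in for the Brown-Forsythe and Welch variant, which has no direct one-line scipy equivalent.

```python
# Sketch of the two statistical comparisons described above, using scipy in
# place of Prism. The arrays are placeholder measurements (e.g., MT/TV
# ratios), not data from the study.
import numpy as np
from scipy import stats

wt_ctrl = np.array([0.62, 0.60, 0.65, 0.63, 0.61, 0.64])
wt_dis  = np.array([0.49, 0.52, 0.47, 0.50, 0.51, 0.48])

# Within-genotype comparison: Welch's unpaired t-test (equal_var=False).
t, p = stats.ttest_ind(wt_ctrl, wt_dis, equal_var=False)
print(f"WT control vs disease: t = {t:.2f}, p = {p:.4f}")

# Across-genotype comparison within one condition: one-way ANOVA.
nlrp3_dis = np.array([0.50, 0.53, 0.49, 0.52, 0.50, 0.51])
casp1_dis = np.array([0.56, 0.58, 0.55, 0.59, 0.57, 0.56])
f, p = stats.f_oneway(wt_dis, nlrp3_dis, casp1_dis)
print(f"Genotype effect (disease): F = {f:.2f}, p = {p:.4f}")
```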
Results

Inflammatory bone resorption is attenuated in Casp1-KO, but not in Nlrp3-KO mice. Interestingly, µCT analysis showed a greater volume of mineralized tissue (MT) in non-disease control (PBS-injected) Nlrp3-KO and Casp1-KO animals in comparison with WT mice. Injection of heat-killed Aa effectively induced alveolar bone resorption in all genotypes, but the severity of resorption was significantly attenuated in Casp1-KO mice (Fig. 1A,B). These results suggest that Nlrp3 and Caspase-1 may have a role in physiological bone turnover and that Caspase-1, but not Nlrp3, has a role in promoting inflammatory bone resorption.

Osteoclast number is increased in Casp1-KO mice. Considering the attenuation of bone resorption in Casp1-KO mice, the increase in the number of osteoclasts associated with the induction of experimental periodontal disease was surprisingly similar in all genotypes. Osteoclasts were identified as large, positively stained multinucleated cells located in the vicinity of the alveolar bone (indicated by arrows in Fig. 2A). These results indicate that osteoclast differentiation in this inflammatory microenvironment is not affected by the lack of Nlrp3 or Caspase-1. In fact, counter-intuitively, the lack of Caspase-1 significantly increased osteoclast numbers, but not bone resorption (Fig. 2B). There was a slight increase in osteoclast numbers in Nlrp3-KO mice, but it was not statistically significant.

Infiltration of PMNs is attenuated in Casp1-KO mice. Induction of experimental periodontal disease was associated with a marked increase in the inflammatory cell infiltrate in WT, Nlrp3-KO and Casp1-KO mice (Fig. 3A; the inflamed area corresponding to the site of injections is indicated by an asterisk, 'BC' indicates the alveolar bone crest and 'R' indicates the palatal root of the upper first molar; H/E-stained images). Immunofluorescence analysis (Fig. 3B) shows a marked increase in leukocyte (CD45+) infiltration in the gingival tissues of WT, Nlrp3-KO and Casp1-KO mice. There was no statistically significant difference in the overall inflammatory infiltrate (CD45+ cells) in diseased gingival tissues among the three genotypes; however, there was a significant decrease in the PMN (Ly6G+) infiltrate in Casp1-KO mice, which also had the smallest relative increase of PMN infiltration with the induction of experimental periodontal disease.

Nlrp3 and Caspase-1 deficiency reduces LPS-induced expression of the inflammatory phenotype-associated genes Il-10, Il-12 and Tnf-α in primary bone marrow-derived macrophages. In WT and Nlrp3-deficient macrophages, LPS stimulation caused a statistically significant increase in Il-10, Il-12 and Tnf-α, whereas in Casp1-deficient macrophages only Il-10 was significantly induced. In comparison with WT macrophages, the expression of all three candidate genes was markedly reduced in Nlrp3- and Casp1-deficient macrophages, although the inhibition of Il-12 expression was not statistically significant (Fig. 4).

Caspase-1 and Nlrp3 deficiency increases the activity of Rankl-derived osteoclasts in vitro. Macrophages from the bone marrow of WT, Casp1-KO and Nlrp3-KO mice differentiated into osteoclasts when treated with Rankl and M-csf over 5 days. The osteoclasts derived from Nlrp3-KO mice were significantly larger, which accounts for the reduction in their number given the limited cell growth area (Fig. 5A). Moreover, osteoclasts differentiated from the bone marrow cells of both Nlrp3- and Casp1-KO mice had a significantly greater resorbing activity than osteoclasts derived from the bone marrow of WT mice (Fig. 5B; the clear areas indicated by asterisks in the images result from resorption of the calcium phosphate coating by the osteoclasts). Mmp-9 expression was also increased upon osteoclast differentiation by treatment with Rankl over 5 days. Osteoclasts derived from the bone marrow of Nlrp3-KO mice had significantly greater expression of Mmp-9 in comparison with osteoclasts derived from the bone marrow of WT animals (Fig. 5B).

Discussion

To our knowledge, this is the first study to assess the role of Caspase-1 in an in vivo model of bacterial-induced periodontal disease. The results indicate that the Nlrp3 inflammasome does not have a relevant role in the inflammatory bone resorption in this model. These results should be considered in the context of an inceptive study, generating insights and questions that shall be further explored in future studies. Inactivation of the Caspase-1 gene significantly attenuated inflammatory bone resorption; however, this effect was not accompanied by a reduction in the number of osteoclasts, in the inflammatory infiltrate, or in the transcription of selected candidate genes associated with inflammation and with mineralized and non-mineralized soft tissue degradation. These results are in contrast with those of a study that used a Porphyromonas gingivalis oral colonization model of experimental periodontitis, which showed a significant attenuation of bone resorption in Nlrp3-deficient mice 21 . The differences in experimental design (oral colonization with live bacteria versus direct injection of heat-killed bacteria), in the bacterial species used (P. gingivalis versus A. actinomycetemcomitans) and in the methods of assessment of bone resorption (histomorphometric linear measurements versus µCT tridimensional volumetric analysis) may account for the discrepancy. Importantly, the limitations of the experimental approach in the present study have to be considered when interpreting the data, particularly the lack of data on protein production (most notably of IL-1β, the prototypical NLRP3 inflammasome-activated cytokine) and of data exploring the biological mechanisms involved in the observed phenotype. Interestingly, most in vivo and in vitro studies related to periodontal disease used the Gram-negative bacterial species Porphyromonas gingivalis (Pg) as the exogenous stimulus. In spite of its relevance to periodontitis, Pg is not the only bacterial species associated with periodontal diseases in the subgingival dental biofilm, and some studies have demonstrated that other microbial species, such as Streptococcus sanguis 25 , Mycoplasma salivarium 26 , Fusobacterium nucleatum (Fn) 27 and Aggregatibacter actinomycetemcomitans (Aa) 28 , also regulate the expression of NLRP3 inflammasome components and of the inflammasome-processed cytokines. In fact, there is evidence that NLRP3 is differentially regulated by secreted products from supra- and subgingival biofilms 29 , as well as conflicting information regarding the inhibitory 30-33 or stimulatory 33-38 effects of Pg and Pg-derived antigens on NLRP3 expression and activation.
These conflicting reports are related to the assessment of different cell types, the Pg-derived antigen used, the experimental design (e.g., second signal for inflammasome activation, experimental period, outcomes assessed) and the presence or absence of hypoxia. In this study we used heat-killed Gram-negative bacteria associated with periodontal disease in humans to avoid issues with possible fluctuations of bacterial cell viability and also issues with adherence/colonization in the oral environment. In vitro studies show that Aggregatibacter actinomycetemcomitans (Aa, used in this study) induces expression of NLRP3 and IL-1β by human monocytes 39 and human PBMCs 40 . Some studies report that the leukotoxin secreted by Aa is a crucial virulence factor mediating the induction of IL-1β and IL-18 41 ; however, another in vitro study using human monocytes infected with mutant strains of Aa (knockout for the leukotoxin and cytolethal distending toxin genes) also showed increased expression of IL-1β, IL-18 and NLRP3, suggesting that other molecules derived from Aa may activate inflammasomes 40 . This supports the possibility of inflammasome activation in our model using heat-killed Aa, which is further indicated by our in vitro data (Supplemental Fig. 2) demonstrating the agonistic effect of heat-killed Aa on bone marrow-derived macrophages. The goal of our model was to induce an inflammatory response and the associated inflammatory alveolar bone resorption, the two major hallmarks of periodontal disease. This model provides the two signals that are required for activation of the Nlrp3 inflammasome: exogenous microbial-derived PAMPs, which trigger the production of cytokine precursors (e.g., pro-IL-1); and a second signal, represented by the interaction of DAMPs with their PRRs (e.g., RAGE, HMGB1). The two-signal model of activation of NLRP3 was already demonstrated in the context of periodontal disease by stimulating primary gingival epithelial cells with Porphyromonas gingivalis, which resulted in downregulation of NLRP3 expression and an increase of pro-interleukin-1β expression. Increased secretion of interleukin-1β was only detected upon stimulation with extracellular ATP as the danger signal/second signal 34 . In addition to the presence of specific ligands/activators of NLRP3, the chronic inflammatory microenvironment of periodontal disease is characterized by high levels of reactive oxygen species (ROS), hypoxia 33 and also by tissue degradation, with accumulating DAMPs. Both ROS and DAMPs (which may also induce production of ROS) can activate multiple inflammasomes besides NLRP3, including AIM2, NLRP1, NLRC4 and NLRP6 42 . Thus, inflammasome activation may derive from the direct recognition/interaction of bacterial and host-derived ligands by the inflammasome central/sensor proteins, coupled with the detection of cell changes induced by external microbial/stress stimuli 43 . Specifically regarding periodontal disease, there is scarce information available. Increased gene expression of NLRP3 and NLRP2, but not of ASC-1, was reported in the presence of periodontal disease in humans, and these increased levels of inflammasome core genes were correlated with increased mRNA of IL-1β and IL-18, cytokines processed by the inflammasomes 12 .
Increased expression of NLRP3 and AIM2 in the gingival tissues of patients with periodontal disease is positively correlated with the levels of IL-1β and IL-18, suggesting that various inflammasomes may participate in the microbial-induced inflammation in periodontal diseases 35 . In our experimental model, it is important to consider that only the NLRP3 inflammasome was disrupted in Nlrp3-KO mice (Supplemental Fig. 1), and this may cause a compensatory activation of the other functional inflammasomes (e.g., NLRC4, AIM2), with a shift in the biological effects. In fact, induction of experimental arthritis in Il10-KO mice caused a significant increase in the expression of Nalp3, Aim2 and Caspase-1 44 . Nevertheless, the lack of functional Nlrp3 did not affect the inflammatory infiltrate and alveolar bone resorption in this model. In contrast, genetically modified mice with a global gain-of-function mutation of Nlrp3 45 demonstrated increased production of proinflammatory mediators, which was associated with altered bone turnover and reduced bone mass; however, when the Nlrp3 gain-of-function mutation was limited to osteoclasts, there was no increase in proinflammatory cytokines accompanying the decrease of bone mass 46 . Rankl-induced osteoclast differentiation in vitro was not affected in Caspase-1- and Nlrp3-deficient macrophages, but the cell size, resorbing activity and Mmp-9 expression of the resulting osteoclasts were all significantly increased in comparison with bone marrow-derived macrophages from WT mice. LPS induction of Il-10, Il-12 and Tnf-α was significantly reduced in Nlrp3- and Casp1-deficient macrophages, suggesting that Nlrp3 influences Tlr4-associated gene expression. This is surprising, considering that a major biological function of inflammasomes is the posttranslational processing of cytokines and that LPS stimulation is usually considered a first signal priming macrophages for inflammasome activation. In our experiments, the 18 h stimulation period may have allowed for the production of molecules that provide an autocrine/paracrine second stimulatory signal for the activation of inflammasomes. Prolonged stimulation of macrophages with LPS has been recently shown to promote the maturation and secretion of IL-1β independently of Nlrp3 activation 47 . However, both constitutive and LPS-induced expression of Il-10 is reported to be significantly reduced in Nlrp3-deficient murine macrophages, even with a shorter period of stimulation (4 h) 48 . The mechanism associated with this possible crosstalk between TLR signaling and the NLRP3 inflammasome is unknown, but at least for IL-10 it is independent of the major signaling pathways activated downstream of TLR4 (p38 and ERK MAP kinases and NF-kB) 48 . It is also possible that the lack of LPS-induced posttranslational modifications of Nlrp3 49-51 in the macrophages derived from Nlrp3-KO mice is involved in the inhibition of cytokine expression. Moreover, we cannot rule out the involvement of the non-canonical Caspase-11 inflammasome in regulating the responses of macrophages to LPS in the cytosol 52,53 , although it is still unclear how LPS may enter the cells. Interestingly, the Caspase-11 non-canonical inflammasome can also induce a non-canonical activation of the Nlrp3 inflammasome 54 . Various possible scenarios may be implicated in these intriguing and apparently contradictory findings in vitro.
In vivo, osteoclast precursor cells may be derived from a different population of precursor cells, influenced by a much more complex microenvironment that includes other inflammatory and stromal cell types, as well as multiple biologically active molecules, including cytokines, chemokines and lipid-derived mediators, as opposed to the simpler and more defined microenvironment of the in vitro experiments. Also, the experimental periods in vitro are not commensurable with the experimental periods of the in vivo experiment. These possibilities will be further investigated in subsequent studies.

[Fig. 5 caption: Representative images of the pit assay using calcium phosphate-coated plates, indicating the resorptive activity of osteoclasts according to the genotype of the mice (the clear areas of exposed plastic substrate, indicated by asterisks, result from the removal of the calcium phosphate coating by the osteoclasts; 40×, scale bar 1000 µm). Quantification of resorptive activity and Mmp-9 mRNA expression are depicted in the graphs. Bars represent means and vertical lines the standard deviation (n = 7 independent experiments, analyzed in triplicate) of the area of exposed plastic substrate (top) and normalized Mmp-9 mRNA expression (bottom). The asterisk (*) indicates a significant difference between the resorbed area in WT osteoclasts and that in osteoclasts derived from Nlrp3- and Casp1-KO mice (Brown-Forsythe and Welch's ANOVA followed by Dunnett's multiple comparisons test).]

Since Caspase-1 is the main downstream effector of all inflammasomes, Casp1-KO mice may be considered representative of a global inflammasome loss of function, which in our experimental model was associated with a significant attenuation of inflammatory bone resorption, indicating that Caspase-1 activation in the microenvironment of periodontal disease has a relevant role in inflammatory bone resorption. We speculate that activation of other Caspase-1-activating inflammasomes may compensate for the inactivation of Nlrp3 in Nlrp3-KO mice, which may account for the lack of influence on bone resorption associated with experimental periodontitis in these mice. Strikingly, the inflammatory infiltrate, osteoclastogenesis and expression of candidate inflammatory genes were not affected by the lack of Caspase-1 in our experimental model, which contrasts with the phenotype of attenuated bone resorption. Speculatively, lack of Caspase-1 may cause a functional impairment and/or a phenotypical change through both direct and indirect influences on the cascade of events in inflammatory cells and osteoclasts, such as a shift towards the activation of the non-canonical Caspase-11 inflammasome by bacteria phagocytosed by macrophages in our experimental model. The significant impairment in PMN infiltration observed in Casp1-KO mice suggests that PMNs may play a role in alveolar bone resorption in this model and supports a phenotypical change in the inflammatory response. Interestingly, our in vitro data indicate that Rankl-induced osteoclastic differentiation is not affected in Casp1-deficient macrophages, which supports the in vivo finding of a similar number of TRAP-positive osteoclasts in WT and Casp1-KO mice. However, in vitro Rankl-differentiated osteoclasts derived from bone marrow macrophages of Nlrp3- and Casp1-deficient mice were larger and presented significantly increased resorptive activity, which would be expected to increase bone resorption in vivo.
It is possible that this discrepancy is due to the presence of other biologically active mediators (besides the Rankl and M-csf used in vitro) and cell types in the microenvironment in vivo, which may have reduced the resorbing activity. We speculate that osteoclast differentiation and activation are distinct processes that may be regulated independently. These possibilities will be explored in future experiments. In addition, these results indicate that other inflammatory pathways are involved in the pathogenesis of periodontal diseases, as inflammation and alveolar bone resorption were not completely abrogated in either Nlrp3- or Casp1-KO mice. The suggestion is that inflammasome activation may affect bone turnover indirectly (i.e., via modulation of the level of active proinflammatory mediators) or directly (i.e., via an osteoclast-specific effect). The osteoclast-specific effect may not necessarily be associated with a change in osteoclast number (i.e., osteoclast differentiation), but rather with increased osteoclast activity, as indicated by the in vitro experiments (Fig. 5A,B). On the other hand, selective blockage of Nlrp3 in vitro has been shown to reduce Rankl-induced osteoclastogenesis 44 . These contrasting results may be associated with differences in the experimental approach, particularly the use of bone marrow-derived macrophages from Il10-KO mice and the criterion used to identify the osteoclasts among the TRAP-positive cells, as cell size and the presence of three or more nuclei were not considered. Of note, we did not assess the infiltration of macrophages, important osteoclast precursors and prototypical inflammasome-expressing cells, in the gingival tissues. It is possible that changes in the macrophage infiltrate and/or phenotype play a role in the observed phenotype of Nlrp3- and Casp1-KO mice. The influence of Nlrp3 inflammasome and Caspase-1 activities on in vitro chemotaxis and infiltration of macrophages in vivo, as well as on the phenotype of macrophages, needs to be explored in subsequent studies. In summary, the NLRP3 inflammasome did not play a significant role in inflammation and bone resorption in the heat-killed Aa-induced periodontal disease model, whereas the lack of Caspase-1 attenuated inflammatory bone resorption and the infiltration of PMNs. Taken in the context of an inceptive study, the apparent contradictions and insights from the data presented stimulate further research into the biological mechanisms by which inflammasomes can influence the destruction of both soft and mineralized connective tissues in chronic inflammatory conditions associated with host-microbial interactions.

Data availability

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
Some Notes on the Omega Distribution and the Pliant Probability Distribution Family

In 2020 Dombi and Jónás (Acta Polytechnica Hungarica 17:1, 2020) introduced a new four-parameter probability distribution which they named the pliant probability distribution family. One of the special members of this family is the so-called omega probability distribution. This paper deals with an important characteristic of these new cumulative functions, their "saturation" to the horizontal asymptote, with respect to the Hausdorff metric. We obtain upper and lower estimates for the value of the Hausdorff distance. A simple dynamic software module using CAS Mathematica and Wolfram Cloud Open Access is developed. Numerical examples are given to illustrate the applicability of the obtained results.

Introduction

This paper deals with the asymptotic behavior of the Hausdorff distance between the Heaviside function and some novel distribution functions. The study can be very useful for specialists working in several scientific fields such as insurance, financial mathematics, analysis, and the approximation of data sets in various modeling problems. The application of the Hausdorff metric to different approximation problems is the topic of many scientific works (for example, see the articles [1-8], the monographs [9-18] and the references therein).

Definition 1. The shifted Heaviside step function is defined by
$$h_{t_0}(t)=\begin{cases}0, & t<t_0,\\ [0,1], & t=t_0,\\ 1, & t>t_0.\end{cases}$$

The theory of Hausdorff approximations is due to the Bulgarian mathematician Blagovest Sendov. His work and achievements are connected to the approximation of functions with respect to the Hausdorff distance.

Definition 2. [19] The Hausdorff distance (the H-distance) $\rho(f,g)$ between two interval functions $f,g$ on $\Omega\subseteq\mathbb{R}$ is the distance between their completed graphs $F(f)$ and $F(g)$ considered as closed subsets of $\Omega\times\mathbb{R}$. More precisely,
$$\rho(f,g)=\max\Big\{\sup_{A\in F(f)}\inf_{B\in F(g)}\|A-B\|,\ \sup_{B\in F(g)}\inf_{A\in F(f)}\|A-B\|\Big\},$$
wherein $\|\cdot\|$ is any norm in $\mathbb{R}^2$, e.g., the maximum norm $\|(t,x)\|=\max\{|t|,|x|\}$; hence the distance between the points $A=(t_A,x_A)$ and $B=(t_B,x_B)$ in $\mathbb{R}^2$ is $\|A-B\|=\max(|t_A-t_B|,|x_A-x_B|)$.

In 2019 Dombi et al. [20] (see also [21]) suggested an auxiliary function that is called the omega function. The authors presented the main properties of the omega function: domain, differentiability, monotonicity, limits and convexity. One of the important properties is that the omega function $\omega_m^{(\alpha,\beta)}$ ($\alpha,\beta,m\in\mathbb{R}$, $\beta,m>0$) and the exponential function $f(x)=e^{\alpha x^{\beta}}$ ($\alpha,\beta\in\mathbb{R}$, $\beta>0$) may be derived from a common differential equation. Once more, it is shown that the omega function is asymptotically identical with the exponential function (for more details see ([20] Theorem 1) and ([21] Proposition 1)). Some probability distributions are founded on this auxiliary function, such as the omega probability distribution (see [20]) and the pliant probability distribution family (see [21,22]). Hence, some probability distributions whose formulas include exponential terms can also be approximated using this function, for example the well-known Weibull, Exponential and Logistic probability distributions. In this paper, we study the asymptotic behavior of the Hausdorff distance between the Heaviside function and the pliant probability distribution function. We also study the omega distribution function. We develop a self-intelligent dynamic software module using the obtained results. Several numerical examples are presented.
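To make Definition 2 concrete, the H-distance can be approximated numerically by sampling the two completed graphs and brute-forcing the max-norm distances. The sketch below is ours (all names are illustrative); note that the vertical segment of the Heaviside graph from Definition 1 must be sampled explicitly.

```python
# Numerical illustration of Definition 2: approximate the Hausdorff
# distance between two completed graphs under the maximum norm on R^2.
import numpy as np

def hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets in R^2."""
    D = np.max(np.abs(A[:, None, :] - B[None, :, :]), axis=2)  # max norm
    return max(D.min(axis=1).max(), D.min(axis=0).max())

t = np.linspace(-2.0, 2.0, 1001)
F = np.column_stack((t, 1.0 / (1.0 + np.exp(-10.0 * t))))  # logistic graph

# Completed graph of the shifted Heaviside h_0: two horizontal pieces
# plus the vertical segment {0} x [0,1] from Definition 1.
s = np.linspace(0.0, 1.0, 201)
H = np.vstack((np.column_stack((t[t < 0], np.zeros((t < 0).sum()))),
               np.column_stack((np.zeros_like(s), s)),
               np.column_stack((t[t > 0], np.ones((t > 0).sum())))))

print(f"rho(logistic, h_0) ~ {hausdorff(F, H):.4f}")  # ~0.163 here
```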
The Pliant Probability Distribution Family

In 2020, based on the omega function, Dombi and Jónás [21] proposed a new four-parameter probability distribution function called the pliant probability distribution function (see also ([22] Chapter 3)).

Definition 4. The pliant probability distribution function $F_p(x;\alpha,\beta,\gamma,m)$ (pliant CDF) is defined in terms of the omega function $\omega_m^{(\alpha,\beta)}$.

According to its properties, this new probability distribution can be applied in many fields of science and in a wide range of modeling problems. The pliant probability distribution is a generalization of the epsilon probability distribution (see [23]). In 2020 Árva noted that the omega probability distribution (see [20]) can be derived from the pliant probability distribution function after reparametrization or by utilizing its asymptotic properties (see ([24] Lemma 2)). In 2020 Kyurkchiev [25] considered the asymptotic behavior of the Hausdorff distance between the shifted Heaviside function and the so-called epsilon probability distribution, and proved a precise bound for the values of the Hausdorff distance. He noted that one may formulate the corresponding approximation problem for the omega probability distribution. This is the main purpose of Section 3 of this work.

This section is dedicated to the behavior of the CDF of the pliant probability distribution, and more precisely to its "saturation to the horizontal asymptote a = 1 in the Hausdorff sense". The Hausdorff distance $d$ between $F_p(x;\alpha,\beta,\gamma,m)$ and the Heaviside function $h_{t_0}(t)$ satisfies the nonlinear equation
$$F_p(t_0+d;\alpha,\beta,\gamma,m)=1-d.$$
In the next theorem, we prove upper and lower estimates for the Hausdorff approximation $d$.

Dombi and Jónás [21] showed in detail that the pliant probability distribution can approximate well several other functions (see also ([22] Chapter 3)). Some of the special cases are presented in Table 1, whose "Approximated CDF" column includes, among others, the Weibull distribution. Let us consider some computational examples. The obtained results are presented in Table 2. In these examples, for different values of the parameters α, β, γ, m, we calculate the Hausdorff distance between the Heaviside step function $h_{t_0}(t)$ and the pliant probability distribution $F_p(x;\alpha,\beta,\gamma,m)$. Graphical results are presented in Figure 3, and it can be seen that the "saturation" is fast. In the last column of Table 2 we indicate which classical probability distribution can be considered as an approximation of the pliant probability distribution function.

In this section, we investigate the omega probability distribution in the Hausdorff sense, as a continuation of the work of Kyurkchiev [25] and as a corollary of the pliant probability distribution. Let $\alpha,\beta,m>0$ and $t\in(0,m)$, and let $A$ denote the constant associated with the function $F(t;\alpha,\beta,m)$ given in (5). The Hausdorff distance $d$ between $F(t;\alpha,\beta,m)$ and the Heaviside function $h_{t_0}(t)$ satisfies the nonlinear equation
$$F(t_0+d;\alpha,\beta,m)=1-d.$$
The next theorem is a corollary of Theorem 1 in the case of the omega distribution.

Theorem 2. Let $2.1A>e^{1.05}$. Then for the Hausdorff distance $d$ between the shifted Heaviside function $h_{t_0}(t)$ and the omega CDF $F(t;\alpha,\beta,m)$ defined by (5), the following inequalities hold true:
$$d_l\le d\le d_r.$$

In Table 3 we present several computational examples showing the behavior of the omega CDF $F(t;\alpha,\beta,m)$ for different values of the parameters α, β and m. We use Theorem 2 for the computation of the upper and lower estimates $d_l$ and $d_r$. It can be seen that the proven estimates for the value of the Hausdorff distance $d$ are reliable in the approximation of the shifted Heaviside function $h_{t_0}(t)$ by the omega CDF $F(t;\alpha,\beta,m)$. The graphical representation in Figure 4 shows that the important characteristic "saturation" is fast.
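Although the pliant and omega CDF formulas are not reproduced above, the saturation equation F(t0 + d) = 1 − d can be solved by bisection for any concrete sigmoid. The sketch below uses a logistic CDF as a stand-in (our choice); only the function F changes for other distributions.

```python
# Solve the saturation equation F(t0 + d) = 1 - d for the Hausdorff
# distance d by bisection. A logistic CDF stands in for the pliant/omega
# CDF, whose formulas are not reproduced in the text above.
import math

def hausdorff_d(F, t0: float, tol: float = 1e-12) -> float:
    """Root of g(d) = F(t0 + d) - (1 - d) on (0, 1); g is increasing."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(t0 + mid) - (1.0 - mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k = 10.0                                   # steepness of the stand-in CDF
logistic = lambda t: 1.0 / (1.0 + math.exp(-k * t))
d = hausdorff_d(logistic, t0=0.0)
print(f"d = {d:.6f}")                      # ~0.163 for k = 10
print(f"check: F(t0+d) = {logistic(d):.6f}, 1-d = {1 - d:.6f}")
```

The value agrees with the brute-force graph computation sketched in the Introduction: the steeper the CDF, the smaller d, which is exactly the "saturation" characteristic studied here.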
Conclusions

The new four-parameter pliant probability distribution function can be considered a generalization of the epsilon and the omega distributions. Moreover, it can be viewed as an alternative to some classical probability distributions such as the Weibull, Exponential, Logistic and Standard Normal distributions. The versatility of this probability function underlies its application in different fields of science and modeling problems. The main task of this work is the approximation of the Heaviside function by the pliant CDF with respect to the Hausdorff metric. Besides, we present an investigation of the omega probability distribution. In this article we prove upper and lower estimates for the Hausdorff approximation, which in practice can be used as an additional criterion in the exploration of the characteristic "saturation". For the purposes of this work, a simple dynamic software module has been developed, and some numerical examples are presented. An example with real data on operating hours between successive failure times of air conditioning systems on Boeing airplanes is also considered.
On a model of random cycles

We consider a model of random permutations of the sites of the cubic lattice. Permutations are weighted so that sites are preferably sent onto neighbors. We present numerical evidence for the occurrence of a transition to a phase with infinite, macroscopic cycles.

Introduction

Geometric representations of systems of statistical physics have a long history, going back to the treatment of the Ising model by Peierls [8]. The Feynman-Kac formula provides such a representation for quantum models. It was originally introduced for the Bose gas [4], where the symmetric nature of the particles leads to the occurrence of random permutations. Sütő showed in the ideal gas that Bose-Einstein condensation occurs if and only if infinite cycles are present [10,11]. However, this relation does not seem to be always true in interacting systems [13]. Spin models also have representations where correlations are represented by loops, see e.g. [1,3,12]. The behavior of systems of interacting particles is notoriously difficult. On the other hand, the presence of infinite cycles should not depend closely on the microscopic details of the model. Results about simpler models of cycles could prove useful, especially for the understanding of the critical behavior. The purpose of the present article is to discuss such a model.

An appropriate model of random permutations must involve the spatial nature of the original physical system. Namely, particles are spread over a certain domain, and distant particles are not directly correlated. The behavior of the model should also depend on the dimension of the space. Recall that Bose-Einstein condensation takes place in three dimensions, but not in one or two. It seems therefore natural to consider permutations $\mathbb{Z}^d\to\mathbb{Z}^d$ instead of $\mathbb{N}\to\mathbb{N}$. For $\Lambda\subset\mathbb{Z}^d$ finite, we will take the probability for a permutation $\pi:\Lambda\to\Lambda$ to be proportional to
$$\prod_{x\in\Lambda}e^{-\alpha|x-\pi(x)|^2}.$$
The motivation for these Gaussian weights comes from the Wiener measure for bosonic trajectories in the Feynman-Kac representation of the Bose gas. The parameter $\alpha$ is proportional to the temperature of the system (not to the inverse temperature, as is usual in statistical mechanics!). This model was progressively introduced in [4,6,7,5]. There should be no long cycles when $\alpha$ is large, i.e. when sites are heavily discouraged from jumping to a neighbor. Cycles should increase in size when $\alpha$ decreases. The main question is whether a transition occurs at some value $\alpha_c>0$, below which a fraction of the sites find themselves in infinitely long cycles. We present numerical evidence that points towards the occurrence of infinite cycles in three dimensions. A fraction of sites with positive density belong to infinite cycles.

A natural question is how many large cycles are found in a typical permutation on a cube of side $L$. One can argue that cycles are similar to a closed random walk, which has Hausdorff dimension two, so their number should grow like $L^3/L^2=L$. But one can also argue that infinite cycles represent correlations in the Bose condensate: two particles are correlated if they belong to the same cycle. In this case, there should be just one macroscopic cycle. The surprise is that neither conclusion above is correct: as it turns out, infinite cycles are macroscopic, i.e. they involve a positive fraction of the sites, and their number fluctuates. This behavior was observed in the ideal Bose gas [11].
The model of random permutations is introduced in Section 2, where it is also shown that there are no infinite cycles at high temperature. A detailed probabilistic setting is described in Section 3. Our numerical results are presented in Section 4. They show a surprisingly close relationship between the random cycle model and the ideal Bose gas. The systems do not only share similar features qualitatively, but also quantitatively!

The model

As is usual in statistical mechanics, we define the model first in a bounded domain, and then consider an appropriate thermodynamic limit. Let $\Lambda\subset\mathbb{Z}^d$ be a large but finite cubic box centered at the origin, and let $B_\Lambda$ denote the set of permutations on $\Lambda$ (i.e. bijections $\Lambda\to\Lambda$). The probability for $\pi\in B_\Lambda$ is given by
$$P_\Lambda(\pi)=\frac{1}{Z(\Lambda)}\prod_{x\in\Lambda}e^{-\alpha|x-\pi(x)|^2}. \qquad(2.1)$$
Here, the parameter $\alpha$ represents the temperature, $|\cdot|$ denotes the Euclidean distance in $\mathbb{Z}^d$, and $Z(\Lambda)$ is the partition function
$$Z(\Lambda)=\sum_{\pi\in B_\Lambda}\prod_{x\in\Lambda}e^{-\alpha|x-\pi(x)|^2}. \qquad(2.2)$$
A cycle $\gamma$ of length $k$ is a $k$-tuple of distinct sites $(x_1,\dots,x_k)$. Given a permutation $\pi$, we let $\gamma_0$ denote the cycle that contains the origin, and $\ell_0=\ell_0(\pi)$ its length. That is,
$$\gamma_0=\bigl(0,\pi(0),\pi^2(0),\dots,\pi^{\ell_0-1}(0)\bigr). \qquad(2.3)$$
Notice that $1\le\ell_0\le|\Lambda|$. The interesting phenomenon is the possible occurrence of infinite cycles, which may occur in the thermodynamic limit. This motivates us to introduce the function $\varphi(\alpha)$ that represents the probability that the origin belongs to an infinite cycle. First, we define
$$P_\Lambda(\ell_0=k)=P_\Lambda\bigl(\{\pi\in B_\Lambda:\ell_0(\pi)=k\}\bigr) \qquad(2.4)$$
and
$$P(\ell_0=k)=\lim_{\Lambda\nearrow\mathbb{Z}^d}P_\Lambda(\ell_0=k). \qquad(2.5)$$
The existence of the thermodynamic limit is guaranteed by Cantor's diagonal argument: there exists a subsequence $\Lambda_m$ of increasing cubes such that $P_{\Lambda_m}(\ell_0=k)$ converges simultaneously for all $k$. In this paper, the notation $\Lambda\nearrow\mathbb{Z}^d$ always refers to this specific subsequence. It is clear that $\sum_k P_\Lambda(\ell_0=k)$ is equal to one for each finite $\Lambda$. But it may be strictly less than one in the limit $\Lambda\nearrow\mathbb{Z}^d$ (this is formalized by Fatou's lemma in analysis). The probability $\varphi(\alpha)$ that the origin belongs to an infinite cycle is then defined by
$$\varphi(\alpha)=1-\sum_{k\ge1}P(\ell_0=k). \qquad(2.6)$$
Let $\alpha_c$ denote the critical temperature,
$$\alpha_c=\sup\{\alpha\ge0:\varphi(\alpha)>0\}. \qquad(2.7)$$
We expect that $\varphi(\alpha)$ is monotone decreasing in $\alpha$ (and strictly monotone decreasing for $\alpha<\alpha_c$), but we are unable to prove it. If one chooses $\alpha=0$ in (2.1), one easily checks that $P_\Lambda(\ell_0=k)=\frac{1}{|\Lambda|}$ for all $k$; then $P(\ell_0=k)=0$, and $\varphi(0)=1$. It is easy to show the absence of infinite cycles when $\alpha$ is large enough. The following theorem implies that $\alpha_c<\infty$ for any dimension. We can define a thermodynamic potential by
$$f(\alpha)=\lim_{\Lambda\nearrow\mathbb{Z}^d}\frac{1}{|\Lambda|}\log Z(\Lambda). \qquad(2.8)$$
Here, we can take the limit along any sequence of boxes of increasing sizes, as can be established by a standard subadditive argument. The function $f(\alpha)$ is convex, since $\frac{\partial^2}{\partial\alpha^2}\log Z_\Lambda$ is given by the expectation of positive fluctuations. We conjecture that $f(\alpha)$ is analytic for all $\alpha$, except for $\alpha=\alpha_c$.
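The numerical evidence reported later comes from simulating the measure (2.1). The text above does not spell out the sampling algorithm, so the following is only one standard possibility: a Metropolis chain on $B_\Lambda$ with transposition updates $\pi(x)\leftrightarrow\pi(y)$, which preserve bijectivity and target (2.1). All parameters here are illustrative.

```python
# Minimal Metropolis sampler for the finite-volume measure (2.1), using
# transposition updates pi(x) <-> pi(y). Illustrative sketch; moves,
# sweep counts and box size are our own choices, not the paper's.
import itertools, math, random

L, d, alpha = 8, 3, 0.5
sites = list(itertools.product(range(L), repeat=d))   # Lambda = {0..L-1}^3
pi = {x: x for x in sites}                            # start at identity

def dist2(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))    # |x - y|^2

def sweep():
    for _ in range(len(sites)):
        x, y = random.sample(sites, 2)
        # energy change if the images of x and y are swapped
        dE = (dist2(x, pi[y]) + dist2(y, pi[x])
              - dist2(x, pi[x]) - dist2(y, pi[y]))
        if dE <= 0 or random.random() < math.exp(-alpha * dE):
            pi[x], pi[y] = pi[y], pi[x]

def cycle_length_of_origin():
    """Follow pi from the origin until the cycle closes: this is ell_0."""
    start, z, n = (0,) * d, (0,) * d, 0
    while True:
        z, n = pi[z], n + 1
        if z == start:
            return n

for _ in range(200):                                  # equilibration sweeps
    sweep()
print("ell_0 =", cycle_length_of_origin())
```

Histogramming `cycle_length_of_origin()` over many sweeps estimates $P_\Lambda(\ell_0=k)$, and hence (for growing boxes) the quantity $\varphi(\alpha)$ defined in (2.6).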
We redefine B_Λ so that it is a subset of B, by setting

B_Λ = { π ∈ B : π(x) = x for all x ∉ Λ }.   (3.1)

Let B_{xy} be the set of permutations such that x is sent onto y,

B_{xy} = { π ∈ B : π(x) = y }.   (3.2)

We let ℬ denote the σ-algebra generated by {B_{xy}}_{x,y∈Z^d}. For any n and any x_1, ..., x_n, y_1, ..., y_n ∈ Z^d, the probability of the set B_{x_1 y_1} ∩ ... ∩ B_{x_n y_n} is defined by

p(B_{x_1 y_1} ∩ ... ∩ B_{x_n y_n}) = lim_{Λ↗Z^d} Z(Λ)^{-1} Σ_{π∈B_Λ : π(x_i) = y_i for all i} ∏_{x∈Λ} e^{-α ξ(x, π(x))}.   (3.4)

Here, ξ(x, y) is a nonnegative, symmetric function on Z^d × Z^d. An interesting example is ξ(x, y) = |x - y|². The normalization Z(Λ) is given by

Z(Λ) = Σ_{π∈B_Λ} ∏_{x∈Λ} e^{-α ξ(x, π(x))}.

The thermodynamic limit Λ ↗ Z^d in (3.4) exists at least on a subsequence of increasing cubes, thanks again to Cantor's diagonal process. We now introduce the event that the origin belongs to an infinite cycle. First, consider

B_0^{(k)} = ∪_{x_1, ..., x_{k-1}} ( B_{0 x_1} ∩ B_{x_1 x_2} ∩ ... ∩ B_{x_{k-1} 0} ),

which represents the event that the origin belongs to a cycle of length k. Next, let

B_0^{(∞)} = B \ ∪_{k≥1} B_0^{(k)}

be the event for the origin to belong to a cycle of infinite length. It is clear that B_0^{(∞)} belongs to the σ-algebra ℬ.

The proof of the existence of a measure is not trivial, and we need the following assumption: for any x ∈ Z^d, we suppose that

Σ_{y∈Z^d} p(B_{xy}) = 1.   (3.8)

This is equivalent to another condition, (3.9), that is more technical but also more explicit: roughly, the probability that the site x jumps farther than a distance n must vanish as n → ∞, uniformly in the thermodynamic limit. This condition means that sites do not jump straight to infinity in one step. Notice that it fails to be true when α = 0. It trivially holds if ξ(x, y) = ∞ when |x - y| is larger than some cutoff distance. It can also be established for ξ(x, y) = |x - y|² when α is large, but we cannot prove it for arbitrary α > 0, although it is certainly true.

Theorem 3.1. Under the assumption (3.8), the set function p extends uniquely to a probability measure on the σ-algebra ℬ, and p(B_0^{(∞)}) is equal to the probability ϕ(α) of infinite cycles defined in (2.6).

This theorem is proved at the end of Section 3.2. The condition (3.8) is necessary. Otherwise, there exists x ∈ Z^d such that Σ_y p(B_{xy}) = 1 - ε with ε > 0. Consider then the decreasing sequence of sets (B_n) with B_n = ∪_{y : |y-x| > n} B_{xy}. Then ∩_n B_n = ∅, but p(B_n) ≥ ε for all n, so p(B_n) does not go to 0 and p cannot be a probability measure. Notice that Theorem 2.1, in Section 2, extends straightforwardly to general ξ(x, y). It follows that p(B_0^{(∞)}) = 0 when α is large enough.

3.2. Construction of the measure. Let us introduce the algebra ℬ′ generated by the sets B_{xy}. It is not hard to verify that the set function defined in (3.4) extends uniquely to a finitely additive measure on ℬ′. We need to prove that it is σ-additive within the algebra.

Lemma 3.2. Suppose that (3.8) holds. If (B_n) is a decreasing sequence in ℬ′ with ∩_n B_n = ∅, then p(B_n) → 0.

Proof. We show the contrapositive, namely that if (B_n) is a decreasing sequence in ℬ′ such that p(B_n) > ε > 0 for any n, then ∩_n B_n is not empty. Given π ∈ B and Λ ⊂ Z^d, we denote by A(π; Λ) the set of permutations whose restriction to Λ coincides with that of π. Precisely,

A(π; Λ) = { π′ ∈ B : π′(x) = π(x) for all x ∈ Λ }.   (3.11)

Since B_n ∈ ℬ′, for each n there exists a finite set Λ_n such that B_n is given by a union

B_n = ∪_i A(π_{n,i}; Λ_n).   (3.12)

The set B_n is uncountable, but there are only countably many distinct sets A(π; Λ_n). Thus Eq. (3.12) is really a countable union of disjoint sets. We can suppose that (Λ_n) is an increasing sequence of cubes centered at the origin. Let k_n be some integer, and C_n ⊂ B_n be the set of permutations where each site of Λ_n is sent at distance less than k_n:

C_n = { π ∈ B_n : |π(x) - x| < k_n for all x ∈ Λ_n }.   (3.13)

We have

B_n \ C_n ⊂ ∪_{x∈Λ_n} ∪_{y : |y-x| ≥ k_n} B_{xy}.   (3.14)

It follows that

p(B_n \ C_n) ≤ Σ_{x∈Λ_n} Σ_{y : |y-x| ≥ k_n} p(B_{xy}).   (3.15)

From the assumption (3.9) we can choose k_n large enough so that p(B_n \ C_n) ≤ ε/2. Then (C_n) is a decreasing sequence such that p(C_n) ≥ ε/2 for all n. In a fashion similar to (3.12), we can decompose C_n into the disjoint union

C_n = ∪_{i=1}^{r_n} A(π_{n,i}; Λ_n)

for some suitable permutations π_{n,i}. The number r_n is now finite. We have Λ_n ⊂ Λ_{n+1}. Two sets A(π; Λ_n) and A(π′; Λ_{n+1}) are either disjoint, or A(π; Λ_n) ⊃ A(π′; Λ_{n+1}) if the restrictions of π and π′ to Λ_n coincide. We can define a tree with vertices (n, i), n = 1, 2, ... and 1 ≤ i ≤ r_n, and with an edge between (n, i) and (n + 1, j) whenever A(π_{n,i}; Λ_n) ⊃ A(π_{n+1,j}; Λ_{n+1}). We also connect the vertices (1, i), 1 ≤ i ≤ r_1, to the root 0.
There are r_n vertices at distance n from the root, so all incidence numbers are finite. For each vertex (n, i), let q(n, i) denote the length of the longest path descending from (n, i). For each n it is infinite for at least one i. In addition, each (n, i) with q(n, i) = ∞ is connected to at least one vertex (n + 1, j) with q(n + 1, j) = ∞. These properties hold true because incidence numbers are finite, i.e. because we defined the tree starting with the sequence (C_n) instead of (B_n). We can select an infinite path (n, j_n) with q(n, j_n) = ∞ for all n. Since any x belongs to some Λ_n, we can define a map π : Z^d → Z^d by setting

π(x) = π_{n, j_n}(x).   (3.18)

Observing that π is a permutation, and that π ∈ C_n for all n, we see that ∩_n C_n is not empty.

Proof of Theorem 3.1. It follows from Lemma 3.2 that p is a σ-additive premeasure on ℬ′. It has a unique extension to a measure on ℬ by the Carathéodory-Fréchet extension theorem. In order to show that p(B_0^{(∞)}) is equal to the probability of infinite cycles as defined in (2.6), let us observe that B_0^{(∞)} = B \ ∪_{k≥1} B_0^{(k)} and that p(B_0^{(k)}) = P(ℓ_0 = k). The result then follows from the σ-additivity of p.

4. Numerical results

4.1. Description of the method. We have performed intensive Monte Carlo simulations of the random cycle model with Gaussian weights for the jumps. The dynamics is pure Metropolis; a change in the permutation configuration is accepted or rejected according to the change in "energy". A "step" of our code consists in sweeping the sites in lexicographic order. For each site x, we randomly choose a site y in a window centered at x with size depending on the temperature α. Given x, y, we consider changing the permutation π into π′, where π′ is defined as follows (see Fig. 1):

π′(x) = y,  π′(π^{-1}(y)) = π(x),   (4.1)

and π′(z) = π(z) for z ≠ x, π^{-1}(y). The energy difference between the new and old permutations is ΔH = H_Λ(π′) - H_Λ(π), where H_Λ(π) = Σ_{x∈Λ} |x - π(x)|² denotes the "Hamiltonian" of the model. We have

ΔH = |x - y|² + |π^{-1}(y) - π(x)|² - |x - π(x)|² - |π^{-1}(y) - y|².   (4.2)

Then according to the Metropolis prescription, the change is accepted with probability

min( 1, e^{-αΔH} ).   (4.3)

The initial configuration is usually the identity permutation π(x) ≡ x, but we have also considered initial random configurations chosen over all permutations Λ → Λ with uniform distribution. It turns out that the system thermalizes extremely well, irrespective of the initial condition. Measurements are taken after suitable thermalization. All numerical computations were performed on a personal computer and they can be reproduced rather easily.

Most of our measurements are for the random variable ρ_k, which represents the fraction of sites that belong to cycles of length less than or equal to k. Precisely, ρ_k is defined by

ρ_k(π) = (1/|Λ|) #{ x ∈ Λ : ℓ_x(π) ≤ k },

where ℓ_x is the length of the cycle that contains x. We have 0 ≤ ρ_k(π) ≤ 1 and 1 ≤ k ≤ |Λ|. It is related to ϕ(α) if we assume that the system is essentially translation invariant. We expect that

⟨ρ_N⟩ ≈ 1 - ϕ(α)   (4.4)

with N such that 1 ≪ N ≪ |Λ|.

4.2. Fixed temperature, different dimensions. We have first fixed α = 0.2 and considered cubic boxes in dimensions d = 1, d = 2, and d = 3. Fig. 2 depicts the graphs of the expectation ⟨ρ_k⟩ of the fraction of sites in cycles of length less than k. The horizontal axis in Fig. 2 is k/|Λ|; it takes values between 0 and 1. Fig. 2(a) shows that almost all sites belong to cycles of very small length compared to the volume of the system. The situation is different in d = 2 and d = 3. In d = 2, around 25% of the sites belong to small cycles, and 75% belong to macroscopic cycles, i.e. to cycles whose length is a fraction of the volume.
We expect that this density is equal to ϕ(α) defined in (2.6). The same holds for d = 3, with respective densities 3% and 97%. Fig. 3 shows the situation at the higher temperature α = 2, where there are clearly no macroscopic cycles. From this preliminary exploration, it seems that macroscopic cycles are present in dimensions greater than or equal to 2, instead of 3!

4.3. Two dimensions. The main numerical result for d = 2 is shown in Fig. 4, where the expectation ⟨ρ_k⟩ is plotted for α = 0.1 and different lattice sizes. It is manifest that finite-size effects are very important. The density of sites in small cycles grows from 10% for L = 50 to 30% for L = 2000. We expect the curves to continue their progression upwards as the size increases, until no macroscopic cycle is left in the limit L → ∞. A possible explanation involves random walks. The cycle containing the origin is a self-avoiding closed random walk, although its probability differs from that of random walks because of the presence of all the other cycles. Nonetheless, the analogy with random walks is worth pursuing. Random walks are recurrent in d = 2. The probability f_n for the simple random walk to return to the origin for the first time after n steps satisfies [2]

Σ_{n≥1} f_n = 1,  f_n ∼ (n log² n)^{-1}.   (4.5)

The "macroscopic cycles" in d = 2 are those that are big with respect to the size of the domain, i.e. whose root mean square displacement √n is larger than the size L of the domain. Let ρ(L) denote the density of sites in long cycles. The condition √n ∼ L implies that the probability for the origin to belong to a long cycle is roughly

ρ(L) ≈ Σ_{n ≳ L²} f_n ∼ 1/log L.   (4.6)

This vanishes in the limit L → ∞, but only logarithmically, which accounts for the strong finite-size effects. In first approximation the contribution of long cycles to ρ_k is proportional to k, as observed in Fig. 4. To summarize this section about two dimensions, we have understood that there are no really macroscopic cycles, and that the evidence suggested in Fig. 2(b) comes from strong finite-size effects.

4.4. Three dimensions. Fig. 5 shows ⟨ρ_k⟩ for α = 0.8 and different sizes of the domain. In contrast to Fig. 4, there are no noticeable finite-size effects. It is thus clear that macroscopic cycles are present in three dimensions, and that numerical simulations work remarkably well. We have computed the density of sites in macroscopic cycles as a function of the temperature, see Fig. 6. Recall that it should be equal to ϕ(α). We find that ϕ(α) seems to be continuous, and that the critical temperature is α_c ≈ 1.7. It is instructive to compare it with the critical temperature for the Bose-Einstein condensation of the ideal gas, which is known exactly. The difference between our model and the Feynman-Kac representation of the ideal gas is that our particles are frozen on the sites of the cubic lattice. But the comparison is otherwise possible. The particle density in our system is equal to one, and the corresponding critical temperature of the ideal gas is

α_c^{BEC} = π ζ(3/2)^{-2/3} ≈ 1.656.   (4.8)

It is tantalizingly close to the critical temperature in the random cycle model! In addition, the density of the condensate in the Bose gas is given by

ρ_0^{BEC}(α) = 1 - (α/α_c^{BEC})^{3/2},  α ≤ α_c^{BEC}.   (4.9)

It is plotted in Fig. 6 alongside ϕ(α). ϕ(α) and ρ_0^{BEC}(α) appear to be close, but it seems that their differences cannot be accounted for by numerical errors or by finite-size effects.

All numerical results so far were for the average ⟨ρ_k⟩ over many permutations. We now discuss the distribution of macroscopic cycles in a typical permutation. Fig. 7 shows ρ_k for α = 0.5 and L = 30. We plot the value of ρ_k(π) for the permutation π after τ steps, and its average ⟨ρ_k⟩ over all permutations before τ steps. The latter converges as τ → ∞, as expected.
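A minimal sketch of the simulation described in Section 4.1 and of the observable ρ_k (our reconstruction, not the authors' code): the swap move (4.1) with acceptance rule (4.3), free boundary conditions, and a simplified proposal that draws y from the whole box rather than a temperature-dependent window.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cube(L):
    """Sites of an L x L x L box, their coordinates, and the identity permutation."""
    coords = np.array([(i, j, k) for i in range(L) for j in range(L) for k in range(L)])
    n = L ** 3
    return coords, np.arange(n), np.arange(n)   # coords, pi, pi_inv

def sweep(pi, pi_inv, coords, alpha):
    """One Metropolis sweep of the swap move pi'(x) = y, pi'(pi^-1(y)) = pi(x)."""
    n = len(pi)
    d2 = lambda a, b: int(((coords[a] - coords[b]) ** 2).sum())
    for x in range(n):                          # lexicographic order by construction
        y = rng.integers(n)                     # simplified proposal: any site
        xb = pi_inv[y]                          # xb = pi^-1(y)
        z = pi[x]
        dH = d2(x, y) + d2(xb, z) - d2(x, z) - d2(xb, y)    # eq. (4.2)
        if dH <= 0 or rng.random() < np.exp(-alpha * dH):   # eq. (4.3)
            pi[x], pi[xb] = y, z                # apply the swap ...
            pi_inv[y], pi_inv[z] = x, xb        # ... and keep the inverse in sync

def rho(pi):
    """rho_k for k = 1..n: fraction of sites in cycles of length <= k."""
    n = len(pi)
    cyc_len = np.zeros(n, dtype=int)
    seen = np.zeros(n, dtype=bool)
    for start in range(n):
        if seen[start]:
            continue
        cycle, site = [], start
        while not seen[site]:
            seen[site] = True
            cycle.append(site)
            site = pi[site]
        cyc_len[cycle] = len(cycle)
    counts = np.bincount(cyc_len, minlength=n + 1)[1:]   # sites in cycles of length exactly k
    return np.cumsum(counts) / n

coords, pi, pi_inv = make_cube(L=8)
for _ in range(200):                            # crude thermalization
    sweep(pi, pi_inv, coords, alpha=0.8)
print(rho(pi)[:20])                             # mass carried by short cycles at this alpha
```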
The jumps in the graph of ρ_k(π) correspond to macroscopic cycles. A jump at k means that a macroscopic cycle of length k, i.e. of density k/|Λ|, is present, and accordingly the height of the jump is given by k/|Λ|. The number of macroscopic cycles (of density larger than ε > 0) seems to fluctuate. As Λ ↗ Z³, the distribution of this number converges to some nontrivial distribution. This behavior was observed by Sütő in the ideal gas [11]. Macroscopic cycles are due to particles in the condensate, for which the underlying distribution of permutations is uniform. Let us tentatively extrapolate this observation to our model. It suggests that the domain Λ splits into two sets, Λ_f and Λ_∞, which correspond to sites in finite and macroscopic cycles, respectively. While these sets are random, their respective volumes are close to (1 - ϕ(α))|Λ| and ϕ(α)|Λ|. Suppose that the distribution of macroscopic cycles in Λ_∞ is the same as if the permutation on Λ_∞ was chosen with uniform probability. The average density of the longest cycle in uniformly distributed random permutations is known, see e.g. [9], and is equal to 0.6243... We report in Table 1 the results of numerical measurements in our model. We find values that seem to be in total agreement. This is extremely surprising, since the jumps in our permutations satisfy certain spatial restrictions; with uniform permutations, sites can go to infinity in but one step.

Conclusion

We have considered a model of random permutations on the cubic lattice. It is a priori a crude approximation of the ideal Bose gas in the Feynman-Kac representation. Surprisingly, its behavior is close to that of the Bose gas both qualitatively and quantitatively. The critical temperatures are very close, and the average densities of the longest cycle coincide. But there seems to be a small difference between the density of sites in macroscopic cycles and the density of the Bose-Einstein condensate. The study of larger systems on powerful computers may shed more light on this issue. The fact that this simple model is so close to the ideal Bose gas suggests using it to simulate interacting systems. A natural generalization is to introduce interactions between permutation jumps. Can this be done so that the critical temperature of the random cycle model is in quantitative agreement with the critical temperature of the interacting Bose gas? Many mathematical aspects need clarifying as well. One would like to know about analytic properties of the thermodynamic potential (2.12). The construction of a probability measure on the space of permutations on Z^d should be completed. The absence of infinite cycles in one and two dimensions should be established, as should certain properties such as the monotone decreasing behavior of ϕ(α) with respect to α.
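As a quick numerical aside (ours, not the paper's): the 0.6243... figure quoted above, the Golomb-Dickman constant for the average density of the longest cycle in a uniform random permutation, is easy to verify by direct sampling.

```python
import numpy as np

rng = np.random.default_rng(1)

def longest_cycle_fraction(n):
    """Length of the longest cycle of a uniform random permutation, divided by n."""
    pi = rng.permutation(n)
    seen = np.zeros(n, dtype=bool)
    longest = 0
    for start in range(n):
        length, site = 0, start
        while not seen[site]:
            seen[site] = True
            site = pi[site]
            length += 1
        longest = max(longest, length)
    return longest / n

est = np.mean([longest_cycle_fraction(10_000) for _ in range(200)])
print(est)   # close to the Golomb-Dickman value 0.6243...
```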
2014-10-01T00:00:00.000Z
2007-03-12T00:00:00.000
{ "year": 2007, "sha1": "5b7310fba7db182b1f7ea84258dfecc4a411cf23", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0703315", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4b84a2a828d5d834a919015a6e9d2ae4e8a6db0d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
235684407
pes2o/s2orc
v3-fos-license
A tool for rapid assessment of wildlife markets in the Asia-Pacific Region for risk of future zoonotic disease outbreaks

Decades of warnings that the trade and consumption of wildlife could result in serious zoonotic pandemics have gone largely unheeded. Now the world is ravaged by COVID-19, with tremendous loss of life, economic and societal disruption, and dire predictions of more destructive and frequent pandemics. There are now calls to tightly regulate and even enact complete wildlife trade bans, while others call for more nuanced approaches, since many rural communities rely on wildlife for sustenance. Given pressures from political and societal drivers and resource limitations on enforcing bans, increased regulation is a more likely outcome than broad bans. But imposition of tight regulations will require monitoring and assessing trade situations for zoonotic risks. We present a tool for relevant stakeholders, including government authorities in the public health and wildlife sectors, to assess wildlife trade situations for risks of potentially serious zoonoses in order to inform policies to tightly regulate and control the trade, much of which is illegal in most countries. The tool is based on available knowledge of different wildlife taxa traded in the Asia-Pacific Region and known to carry highly virulent and transmissible viruses, combined with relative risks associated with different broad categories of market types and trade chains.

Introduction

A growing body of evidence linking wildlife trade and consumption to zoonotic events has prompted conservationists, epidemiologists, and virologists to issue warnings of zoonotic disease outbreaks with pandemic potential if such practices are not halted [1][2][3][4][5][6][7][8][9][10][11]. These warnings have gone largely unheeded. Now, with the COVID-19 pandemic adversely affecting every country in the world, there are renewed calls for urgent controls and even outright bans of the wildlife trade [8,12]. China, arguably the biggest wildlife-consuming and -trading nation, imposed a broad ban on wildlife trade and markets [13]. However, there is also opposition to wildlife trade bans from several quarters, citing restrictions on livelihood opportunities and reduced access to food for local communities who depend on wildlife, and concerns that trade will be driven underground [14][15][16][17][18]. While almost all wildlife trade carries some level of zoonotic risk, some taxonomic groups (e.g., primates, bats, pangolins, civets, and rodents) are high-risk reservoirs of more virulent pathogens. Thus, the trade should be tightly regulated and monitored to prevent the sale of such high-risk species [4]. Particular types of wildlife markets and trade chains can also increase the risk of disease transmission and spread based on: 1) the numbers and types of wildlife taxa being traded, especially the diversity of animals for sale; 2) interactions between wildlife, people, and domestic or peridomestic species; 3) the length of prior and posterior trade chains; 4) the connectedness of the market within the network of markets; 5) stressors on animals in markets; and 6) movement patterns of buyers and traders beyond points of sale [19]. Because even rare zoonotic events associated with the wildlife trade can have catastrophic socio-economic consequences, strategic wildlife trade prohibitions are important to reduce the probability of future trade-related pandemics.
But given the opposition to wildlife trade bans, it is more likely that more nuanced approaches will emerge that balance market risk levels with subsistence hunting and use of wildlife by rural people [20,21]. We present a tool (Appendix A) to assess wildlife markets in the Asia-Pacific region for future risks of zoonotic outbreaks based on the types of disease-risk taxa sold and different trade situations. The tool can guide the region's governments, especially the public health and wildlife sector authorities, to assess the relative risks of serious emerging infectious diseases associated with wildlife trade and inform the development of appropriate policies and regulations to control the wildlife trade. The tool can also be used by other stakeholders, including non-governmental and community-based organizations, to monitor markets for risks associated with wildlife trade.

Methods

The tool is based on available knowledge of: 1) the different wildlife taxa sourced and traded in the Asia-Pacific Region and known to be reservoirs of highly virulent, transmissible viruses; and 2) market types and trade chains. Descriptions of the tool and the embedded formulae are presented in Appendix B. Zoonotic and wildlife-trade science is an evolving field. With accumulating knowledge about viruses and other pathogens and their primary and intermediate wildlife hosts [22], insights into the role of wildlife trade chains in zoonoses will also improve and can be used to adjust the parameters in the tool. In the meantime, because of the urgency to assess wildlife markets and prevent another pandemic, this tool can be used invoking the precautionary principle.

Market trade risk

We identified 11 generalized trade situations in the Asia-Pacific region (Table 1) and assessed them for risk using three variables: Transmission Risk (TR), Spread Potential (SP), and Zoonotic Virus Risk (ZVR) (see Appendices A, B). These variables adequately classify risks of potential zoonoses based on market size, crowding of wildlife that creates stressful situations, hygiene conditions, number and turnover of people through the market, distance buyers may travel after visiting a market, and points along market trade chains that could allow viruses to accumulate and amplify the potential for zoonoses. Each market type was given a qualitative score from 1 to 10, representing the combined contributions of the three variables (Appendix A). Because of the importance of clearly conveying the level of uncertainty when assigning relative risk attributes to different features or processes in a wildlife trade chain, we applied levels of uncertainty to our estimates of TR, SP, and ZVR through independent scoring of the variables by regional experts. The scores were used to obtain a combined 'Market Risk' score, where <1 = Very Low Risk; 1-2 = Low Risk; 3-5 = Medium Risk; 6-8 = High Risk; and 9-10 = Very High Risk (Appendices A, B). Improvements in trade hygiene, regulation, sale, and butchering practices could diminish risk to some extent, but the socio-economic and health consequences of zoonotic disease pandemics associated with trading in high disease-risk wildlife argue for a broader set of actions: no matter how clean the cages and knives are, dangerous viruses can spill over to humans in trade chains that include high disease-risk taxa.
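Since the scoring formulae themselves live in Appendix B (not reproduced here), the following sketch is only an illustration of the general shape of the calculation: three expert-scored variables combined into a 1-10 score and mapped to the risk bands quoted above. The simple mean is our placeholder assumption, not the appendix formula.

```python
def market_risk_category(tr: float, sp: float, zvr: float) -> str:
    """Combine Transmission Risk, Spread Potential and Zoonotic Virus Risk
    scores into the banded 'Market Risk' category described in the text.
    The averaging rule is a placeholder for the Appendix B formula."""
    score = (tr + sp + zvr) / 3.0
    if score < 1:
        return "Very Low Risk"
    if score <= 2:
        return "Low Risk"
    if score <= 5:
        return "Medium Risk"
    if score <= 8:
        return "High Risk"
    return "Very High Risk"

print(market_risk_category(tr=8, sp=9, zvr=9))   # -> Very High Risk
```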
High disease-risk taxa

The assessment and scoring of taxonomic groups commonly traded in the Asia-Pacific region for hosting zoonotic viruses with epidemic or pandemic potential [19], and the risk categorizations of the taxonomic groups, are presented in Table 2 (details in Appendix A). Some taxonomic groups, such as rodents, are highly diverse, and a group could include species that are of lower risk than others [23]. However, given the severity of the economic, health, and social costs and consequences of epidemics and pandemics, and current knowledge gaps about pathogen host species [24][25][26], we employ a precautionary principle and consider entire taxonomic groups to be high disease risk until more information is available. We hope that such an approach will encourage and catalyze epidemiological and zoonoses research to de-list or up-list species as relevant and appropriate. As the status of species changes, the model can be adjusted. We use simple, transparent, Boolean logic formulae to enable these adjustments.

Evaluating risk of specific markets or points of sale: traded taxa risk

The Taxonomic Risk categories are combined with a qualitative index based on the numbers of individuals from the respective taxonomic categories found in a market, the premise being that numbers of individuals can amplify pathogen prevalence and risk of transmission (Appendices A, B). However, even small numbers of high disease-risk taxa can pose greater risks of transmission and spillover. For example, bats are known to carry many serious pathogens [27][28][29][30]. Thus, even a small number (1-3) of Pteropid bats in a market should pose at least a medium risk. When using the tool, the numbers of wild animals for sale in a specific market should be estimated, or counted if few, and the data entered in the relevant column (Appendix B). These numbers are converted into qualitative threat categories. Information on traded taxa and the numbers of animals of each taxon can be derived from snapshot surveys, estimates from several site visits, or expert assessments. The estimates can include live and dead animals and even parts, if the parts can be used to estimate numbers with some reliability (e.g., numbers of heads). Finally, the Taxonomic Risk category and the Number-based Category are combined into a Cumulative Risk Factor using the matrix shown in Fig. 1.

Combining market and taxon risks

Market and taxon risk assessments are combined in a risk matrix of traded taxa (Y axis) and markets (X axis) (Appendix A; Fig. 2) that provides an assessment of the disease risk associated with specific wildlife markets. Risk levels for a given location may vary over time as different combinations and numbers of taxa are traded, and the tool can be used to monitor these changes, including changes resulting from better regulation of markets for high disease-risk taxa.

Ecohealth and wildlife trade

Loss, fragmentation, and degradation of tropical forests are significant drivers of emerging infectious diseases [31][32][33][34][35]. Forest clearing and settlement exposes loggers, hunters, and settlers to novel zoonotic pathogens [4]. Wildlife sourced for the commercial trade can also introduce novel pathogens further afield [3,4,36]. The decline or loss of some species, especially top predators, degrades ecosystems and creates conditions that elevate risks of zoonotic events, albeit indirectly [4,37].
Therefore, the trade in wildlife species that play important roles in structuring ecosystems and maintaining ecosystem diversity and health should also be prohibited, and we have included such taxa, e.g. Felidae and Canidae, in the list of taxa to be assessed for market risk.

Testing the tool

We tested the tool using survey data from 36 wildlife markets in Lao PDR collected by Greatorex et al. [19]. These included permanent wildlife markets in larger cities (N = 5), wildlife markets in smaller towns (N = 12) and villages (N = 10), and roadside stalls (N = 9). We also used recent data from 10 wildlife markets in Laos and eight sales venues in northern Myanmar (Table 3, Appendix C).

Results

For the Greatorex et al. [19] data analysis, each market type had days where disease risk was estimated as very low risk (VLR), low risk (LR), medium risk (MR), high risk (HR), or very high risk (VHR) (Table 3). The smaller town markets consistently had VHR days with little variation, likely because these markets are concentration points for high disease-risk taxa brought in from surrounding villages. Markets in larger urban centers generally had MR levels or above, with some consistently estimated as VHR, driven in part by high numbers of bats, wild birds, rodents, viverrids, and other high disease-risk taxa. The risk levels of village markets and roadside sales showed considerable daily variation, depending on the presence or absence of high disease-risk taxa. These markets often had high disease-risk species, such as bats, not commonly seen in the larger urban markets. The taxa for sale on any given day in these markets depended on what hunters brought in. Thus, one day there may be only squirrels and another day bats and civets, causing disease risk to shift from LR to VHR. A precautionary approach would argue that disease risk levels for markets should be assessed based on the highest risk levels, and these rural markets regularly had VHR days. Some village markets, however, were consistently VLR, perhaps due to wildlife depletion in the surrounding areas. In Myanmar, the single warehouse sale (a trader's house) was VHR because of the presence of langur and pangolin, while the restaurant sales at two venues were MR and VLR, and a town market was VLR because only reptiles were observed for sale on that day. Of the four roadside stall sites, three were VLR and one was HR. Some markets predominantly had dried animal parts, but these included endangered species, such as tiger, gaur, and elephant skin or ivory. While the trade in these species is illegal and they are of very high conservation value, the risk of zoonosis was low. Some clear trends from the test of the tool are that smaller town markets consistently have very high disease risks, and village and roadside sale venues regularly presented high disease-risk situations. For all market types, there were high disease-risk situations depending on the numbers of the taxa being traded that day. Thus, for wildlife trade situations in Southeast Asia there are regularly very high to high disease-risk situations, indicating that almost all unregulated wildlife trade has a disease risk level that requires tight regulation and monitoring.

COVID-19 response: calls for bans and systemic policies

The COVID-19 pandemic has elicited serious reevaluations of the consequences of the wildlife trade. China imposed broad bans on terrestrial vertebrate wildlife markets and consumption [21].
Vietnam, another significant market for consumption and a conduit for wildlife to China, followed suit by tightening laws pertaining to the trade and consumption of wildlife [20], but stopped short of a ban [38]. The effectiveness of these actions, however, remains uncertain. There are anecdotal reports of some wildlife markets reopening or continuing to operate in China, and monitoring the vast numbers of markets can be challenging for authorities. Much of the wildlife traded and consumed in China is sourced from other countries in Asia through trade chains. Thus, neighbouring countries should also take steps to prohibit or tightly regulate the wildlife trade and to monitor markets, from rural sources through to urban markets and along the international trade chains, for high disease-risk wildlife. The trends from market data in Laos and Myanmar, two regional source countries for China but also consumption countries, indicate that even small, rural markets and roadside sale venues rank high for disease risk. Millions of people in Asia still practice subsistence hunting for local consumption and trade [39]. However, regulations that permit such practices should be evaluated with these zoonotic risks in mind. Pandemics from emerging zoonotic pathogens are expected to become more frequent if current rates of wildlife exploitation, forest encroachment, and environmental degradation continue [31,40]. Thus, there is an urgent need for systemic policies on wildlife trade, informed by science, to prevent such outcomes [36]. Our tool can help to assess wildlife trade situations for zoonosis risk, from village trade stalls to urban markets, inform policy, and enable monitoring of markets for relative zoonotic risks. Public health authorities can then make decisions on whether to prohibit or permit trade depending on the types and numbers of taxa being sold in these market situations. Because the tool provides relative risk levels across various market types that could be links along a trade chain, health and enforcement authorities can also identify strategic points along the trade chain where relevant actions can be taken for effective outcomes. For instance, if large urban markets are being supplied with high zoonotic disease-risk taxa, such as primates or bats, from rural markets, it may be more strategic to close the rural markets that sell these species or to work with local communities to apprise them of the risks of hunting and trading in these species. Even at national scales, prohibitions and tighter regulations in wildlife markets would call for close monitoring of markets, including where some trade of low disease-risk wildlife would be permitted. However, it is unlikely that governments would have adequate resources to monitor all markets, especially in rural areas [21]. Thus, it is important that market monitoring engages non-governmental stakeholders, including members of the public [21]. These monitors should have ready access to information and a practical tool that is easy to use yet provides a robust assessment of market conditions to detect and report illegal cases. The tool we present here meets these criteria. We acknowledge that it can be improved and refined, but such improvement can evolve from its use. The growing recognition of risk levels in other types of trade situations, such as wildlife farming and exotic pet markets, may require further adjustments to the tool.
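To make the two matrix steps described earlier concrete (the Cumulative Risk Factor of Fig. 1 and the final market-by-taxa combination of Fig. 2), here is a schematic implementation. The actual matrix entries are in Appendix A; the rules below, such as 1-3 individuals of a high disease-risk taxon already rating Medium, and the worse of the two axes dominating, are our reading of the precautionary examples given in the text, not the published matrices.

```python
LEVELS = ["VLR", "LR", "MR", "HR", "VHR"]
RANK = {lvl: i for i, lvl in enumerate(LEVELS)}

def number_category(n_animals: int, high_risk_taxon: bool) -> str:
    """Convert an animal count into a qualitative category (placeholder cutoffs)."""
    if n_animals == 0:
        return "VLR"
    if high_risk_taxon:
        return "MR" if n_animals <= 3 else "VHR"   # precautionary floor, cf. Pteropid bats
    return "LR" if n_animals <= 10 else "MR"

def cumulative_risk(taxon_risk: str, n_animals: int, high_risk_taxon: bool = True) -> str:
    """Fig. 1 step: combine the Taxonomic Risk category with the number-based category."""
    num_cat = number_category(n_animals, high_risk_taxon)
    return max(taxon_risk, num_cat, key=RANK.get)

def overall_market_risk(market_risk: str, traded_taxa_risk: str) -> str:
    """Fig. 2 step: assume the worse of the two axes dominates."""
    return max(market_risk, traded_taxa_risk, key=RANK.get)

taxa_risk = cumulative_risk(taxon_risk="VHR", n_animals=2)   # e.g. two bats -> VHR
print(overall_market_risk("MR", taxa_risk))                  # medium-risk venue selling bats -> VHR
```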
One Health approach for a holistic strategy

One Health is a multisectoral, transdisciplinary approach to health that recognizes the interconnectedness between people, nature, and their shared environment, and ecosystem health is a core component of the approach [36,[41][42][43]. The tool links ecological health with public health in accordance with the One Health concept. Forest fragmentation, degradation, and loss have been associated with emerging infectious zoonoses with the potential to cause epidemics and pandemics [3,4,31,34,36,43,44]. Hunting for local consumption has been a long-time practice among rural communities that live in and around forests. But recently, the practice has intensified and shifted to supplying market demands, creating 'empty forests' across Southeast Asia, bereft of wildlife because of intense hunting pressure [e.g., 45]. Small rural markets may have few wild animals in stock, but they could be the sources for larger markets downstream along wildlife trade chains, especially as roads facilitate access for commercial buyers into remote, rural areas. These purchases may then be consolidated along the trade chains, increasing zoonotic risk. Moreover, thousands of small markets can contribute to the ecological degradation of forests, especially if the markets are sourcing ecologically important species, such as primates, bats, felids, canids, and some Perissodactyla. Thus, rural markets that carry even small numbers of these species should qualify as medium to very high risk. For example, in our classification, even one great ape in a rural market qualifies it as very high risk. As ecological communities are degraded by the removal of predators, populations of high disease-risk species can rise due to ecological release, increasing the risk of dangerous zoonotic events [46][47][48]. Most wild felid and canid populations in Asia's forests are in decline because of hunting pressure and prey and habitat loss. Because of the ecological role of predators in controlling populations of higher disease-risk prey taxa, we rationalize that even small numbers of wild felids and canids in markets should be adequate thresholds for the markets to be considered at least Medium Risk, to capture the eventual ecological and epidemiological fallout from removing these predators from ecosystems [37]. While these taxa do not carry as many zoonotic-potential viruses as higher disease-risk taxa such as rodents, they do carry some high-risk viruses (Appendix A) and can be infected by SARS-CoV-2 [49,50].

Conclusions

We present this tool to assess various wildlife markets and trade chains for potential zoonotic disease emergence events and thereby to inform policy decisions aimed at regulating or closing them based on objective analyses. Overall, the tool was able to discriminate variation among market types, localities, and risks on different days based on our testing with field data, and it can be used to guide decision-making by health and wildlife authorities. We have kept the tool simple and transparent so it can be used by a range of stakeholders. We acknowledge that it is not perfect, but it is based on the best currently available knowledge, with transparent assumptions. We have provided access to the formulae used to assess risks
so they can be refined as new information becomes available and adjusted to various regional, national, or market contexts.

Table 3. Test of the wildlife trade disease-risk tool using field data from wildlife sale venues in Laos, from Greatorex et al. [19] and WWF Laos (2021).

With predictions that human activities are setting the stage for more serious pandemic-proportion zoonotic spillover events [23,24], this tool is timely for decision-making using precautionary principles. We hope it will also catalyze the research necessary to close knowledge gaps for its improvement.

Author statement

All authors contributed equally to conceptualization, methodology and analysis, writing and revision.

Declaration of Competing Interest

The authors declare that they have no conflicts of interest.
2021-06-23T13:13:41.289Z
2021-06-17T00:00:00.000
{ "year": 2021, "sha1": "31bd55c1b7946ba09de5e9bbec7b46eaf64ebfa9", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.onehlt.2021.100279", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a2acacead792f728fe08eedbd9003666a0685c36", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
18874046
pes2o/s2orc
v3-fos-license
Structural and Phylogenetic Analysis of Laccases from Trichoderma: A Bioinformatic Approach

The genus Trichoderma includes species of great biotechnological value, both for their mycoparasitic activities and for their ability to produce extracellular hydrolytic enzymes. Although activity of extracellular laccase has previously been reported in Trichoderma spp., the possible number of isoenzymes is still unknown, as are the structural and functional characteristics of both the genes and the putative proteins. In this study, the system of laccases sensu stricto in the Trichoderma species whose genomes are publicly available was analyzed using bioinformatic tools. The intron/exon structure of the genes and the identification of specific motifs in the amino acid sequences of the proteins generated in silico allow for clear differentiation between extracellular and intracellular enzymes. Phylogenetic analysis suggests that the common ancestor of the genus possessed a functional gene for each one of these enzymes, a characteristic preserved in T. atroviride and T. virens. This analysis also reveals that T. harzianum and T. reesei only retained the intracellular activity, whereas T. asperellum added an extracellular isoenzyme acquired through horizontal gene transfer during the mycoparasitic process. The evolutionary analysis shows that, in general, extracellular laccases are subjected to purifying selection, whereas intracellular laccases show neutral evolution. The data provided by the present study will enable the generation of experimental approaches to better understand the physiological role of laccases in the genus Trichoderma and to increase their biotechnological potential.

Introduction

Laccases (benzenediol:oxygen oxidoreductase, EC 1.10.3.2) are metalloenzymes that belong to the multicopper oxidase (MCO) family. These enzymes catalyze the oxidation of various aromatic substrates with the concomitant reduction of molecular oxygen to water. This redox process is mediated by two centers that contain four atoms of copper in their +2 oxidation state. These copper atoms are classified as T1 (blue copper), T2 or T3 according to their spectroscopic characteristics [1]. Laccases are generally monomeric glycoproteins with molecular weights that range from 60 to 70 kDa, and up to 30% of their molecular weight is made up of carbohydrates [2]. These enzymes are widely distributed in nature, and the physiological functions that they perform depend both on their origin and on their biochemical and structural properties. In fungi, laccase activities have been related to the degradation of lignocellulose material, the production of pigments, sporulation, processes of morphogenesis, phenomena of pathogenesis toward plants and animals [3], the oxidation of antibiotics produced by microorganisms that are antagonists of plant pathogens, and antimicrobial components of plants, such as flavonoids or phytoalexins [4]. This great functional versatility is partly due to the fact that laccases possess low substrate specificity and exhibit a broad range of redox potentials [2]. Because of this flexibility, these enzymes are able to act on ortho- and para-diphenols, methoxy-substituted phenols, aromatic diamines and benzenethiols. Furthermore, these enzymes can oxidize organic and inorganic metallic compounds.
In addition, the gamut of substrates for laccases can extend to non-phenolic compounds through the inclusion of redox mediators, with which they are able to oxidize large polymers, such as lignin, cellulose or starch [5]. This peculiarity has been exploited in various biotechnological processes, including biopulping, bioremediation, the breakdown of colorants, the enzymatic conversion of chemical intermediates and the synthesis of pharmaceutical products, among others [6]. The majority of laccases used in biotechnology are derived from fungal species. The presence of laccases has been documented in various groups of fungi, including yeasts [7], filamentous ascomycetes [8], white [9] and brown rot fungi [10], and mycorrhizal species [11]. In general, it has been found that fungi produce more than one laccase enzyme, the expression of which is closely related to environmental conditions or to the stage of the life cycle and lifestyle of the fungus [12,13]. For this reason, these enzymes are synthesized in variable quantities, which makes the identification of the complete laccase system in a single species difficult. The characterization of families of laccase genes has progressed due to the availability of genomic sequences. Tblastn analysis has allowed for the definition of the complete number of laccase genes and their corresponding proteins in both basidiomycete [11,14,15] and ascomycete fungi [8,12].

Species of the genus Trichoderma are characterized by rapid growth, the ability to assimilate a large variety of lignocellulose substrates and resistance to toxic chemical products [16]. Several species in the genus, particularly T. reesei/Hypocrea jecorina, are good producers of extracellular enzymes that degrade plant cell walls, such as cellulases and hemicellulases, for which reason they have been used in the production of recombinant proteins at industrial levels. Other species, such as T. harzianum/H. lixii, T. virens/H. virens and T. atroviride/H. atroviridis, are used as biological control agents against fungal pathogens of plants and nematodes [17]. Extracellular laccase activity has been detected in various strains of Trichoderma spp., including isolates not identified at the species level [18] as well as distinct strains of T. viride, T. reesei, T. atroviride and T. longibrachiatum [19,20]. Laccase activity associated with the conidia of T. atroviride, T. viride and T. harzianum has also been documented. In these strains, it is hypothesized that the enzyme is found in the membrane or in the periplasmic space [21,22]. Recently, the purification and characterization of extracellular laccases from wild strains of T. harzianum [23], T. atroviride [24] and T. reesei [25] have been documented. Catalano et al. [26] have evaluated the role of an extracellular laccase from T. virens in the mycoparasitism of that species against the sclerotia of the phytopathogens Botrytis cinerea and Sclerotinia sclerotiorum. Despite the fact that the complete genome sequences of T. atroviride, T. virens [27], T. reesei [28], T. harzianum (http://genome.jgi-psf.org/Triha1/Triha1.home.html) and T. asperellum (http://genome.jgi-psf.org/Trias1/Trias1.home.html) are currently available, to date no in silico analysis has been performed to characterize the number of genes coding for laccase activity in these species and the structural characteristics of the encoded proteins.
It has been documented that laccases sensu stricto from ascomycetes have a number of signature characteristics not present in laccases from basidiomycetes. These signatures, which are additional to the L1-L4 domains [29] and allow such proteins to be differentiated from other multicopper oxidases (MCOs), include an SDS-gate [30], a C-terminal DSGL/I/V domain [31], and the presence of an F/L residue in axial coordination of the T1 copper [32]. Although two of the studies cited above included a search to detect the presence of laccases in some of the genomes available for species of the Trichoderma genus [25,26], a comparative analysis of the identified genes to elucidate the number of laccases sensu stricto, the relationships between them, their possible cellular localization and their putative functions has not been performed. This analysis constitutes the principal objective of the present study.

Materials and Methods

From the NCBI GenBank database, we obtained the sequences of various multicopper oxidases, including those of Saccharomyces cerevisiae (Fet3p, 763529), Melanocarpus albomyces (Laccase, 40788173), Cucurbita maxima (Ascorbate oxidase, 885589) and Myrothecium verrucaria (Bilirubin oxidase, 456712), which were used as queries to search for laccase genes in species of Trichoderma. Various members of the MCO family were used to ensure the identification of all possible laccases in the analyzed genomes based on the identity of the copper-binding sites. In addition, only genes and amino acid sequences from crystallized proteins, for which there is no doubt regarding their identity, were used. A Blastp/Blastn analysis was performed on the databases of the public genomes of T. asperellum (http://genome.jgi-psf.org/Trias1/Trias1.home.html), T. atroviride (http://genome.jgi-psf.org/Triat2/Triat2.home.html), T. harzianum (http://genome.jgi-psf.org/Triha1/Triha1.home.html), T. virens (http://genome.jgi-psf.org/TriviGv29_8_2/TriviGv29_8_2.home.html) and T. reesei (http://genome.jgi-psf.org/Trire2/Trire2.home.html). Sequences were selected for the presence of the four conserved copper-binding motifs characteristic of all MCOs. To analyze the structural characteristics of Trichoderma spp. laccases, the online programs of the Center for Biological Sequence Analysis (CBS) (http://www.cbs.dtu.dk/services/) were used. The programs SignalP Version 4.0 and PrediSi were used to determine the presence of the signal peptide for secretion and putative cleavage sites, whereas NetNGlyc 1.0 was used to determine the sites of N-glycosylation (Asn-XXX-Ser/Thr). For those proteins that were classified as intracellular, we used the packages TargetP Version 1.1, iPSORT (http://ipsort.hgc.jp/) and MitoProt (http://ihg.gsf.de/ihg/mitoprot.html) to establish their putative subcellular localization. The position and composition of the cupredoxin domains were analyzed in SWISS-MODEL (http://swissmodel.expasy.org/). For the phylogenetic analysis of the putative laccase sequences, multiple alignment was performed with CLUSTALX Version 2.0.11 (http://www.clustal.org/clustal2/) using the default parameters. The sequences used for phylogenetic analysis, with their respective accession numbers in GenBank and the JGI genome portal, are presented below (the key used in this article for each species appears in parentheses). The alignments obtained were manually adjusted.
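As a small aside, the NetNGlyc step mentioned above can be approximated locally: the sequon Asn-Xaa-Ser/Thr is easy to locate with a regular expression. The Xaa ≠ Pro refinement is the usual convention and our addition, and the sequence below is an arbitrary illustration, not one of the Trichoderma laccases.

```python
import re

SEQON = re.compile(r"N[^P][ST]")   # Asn-Xaa-Ser/Thr, excluding Pro at Xaa

def nglyc_sites(protein: str):
    """Return 1-based positions of putative N-glycosylation sequons."""
    return [m.start() + 1 for m in SEQON.finditer(protein)]

print(nglyc_sites("MKLNASGGLVNDTAPWNPSQ"))   # -> [4, 11]; the NPS at 17 is excluded
```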
Based on the generated alignments, phylogenetic trees were constructed with MEGA Version 5.05 (http://megasoftware.net/) through the Neighbor Joining method using three different models of evolutionary distance (p-distances, Dayhoff and Jones-Taylor-Thornton). Statistical significance was evaluated with 1000 bootstrap replicates. The phylogenetic trees were confirmed using the maximum likelihood method (data not shown), and the alignments were differentially edited to corroborate the topology of the obtained trees. A phylogram of the analyzed species of Trichoderma was constructed using the rpb2 gene (coding for RNA polymerase B II) through Bayesian analysis in accordance with [27]. The rates of synonymous and non-synonymous substitutions of Trichoderma laccases in the nucleotide sequences aligned by codons were calculated with the SNAP package (www.hiv.lanl.gov/content/sequence/SNAP/SNAP.html).

Number and Structure of Laccase Genes in Trichoderma

To determine the number of laccase genes in the five analyzed species of Trichoderma, we conducted a Blastp/Blastn analysis of the genome database of each species. The search produced a total of 47 sequences that presented the four copper-binding motifs characteristic of MCOs. In T. reesei, a total of 7 genes were identified; in T. harzianum, 9 genes; T. atroviride and T. virens presented 10 genes each; and in T. asperellum, 11 genes were found. To determine which of the identified genes coded for laccases sensu stricto, the structural characteristics previously reported in the literature that distinguish laccases from other blue copper oxidases were sought, based on a comparative analysis of laccase sequences and crystallographic evidence (Table 1). Based on this analysis, a single gene coding for laccases was found in T. reesei and T. harzianum, two genes were found in T. atroviride and T. virens, and three genes in T. asperellum (Table 2); the rest of the identified genes belong to other members of the MCO family and were not considered in further analyses.

The number of laccase genes in ascomycete fungi varies considerably. Among the species that are characterized by having a larger number of laccase genes are P. anserina and S. macrospora with 9 genes [12], N. crassa with 8 [12], A. niger with 6 [8], and C. globosum with 4 [13]. Among the species characterized by having a low number of laccase genes is G. graminis, with 3 genes in var. tritici [33] and 2 in var. graminis [33]. Yeasts are a particular case within the ascomycetes, as it has been reported that they do not have laccase genes [13], although the presence of 2 genes has been documented in H. acidophila [7]. Thus, Trichoderma belongs within the group of fungi with a low number of laccase genes. Nevertheless, the results of the structural analysis performed in the present study indicate that the laccase genes previously reported in ascomycetes should be reviewed in the future, as it is possible that several of them do not code for sensu stricto laccases (see below).

To determine the relative position of the laccase genes in the genomes of all Trichoderma species, regions approximately 15 kb upstream and downstream were analyzed in each case. This analysis shows that the different laccase-encoding genes within the same genome of the analyzed species are not arranged in clusters, but are far from each other. It was also found that the genomic context of the corresponding orthologous genes is similar between all analyzed species (data not shown).
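As a brief aside on the tree-building step described above: it can be reproduced outside MEGA, for instance with Biopython's distance-based constructor. The sketch below uses the built-in 'identity' model (comparable to p-distances); the Dayhoff and Jones-Taylor-Thornton distances used in the paper would require other tooling, and the alignment file name is a placeholder.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

aln = AlignIO.read("laccases_aln.fasta", "fasta")       # aligned FASTA from Clustal
dm = DistanceCalculator("identity").get_distance(aln)   # p-distance-like matrix
tree = DistanceTreeConstructor().nj(dm)                 # Neighbor Joining topology
Phylo.draw_ascii(tree)
```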
In general, the genes encoding intracellular laccases show higher synteny than those encoding extracellular ones. To date, there are no data in the literature that allow us to compare the relative positions of laccase genes in other ascomycetes. However, the arrangement of the laccase genes in Trichoderma is consistent with what was found in the basidiomycete L. bicolor, where most laccase-encoding genes are randomly distributed [11]. Nevertheless, the existence of laccase gene clusters has been observed in C. cinerea [14] and P. ostreatus [15]. These differences in genomic architecture indicate that this type of gene does not have a conserved location within fungal genomes but that its disposition reflects the evolutionary history of each species.

The nucleotide sequence length of Trichoderma spp. laccase-encoding genes varies between 1765 bp (Tas_154312) and 2303 bp (Tas_71665). The GC percentage of these genes ranges from 46% (Tas_71665) to 58% (Tr_122948). The number and position of introns in fungal laccase-encoding genes have been employed to classify them into subfamilies [11,14]. The structure of the genes found in the five species of Trichoderma separates them into three subfamilies. In each subfamily, the number of introns is preserved, and their positions are similar (Fig. 1). The first subfamily exhibits an intron of between 64 and 68 bp in size and is made up of a gene from each of the species T. atroviride, T. asperellum and T. virens (Fig. 1A). The second subfamily is characterized by having two introns of between 53 and 111 bp, with one of these genes being found in all five species of Trichoderma (Fig. 1B).

Table 1. Observed signature sequences in laccases.
Axial coordination: Leu or Phe.
SDS gate: Ser143, Ser511 and Asp561 in TaLcc1 [42].
C-termini: Asp-Ser-Gly-(Leu/Ile/Val) [31].
In the fungal laccase signature sequences L1-L4, an X represents an undefined residue, whereas the multiple residues within brackets represent a partially conserved residue.

In general, the subdivision of Trichoderma laccase-encoding genes into these two subfamilies is in agreement with previous reports of a low number of introns in this type of gene in the ascomycetes. In two varieties of G. graminis, the gene sequences of LAC1 and LAC2 present 2 introns [33], just as in the single gene for extracellular laccase reported for the aquatic species Myrioconium sp. [34]. The presence of 3 introns has been reported in the lac2 gene of P. anserina [35] and in the Bclcc1 and Bclcc2 genes of B. cinerea [4]. In the yeast H. acidophila, 3 introns have been documented in the gene that encodes the extracellular enzyme and 2 in the intracellular enzyme gene [7]. A unique case is that of the Tas_71665 gene of T. asperellum, with a structure of seven introns and for which no orthologous genes were found in the other analyzed Trichoderma species (Fig. 1C). The only ascomycete laccase gene that surpasses this number is the lac-1 gene of C. parasitica, which possesses 12 introns [36]. These last two genes represent intermediates in intron/exon structure between the majority of ascomycetes and the basidiomycetes C. cinerea and L. bicolor, in which two subfamilies of genes have been found whose number of introns varies between 13 and 15 [11,14]. The introns found in the Trichoderma laccase genes conserve splice sites that comply with the GT/AG rule [37].
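The GT/AG rule just mentioned is straightforward to verify computationally once intron coordinates are annotated; the sketch below uses a made-up gene fragment purely for illustration.

```python
def check_gt_ag(gene_seq: str, introns):
    """introns: list of (start, end) 0-based half-open coordinates on gene_seq."""
    for start, end in introns:
        donor = gene_seq[start:start + 2]       # first two intron bases
        acceptor = gene_seq[end - 2:end]        # last two intron bases
        ok = donor == "GT" and acceptor == "AG"
        print(f"intron {start}-{end}: {donor}..{acceptor} {'OK' if ok else 'violates GT/AG'}")

check_gt_ag("ATGGTAAGTTTTCAGGCC", [(3, 15)])    # GT...AG -> OK
```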
Previous data show the usefulness of the number and location of introns for generating subfamilies of ascomycete laccase genes. This information could be useful for the identification of genes with distinct functions within a single species or for recognizing the same gene in different species. This analysis is beyond the scope of the present study and should be conducted in greater detail in the future. Interestingly, as a demonstration of the usefulness of gene family classification based on the presence of introns, the subfamilies of laccase genes formed in Trichoderma agree with the classes established according to the structural characteristics of the laccase proteins (see below).

Structural Characteristics of Trichoderma Laccases

In general, the identification of laccases according to their amino acid sequence involves the recognition of the four segments L1-L4 [29]. However, such regions are common to all MCO family members, including ascorbate oxidases, ferroxidases and bilirubin oxidases, meaning that their presence is not sufficient to confirm that a protein sequence corresponds to a laccase sensu stricto. Because of this, in the present study we also considered the SDS-gate and the C-terminal end, which are distinctive characteristics of ascomycete laccases (Table 1). The set of structural characteristics found in the amino acid sequences of Trichoderma laccases is detailed below.

The analysis of the amino acid sequences of the 9 putative laccases that were found shows that their lengths are similar to those of typical fungal laccases (566-600 aa), and the calculated molecular mass of the protein sequences is between 61.83 and 66.84 kDa, with acidic isoelectric points (Table 2). These results are in agreement with what has previously been reported for fungal laccases, which regularly have molecular weights between 60 and 70 kDa and isoelectric points (pI) that vary between pH 4.0 and 6.0 [3]. In particular, within the genus Trichoderma, purified laccases from the wild strains WL1 of T. harzianum and CTM 10476 of T. atroviride presented, in their glycosylated forms, molecular weights of 79 and 80 kDa, respectively [23,24]. Analysis in SWISS-MODEL of the proteins found in the genomes reviewed in this study showed that each one of the sequences is formed by three cupredoxin domains that are ordered in sequential form and are common to all MCO family members [30]. As expected, the amino acid residues that bind to copper T1 were located in domain I, whereas the residues that coordinate coppers T2/T3 were distributed between domains I and III (Fig. 2).

Laccases are distinguished by the presence of four consensus sequences, L1-L4, which possess a length of between 8 and 24 amino acid residues and are distributed along the polypeptide chain. Within these regions one finds amino acid residues that serve as ligands for the copper atoms, as well as other conserved or partially conserved residues, which are critical for maintaining the conformational folding of the enzyme. These characteristic sequences were identified based on the multiple alignment of more than 100 laccases of plants and fungi [29] and represent the distinctive mark for the identification of putative new laccases. Regions L1-L4 in the Trichoderma sequences have a high degree of similarity with the consensus designated by [29] (Fig. 2). However, some amino acid residues differ from the consensus. This is especially evident in L2, where Thr is replaced by a Ser in the second position.
This change of amino acid is found in the Lac2 of G. graminis var. tritici and G. graminis var. graminis [33], as well as in the Lac1 of B. fuckeliana (anamorph = B. cinerea) [4], both of which are ascomycete fungi. In this same segment, in Trichoderma, changes in the consensus QYCDGL are observed: Tyr is replaced by Ala, Cys by Ser/Ala/Trp, Asp by Gly/Glu and Leu by Val (Fig. 2). Thus, the results obtained here show that although laccases of basidiomycetes fully comply with the L2 consensus [15], this region varies considerably in ascomycete laccases. In segments L1 and L4 of the Trichoderma genes, changes involving amino acid residues with propensities toward similar conformations or similar hydropathic indices are observed. For example, in L1, Trp is replaced by Phe (Fig. 2). This same change is observed in C. parasitica Lac3 [38]. In segment L4, the amino acid located 10 residues downstream from the conserved Cys corresponds to the axial position of copper T1. This residue is usually a Met in the MCOs; however, in the laccases, the Met residue is replaced by a Leu or Phe [39], as in the Trichoderma spp. putative laccases (Fig. 2). Axial coordination is one of the factors affecting the redox potential (E0) of laccases [39]. It has previously been suggested that laccases with high E0 (700-800 mV) possess a Leu or Phe in axial coordination, whereas laccases with a Met residue have low E0 (500 mV). Based on the substitution of this residue, laccases are classified into three classes: Lac 1 (M, Met), Lac 2 (L, Leu) and Lac 3 (F, Phe) [32]. Taking this characteristic into account, the putative laccases Ta_40409, Tas_68620, Th_539081, Tr_122948 and Tv_194054 are classified as Lac 2, whereas Tas_71665, Tas_154312, Ta_54145 and Tv_48916 are classified as Lac 3. These results are consistent with the hypothesis that Lac 1 laccases are primarily present in plants [32]. This classification has been performed for one isoenzyme of each of the ascomycetes C. parasitica, N. crassa and P. anserina, all belonging to class 2 [32]. Although this classification is still used to elucidate the functional relatedness between laccases, it is important to take into account that crystallographic data obtained from the native high-potential laccase of T. versicolor (TvL) have helped to describe other structural characteristics that might contribute to high E0 in these enzymes [40]. Such structural features include a reduction of electron density in the metal and the ligating amino acid. In high redox potential laccases, the distance between Cu2+ and one of the histidines of the T1 binding pocket is longer than in enzymes of the middle-potential group; the hydrogen bond between Glu-460 and Ser-113 in TvL seems to be responsible for this [40]. Interestingly, this particular serine is conserved in Trichoderma laccases, but the Glu residue is not present. Furthermore, it is well known that several factors can affect the E0 value of metalloproteins, including electrostatic intramolecular interactions and solvation effects [41]. Further studies are needed to determine the factors involved in modulating the redox potential of Trichoderma laccases. Laccases have been the subject of intense investigation directed at understanding both their catalytic mechanism and the molecular determinants that modulate their broad range of E0 [40].
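Given an alignment, the class assignment just described reduces to reading one residue. The sketch below uses a toy aligned sequence and the "10 residues downstream of the conserved L4 Cys" convention taken from the text; locating the L4 Cys column in a real alignment is assumed to have been done beforehand.

```python
AXIAL_CLASS = {"M": "Lac 1", "L": "Lac 2", "F": "Lac 3"}

def classify_by_axial(aligned_seq: str, l4_cys_col: int) -> str:
    """aligned_seq: one row of the alignment; l4_cys_col: 0-based column of the L4 Cys."""
    residue = aligned_seq[l4_cys_col + 10]     # axial position of the T1 copper
    return AXIAL_CLASS.get(residue, f"unclassified ({residue})")

# e.g. a sequence carrying Phe at the axial position, as reported for Tv_48916:
print(classify_by_axial("...C" + "X" * 9 + "F...", l4_cys_col=3))   # -> Lac 3
```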
Although this classification is still used to elucidate the functional relatedness between laccases, it is important to take into account that crystallographic data obtained from the native high-potential laccase of T. versicolor (TvL) have helped to describe other structural characteristics that might contribute to the high E⁰ of these enzymes [40]. Such structural features include a reduction of electron density in the metal and in the ligating amino acid. In high redox potential laccases, the distance between Cu²⁺ and one of the histidines of the T1 binding pocket is longer than in enzymes of the middle-potential group; the hydrogen bond between Glu-460 and Ser-113 in TvL seems to be responsible for this [40]. Interestingly, this particular serine is conserved in Trichoderma laccases, but the Glu residue is not present. Furthermore, it is well known that several factors can affect the E⁰ value of metalloproteins, including electrostatic intramolecular interactions and solvation effects [41]. It will be important to conduct further studies to determine the factors involved in the modulation of redox potential in Trichoderma laccases. Indeed, laccases have been the subject of intense investigation directed at understanding both their catalytic mechanism and the molecular determinants that modulate their broad range of E⁰ [40].

Although the laccase reaction scheme is not entirely understood, it is known that both binding and oxidation of the substrate occur at the T1 site and that the electrons are transferred to the T2/T3 center, where the reduction of molecular oxygen takes place [40]. The reduction of a dioxygen molecule to two water molecules requires four electrons and four protons. The electron transfer pathway to the trinuclear center corresponds to the conserved His-Cys-His motif located in L4 [39], which is present in Trichoderma laccases (Fig. 2). Conversely, proton transfer is assisted by the so-called SDS-gate [30]. This gate is formed by two Ser residues and one Asp residue and is conserved in ascomycete laccases, but it has not been detected in basidiomycete laccases. In the T. arenaria laccase TaLcc1, this gate is formed by Ser143, Ser511 and Asp561 [42]. Multiple alignment with TaLcc1 identified the SDS-gate amino acids in the Trichoderma spp. laccases (Fig. 2). However, in Tas_154312 and Ta_54145, the amino acid that corresponds to Ser143 in TaLcc1 is replaced by Gly, and in Tv_48916, by Ala. Additionally, in Th_539081, the residue that corresponds to Ser511 in TaLcc1 is replaced by a Thr (Fig. 2). This result suggests that Trichoderma laccases have adopted various strategies to facilitate the transfer of protons to the trinuclear site, thereby modulating their catalytic activity.
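Locating reference residues such as the TaLcc1 SDS-gate positions (Ser143, Ser511, Asp561) in a new sequence reduces to mapping ungapped reference positions through the columns of a pairwise alignment. Below is a minimal, self-contained sketch over gapped alignment strings (for example, extracted from the Clustal X output); the helper name and the toy alignment are illustrative, not real laccase data.

```python
# Sketch: map residue positions of a reference (e.g., TaLcc1) onto a
# query sequence through a gapped pairwise alignment, as used above to
# locate the SDS-gate residues Ser143, Ser511 and Asp561.
def map_reference_positions(ref_aln: str, query_aln: str, positions):
    """ref_aln/query_aln: aligned sequences of equal length, '-' = gap.
    positions: 1-based residue numbers in the ungapped reference.
    Returns {ref_pos: (query_pos, query_residue)}; None if the query
    has a gap at that column."""
    mapping = {}
    ref_pos = query_pos = 0
    wanted = set(positions)
    for r, q in zip(ref_aln, query_aln):
        if r != "-":
            ref_pos += 1
        if q != "-":
            query_pos += 1
        if r != "-" and ref_pos in wanted:
            mapping[ref_pos] = (query_pos, q) if q != "-" else None
    return mapping

# Hypothetical toy alignment:
ref   = "MS-TSERLAD"
query = "MSATS-RLGD"
print(map_reference_positions(ref, query, [4, 9]))  # {4: (5, 'S'), 9: (9, 'D')}
```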
An essential aspect of the catalytic activity of laccases is the mode of interaction and reaction with various substrates. The suitability of a chemical compound as a laccase substrate depends both on the nature and position of the substituents in its phenolic ring and on the chemical environment of the substrate binding site. The amino acid residues that constitute the substrate cavity form loops and are found in domains II and III. In the laccases of various organisms, these loops have different amino acid compositions, which results in diversity in the size and shape of the substrate binding site [43]. The nine Trichoderma spp. proteins present the substrate binding sites described in the three-dimensional structures of crystallized laccases. As in other laccases, the sequences that form loops I-IV are poorly conserved in comparison with MaL and TaLcc1 (wild-type proteins of M. albomyces and T. arenaria, respectively), except in loop IV in C7-C8, which is also part of the L4 segment (Fig. 2). It has been determined that Pro192 in MaL (Ala193 in TaLcc1), in loop I/B1-B2, interacts with the organic substrate [31]. This residue is present in five Trichoderma spp. laccases, although in this position Th_539081 and Tv_194054 possess a Gln residue, whereas Tas_68620 and Tas_71665 have Thr and Ser, respectively (Fig. 2).

Figure 2. Alignment of laccase sequences from Trichoderma spp. The alignment was constructed with the Clustal X multiple-sequence alignment program. The accession number of each sequence in the JGI GeneBank is indicated on the left of the alignment. An asterisk indicates that the residues at a position are identical in all sequences in the alignment; a colon indicates conserved substitutions and a period indicates semiconserved substitutions. Putative signal sequences are indicated by italics and the mitochondrial targeting peptides are enclosed in boxes. The conserved residues involved in copper binding are in red, and the complete L1-L4 regions are indicated by a double line under the alignment. The sequences of potential substrate loops were identified based on loops I-IV of rMaL [44] and TaLcc1 [42], laccases of M. albomyces and T. arenaria, respectively, and are underlined with a bold line. Amino acids shaded in yellow indicate residues in contact with the substrate. The residues forming the SDS-gate are shaded in green, and the amino acid shaded in blue classifies the laccases as class 1 (Met), class 2 (Leu) or class 3 (Phe). The conserved C-termini are in dark blue. doi:10.1371/journal.pone.0055295.g002

In B4-B5, fungal laccases typically have Glu or Asp as the substrate ligand [31,42]. Directed mutagenesis studies performed on MaL have shown that the carboxyl group of Glu235 (Asp236 in TaLcc1) is of great importance for substrate binding because it stabilizes the cationic radical that is formed when His508 initiates the catalytic cycle [44]. However, in Tas_71665, Tr_122948 and Tv_194054, this residue is replaced by Ala. The multiple alignment of these sequences with other members of the MCO family revealed that, at the same position, Ala is only present in a laccase of the dermatophyte fungus A. gypseum (anamorph = Microsporum gypseum). The Cys residue of the tripeptide Ala297-Cys298-Gly299 in the recombinant M. albomyces laccase (rMaL) expressed in T. reesei (Leu297-Cys298-Gly299 in TaLcc1), located in loop II/B7-B8, is conserved in the Trichoderma spp. laccases. This amino acid is also involved in substrate binding and was found in N. crassa, P. anserina and C. parasitica laccases [44]. Furthermore, the involvement of rMaL Phe427 (Val428 in TaLcc1) of loop IV/C4-C5 in aligning substrate molecules in the correct orientation for oxidation has been suggested [44]. In the laccases of various organisms, including Trichoderma spp., this residue varies considerably, although the majority of basidiomycete laccases display Pro in this position. The differences found in the loop sequences of Trichoderma spp. laccases suggest low substrate specificity and, most likely, varied catalytic capacities, indicating that each possesses a different physiological function; this is reinforced by their subcellular localization (see below).

A conserved segment of four amino acid residues has been identified at the C-terminal end of the Trichoderma spp. laccases (Fig. 2), a highly conserved sequence in ascomycetes corresponding to the consensus Asp-Ser-Gly-(Leu/Ile/Val). In the laccases of the ascomycetes M. albomyces [31], P. anserina [35], T. arenaria [42], N. crassa [45] and M. thermophila [46], the C-terminal end is post-translationally processed, leaving the active protein with the sequence Asp-Ser-Gly-Leu (DSGL) as the final amino acid residues at this end. As in the Trichoderma spp. laccases, in the majority of ascomycete laccases this C-terminal extension is removed post-translationally. Determination of the three-dimensional structures of MaL and TaLcc1 revealed that the C-terminal DSGL is packed within a tunnel that leads to the trinuclear site and forms a plug [31]. In the crystal structures of other known laccases, this cavity is open and allows molecular oxygen to access the catalytic site [42]. In MaL and TaLcc1, the C-terminal plug impedes the movement of molecular oxygen and other solvents into the enzyme. Furthermore, the C-terminal carboxyl group forms a hydrogen bond with the side chain of His140 in MaL (His141 in TaLcc1), which also coordinates copper T2.
Directed mutagenesis studies performed on MaL cDNA revealed that a change or deletion of the C-terminal end dramatically affects enzyme activity [47]. Given those findings, it was suggested that ascomycete laccases use the C-terminal DSGL plug to carry out their catalytic function, forming a proton transfer pathway [42]. The C-terminal plug appears to be a characteristic trait of ascomycete laccases because it has not been described in basidiomycete laccases. Nevertheless, the Rigidoporus lignosus R1L laccase presents the C-terminal sequence DSGLA. Among basidiomycete laccases, R1L is the most closely related phylogenetically to ascomycete laccases; for this reason, it was initially suggested that this C-terminal sequence was more an evolutionary relic than a functional characteristic of the enzyme [39]. However, the more recent observations documented in this study show that the C-terminal DSGL is not an evolutionary relic in fungi, as it provides important functions for the ascomycete enzyme. Furthermore, even though basidiomycete laccases lack the DSGL motif, it was recently established, using a directed evolution approach, that the C-terminus plays a role in enzyme performance by influencing the optimal pH and the Km values for phenolic compounds [48]. This evidence calls for further comparative studies between ascomycetes and basidiomycetes regarding the evolution of the C-terminal end and its functional role.

In the course of the analysis to identify laccases sensu stricto in Trichoderma and to compare them with those reported in other ascomycetes, the particular case of the 6 enzymes reported for Aspergillus niger [8] emerged, specifically those designated Mco G, Mco J and Mco M. When we performed an analysis to find the signatures of laccases sensu stricto (Table 1), we found that the Mco G protein lacks the DSGL motif. Furthermore, Mco J and Mco M have a deletion of approximately 50 aa immediately after the L4 segment, which includes the DSGL motif. Interestingly, it was found that these two "truncated" enzymes oxidize a limited number of substrates, whereas Mco G attacks all tested substrates. This suggests that some laccases from ascomycetes can be partially functional even without presenting all the elements required to be considered laccases sensu stricto. As these are a special case beyond the objectives of the present work, further analyses of these three enzymes were not performed, and they were excluded from the phylogenetic analysis (see below).

Over the course of the evolutionary process, laccases have maintained a high degree of similarity in amino acid sequence and three-dimensional structure. Generally, the laccase sequences of members of a group of fungi exhibit levels of amino acid identity of 50% or more, whereas the identity levels between sequences of members of different groups are approximately 30% [49]. These identity values are met, in general, by the Trichoderma spp. laccases. The identity percentage among the 9 laccases varied between 30 and 88% (Table 3). The Ta_54145/Tas_154312/Tv_48916 triad is the most similar group, with identity values above 83%; the rest of the laccases possess identity values between 51 and 86%. An identity percentage comparison between laccases of a single species has not been conducted in ascomycetes. In the case of basidiomycetes, the percentages of identity of the 8 C. cinerea laccases varied between 46 and 77% [50], whereas in P. ostreatus these values were between 45 and 89% among the 6 enzymes found [49].
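Pairwise identity percentages like those in Table 3 can be computed from any two rows of the multiple alignment. Below is a minimal sketch over gapped sequences; note that identity here is counted over columns where neither sequence is gapped, and published values may use a different denominator convention, so exact numbers can differ.

```python
# Sketch: percent identity between two gapped sequences taken from a
# multiple alignment ('-' = gap), as summarized in Table 3. Columns
# where either sequence is gapped are excluded from the denominator.
def percent_identity(a: str, b: str) -> float:
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    if not pairs:
        return 0.0
    matches = sum(1 for x, y in pairs if x == y)
    return 100.0 * matches / len(pairs)

# Hypothetical gapped fragments (not real laccase data):
print(f"{percent_identity('HCHIDWHLEAG-LAV', 'HCHIDFHLE-GALAV'):.1f}% identity")
```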
Overall, Trichoderma laccases present a wider range of identity values than those found in basidiomycete species, indicating a strong selective pressure on the various genes. With respect to their relationship to the laccases of other fungal species, the Tas_154312, Ta_54145 and Tv_48916 proteins share 53% identity with G. graminis var. tritici and G. graminis var. graminis Lac2 [33] and between 63 and 65% identity with the F. oxysporum Lcc4 laccase [10]. The group of enzymes Tas_68620, Ta_40409, Th_539081, Tr_122948 and Tv_194054 possesses an identity greater than 63% with F. oxysporum Lcc1 [10] and less than 30% with other ascomycete laccases. Interestingly, F. oxysporum Lcc1 is an intracellular laccase, like the Trichoderma laccases that show the strongest phylogenetic relationship with it (see below). The Tas_71665 protein has an identity of between 51.4 and 56.2% with the putative laccases of B. fuckeliana, S. sclerotiorum and P. tritici-repentis. The identities of the nine Trichoderma spp. laccases with respect to the crystallized MaL and TaLcc1 laccases are between 27 and 37%. When comparing Trichoderma laccases with those from the basidiomycete fungi T. versicolor, P. ostreatus and A. bisporus, the identity is below 25%. The identity values clearly show the separation of the Trichoderma spp. laccases into three subfamilies, which is directly related to the putative subcellular localization of the laccases sensu stricto (see the following section).

Prediction of Subcellular Localization of Trichoderma spp. Laccases and Possible Physiological Functions

The majority of known fungal laccases are monomeric proteins with extracellular activity, although intracellular laccases have also been identified, particularly in white rot fungi [3]. The localization of a laccase is associated with its physiological function and determines the range of substrates available to the enzyme. In fungi, the functions of extracellular laccases related to the degradation of lignocellulosic material, recycling of organic material, reduction of oxidative stress, and pathogenesis toward plants and animals have been extensively studied [3,4]. Of the nine Trichoderma spp. laccases, four were determined to be extracellular: one each in T. atroviride and T. virens, and two in T. asperellum. The rest are intracellular proteins found in the five species analyzed (Table 2). The putative signal peptide of the extracellular laccases corresponds to the first 18 residues and presents the typical characteristics of signal peptides, that is, a highly hydrophobic region and Ala and Val residues at positions -1 and -3, respectively, relative to the cleavage site [51]. The mature forms of the laccases Tas_71665, Tas_154312, Ta_54145 and Tv_48916 possess between 9 and 11 putative N-glycosylation sites (Table 2). The average glycosylation is usually between 10 and 25%, although laccases with a carbohydrate content greater than 30% have been detected [2]. Glycosylation influences enzyme secretion and has been suggested to play an important role in catalytic center stabilization, protection against hydrolysis, copper retention, and laccase thermal stability [52].
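Two of the sequence features used above, the secretion signal peptide and the putative N-glycosylation sites, can be screened with simple heuristics. The sketch below checks the classic (-3, -1) residues and the hydrophobicity (GRAVY score) of an assumed 18-residue signal peptide, and counts N-X-S/T sequons (X not Pro), the standard motif behind N-glycosylation predictions. Dedicated tools such as SignalP remain the proper way to make these calls; the example sequence is a hypothetical placeholder.

```python
# Sketch: quick heuristics for the sequence features discussed above.
# Requires Biopython for the Kyte-Doolittle GRAVY score.
import re
from Bio.SeqUtils.ProtParam import ProteinAnalysis

SEQON = re.compile(r"(?=(N[^P][ST]))")  # N-X-S/T, X != P; overlapping

def n_glycosylation_sites(seq: str):
    """1-based start positions of putative N-glycosylation sequons."""
    return [(m.start() + 1, m.group(1)) for m in SEQON.finditer(seq)]

def check_signal_peptide(seq: str, cleavage_after: int = 18):
    """Check the (-3, -1) rule (Val/Ala) and overall hydrophobicity of
    an assumed signal peptide spanning the first 18 residues."""
    sp = seq[:cleavage_after]
    return {
        "minus1_is_Ala": sp[-1] == "A",
        "minus3_is_Val": sp[-3] == "V",
        "gravy": round(ProteinAnalysis(sp).gravy(), 2),  # > 0: hydrophobic
    }

# Hypothetical N-terminal fragment:
demo = "MKFSAILALVATSLSVTAANVTQGNPSLNGTW"
print(check_signal_peptide(demo))
print(n_glycosylation_sites(demo))  # [(20, 'NVT'), (29, 'NGT')]
```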
Extracellular laccase activity has previously been reported in Trichoderma spp. [18,21], in certain cases leading to purification of the protein [23,24]. Levasseur et al. [25] reported homologous overexpression of the T. reesei gene 124079 in the strain Rut-C30, and even though the recombinant protein presented biochemical properties similar to those reported for other laccases, the structural analysis carried out in this study suggests that the enzyme TrLAC1 studied by those authors corresponds to a pigment-synthesis MCO and not to a laccase sensu stricto; that is, the amino acid sequence of this enzyme presents neither the SDS-gate nor the C-terminal DSG(L/I/V) motif. Recently, Catalano et al. [26] evaluated the participation of the extracellular T. virens LCC1 laccase (corresponding to the laccase Tv_48916 analyzed in this study) in mycoparasitism towards sclerotia of the phytopathogens B. cinerea and S. sclerotiorum. The hypothesis of that study was that LCC1 is capable of attacking the sclerotial melanin of the studied fungi. However, although their results show that the enzyme can participate in mycoparasitism by T. virens against S. sclerotiorum, the same result was not obtained against B. cinerea sclerotia, making it necessary to conduct more studies in this area. It would also be important to experimentally evaluate other functions of the extracellular Trichoderma laccases, such as those cited at the beginning of this section.

The analysis carried out with various bioinformatic packages suggests an intracellular localization for the laccases Ta_40409, Tr_122948, Tv_194054, Th_539081 and Tas_68620. Unlike extracellular laccases, little is known about the activity of intracellular laccases, both in the genus Trichoderma and in other fungal species. Several reports have suggested the presence of laccase activity associated with the membrane or periplasmic space during the maturation of T. atroviride, T. viride and T. harzianum conidia [21,22]. The results of the present study show that Trichoderma spp. do not possess laccases associated with the plasma membrane; the conidia-associated laccase activity reported in those studies is possibly due to extracellular enzyme that remained trapped in the periplasmic space or the cell wall during the maturation of these structures. In this sense, fungal proteins have been reported to be located either adhering to the cell wall or in the extracellular medium, including hydrophobins and adhesins [53,54] as well as hydrolytic enzymes [55]. Although in certain cases the interaction of the protein with the cell wall is only transitory, occurring while the protein reaches the extracellular medium, in other cases specific mechanisms have been proposed for the retention of proteins in the wall and their simultaneous localization in the extracellular medium [56]. There is also documentation of A. oryzae enzymes that are secreted into the medium when the fungus is grown on a solid substrate but are retained in the cell wall when the fungus grows in submerged culture [57]. These findings indicate that it is possible that some Trichoderma laccases remain trapped in the periplasmic space or cell wall during conidia maturation, a possibility that should be investigated more thoroughly in the future.

In ascomycetes, intracellular laccase activity has been detected in H. acidophila and has been related to the synthesis of melanin [7]. In the human pathogenic fungus Cryptococcus neoformans, laccase activity is found associated with the membrane and constitutes a virulence factor [56].
The phytopathogenic fungus F. oxysporum possesses two intracellular laccases, Lcc1 and Lcc3, which may be involved in the protection of the fungus against oxidative stress and toxic compounds [10]. In the basidiomycetes L. bicolor [11] and P. ostreatus [15], these isoenzymes are involved in the development of the fruiting body. In addition, it is possible that the intracellular laccases of fungi participate in the transformation of low molecular weight phenolic compounds produced in the cell [3]. Laccases associated with conidia are linked to the synthesis of pigments and other substances that protect the cell from stress factors such as enzymatic lysis, temperature and UV light [3,10]. It is possible that the Trichoderma intracellular laccases are related to any of the processes described above; further experimental work is needed to confirm these functions.

Surprisingly, when four of the laccases classified as intracellular by the package SignalP 4.0 were analyzed with various bioinformatics programs, they presented signal peptides and processing characteristics of mitochondrial localization (Fig. 2; Table 4). The MitoProt program shows processing sites congruent with the mitochondrial targeting peptides predicted by the other packages (Table 4). These results should be viewed cautiously, as there are currently no reports of mitochondrial localization of laccases in fungi or in any other biological group in which this enzymatic activity has been found.

With the unexpected finding of signals of mitochondrial localization of laccases in Trichoderma, the first question that arises is whether this is feasible for enzymes that in fungi have only been reported as cytoplasmic, associated with the plasma membrane, or extracellular. Possible hypotheses, assuming that the prediction of mitochondrial localization is correct, arise from what has been documented for proteins in other fungi. There are examples in fungi of changes in subcellular localization, and of the presence of the same enzymatic activity in two distinct cellular compartments; the latter case has been referred to as dual localization, dual targeting or dual distribution [58]. In S. cerevisiae, it has been found that up to a third of the proteins considered to be mitochondrial can present an alternative subcellular localization [59]. Among the best-studied examples of dual mitochondrial-cytoplasmic localization in fungi are aconitase [60] and fumarase [61]. Moreover, in these two examples, the proteins present in mitochondria and cytoplasm come from the same gene, which generates a single transcript and a single translation product; that is, they are not isoenzymes in distinct compartments. However, in these two examples the "original" localization of the protein is mitochondrial and the "new" localization is cytoplasmic, which is the opposite of what would be occurring with the products of the Trichoderma laccase genes. Nevertheless, it is possible to consider the relocalization of a secreted protein to mitochondria in fungi. Recently, it was documented that the tryptophan-rich sensory protein/peripheral-type benzodiazepine receptor (TspO/MBR), which is found in the Golgi-associated secretory pathway in plants, is directed toward mitochondria when expressed heterologously in S. cerevisiae [62]. The modification of the subcellular localization of a protein may be caused by a change of a single amino acid in the signal peptide, which can be generated by a single nucleotide mutation [63].
The second question that arises pertains to the function of a laccase in mitochondria. Experimental evidence suggests that a protein that reaches a new subcellular location can develop new functions [63]. Although it is difficult to speculate about the possible function of a mitochondrial laccase in fungi, it is feasible to establish hypotheses based on our understanding of the structural aspects of this enzyme. As mentioned above, laccases are enzymes that bind four copper atoms, which are important for catalytic activity. Intracellular Cu²⁺ can have damaging effects because it induces the formation of reactive oxygen species (ROS); therefore, there are mechanisms that regulate its concentration [64]. Further, it has been documented that mitochondria possess a pool of copper that responds to changes in copper levels in the cytoplasm [65]. It is possible that the mitochondrial location of Trichoderma laccases contributes to the homeostasis of mitochondrial Cu²⁺ under particular circumstances, a possibility that would be important to evaluate experimentally in the future.

The prediction of the subcellular localization of eukaryotic proteins is a complicated task that always carries a certain degree of uncertainty; accordingly, protocols and bioinformatics packages have been designed to optimize the certainty of the prediction [66]. Although speculative hypotheses can be proposed in agreement with the bioinformatic and experimental evidence collected in eukaryotes in general and in fungi in particular, it is most prudent to assume that the four Trichoderma enzymes can be considered intracellular, which is supported by the phylogenetic analysis (see the following section). The role described here for putative mitochondrial signal peptides in the subcellular localization of Trichoderma laccases can be verified experimentally in the future. One way to do this is to design an expression vector in which the putative mitochondrial targeting sequences are fused to the green fluorescent protein (gfp) gene and to monitor the localization of the expressed recombinant protein.

Phylogenetic Analysis of Trichoderma spp. Laccases

A phylogenetic analysis performed according to two distinct criteria separated the Trichoderma laccases into two distinct clades with a bootstrap value of 99%. Interestingly, all intracellular laccases are grouped in the first of these clades. Three of the four extracellular laccases are included in the second clade; the exception, Tas_71665 (Fig. 3a), grouped within the clade of intracellular proteins but on a different terminal branch. Although the Trichoderma extracellular laccases exhibit close phylogenetic relationships with F. oxysporum, G. graminis, A. niger and C. parasitica orthologs, the intracellular laccases show relationships with S. thermophile, N. crassa, S. macrospora, P. anserina, F. oxysporum and C. globosum orthologs. One may suggest the hypothesis that the phylogenetic closeness between these isoenzymes involves structural similarities in terms of the regions and amino acids discussed above for Trichoderma laccases. It is important to emphasize that all the proteins included in the phylogenetic analysis of Figure 3a were "curated" according to the same criteria as those of Trichoderma (Table 1) to ensure that they included only laccases sensu stricto.
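A distance-based tree like the one in Figure 3a can be reproduced in outline from the curated alignment. The sketch below uses Biopython's neighbor-joining constructor on an identity distance matrix; the file name is hypothetical, and the original study's two phylogenetic criteria and bootstrap procedure are likely more elaborate than this minimal version.

```python
# Sketch (Biopython): neighbor-joining tree from a Clustal alignment of
# curated laccases sensu stricto. A minimal stand-in for the analysis
# behind Fig. 3a; the alignment file name is hypothetical.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

aln = AlignIO.read("curated_laccases.aln", "clustal")   # hypothetical file
dm = DistanceCalculator("identity").get_distance(aln)   # pairwise distances
tree = DistanceTreeConstructor().nj(dm)                 # neighbor joining
Phylo.draw_ascii(tree)  # expect intracellular vs. extracellular clades
```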
Because of this curation approach, the phylogram generated here excludes proteins that were used in previous phylogenetic analyses of ascomycete laccases [12,25] but that are possibly other members of the MCO family. In the future, it will be important to apply this type of curation to obtain more robust phylogenetic patterns that provide a clearer idea of the evolutionary process of this enzymatic function in ascomycetes. This approach will allow for a better definition of laccases with similar functions among distinct species of this group of fungi. The phylogenetic separation of the subfamilies of laccases is fully congruent with the analysis of the sites and characteristic sequences of each gene subfamily discussed above.

Both the structure and number of genes found in the species of Trichoderma and the phylogenetic analysis suggest that the common ancestor of the genus possessed two laccases, one intracellular and the other extracellular. Comparative genomic analysis, together with the results of molecular phylogeny, leads to the conclusion that the ancestral state of Trichoderma/Hypocrea was mycoparasitic and that the genus later acquired saprophytic characteristics that helped it to pursue its prey through lignified substrates [27]. These studies have shown that T. atroviride is the earliest species within the genus, whereas T. virens and T. reesei appeared later. Furthermore, during the evolutionary process, T. reesei lost a significant quantity of the genetic information present in T. atroviride, which was retained in T. virens [27]. The number of Trichoderma laccase genes and the phylogenetic analysis of the proteins encoded by these genes are congruent with this scenario (Fig. 3b). With the generated data, it is possible to support the hypothesis that, during speciation, species such as T. atroviride and T. virens maintained the original number of laccase genes present in the common ancestor, while T. harzianum and T. reesei represent species that lost copies of the extracellular laccase gene.

On the other hand, several lines of evidence strongly suggest that the Tas_71665 gene of T. asperellum was acquired through a horizontal gene transfer event. This gene has the following structural features, which are discussed in the previous sections: i) it has the lowest GC content (46%) of all known laccase genes; ii) it has a unique structure with seven introns; iii) the genomic context of this gene differs from those of other laccase genes in T. asperellum; iv) no orthologous genes were found in the other analyzed Trichoderma species; v) compared to laccases in other genera of ascomycetes, this protein shows higher identity values (51.4-52.6%) with the laccases of B. fuckeliana, S. sclerotiorum and P. tritici-repentis; and vi) in the phylogenetic analysis, the protein encoded by this gene does not cluster together with the other laccases from Trichoderma. Most of these features have been recognized as evidence of horizontal gene transfer in fungi [67]. It has been previously suggested that in S. macrospora and P. anserina (order Sordariales) the acquisition of laccase genes occurred through horizontal transfer [12], with the author postulating the possibility of gene transfer from S. sclerotiorum to S. macrospora. Taken together, our results strongly suggest that T. asperellum acquired the Tas_71665 gene through horizontal gene transfer, either from one of the known necrotrophic phytopathogens (B. fuckeliana, S. sclerotiorum or P. tritici-repentis) or from a related fungus.
This is noteworthy because there is experimental evidence showing mycoparasitic activity of Trichoderma, although not specifically of T. asperellum, against B. fuckeliana and S. sclerotiorum [26], which indicates that this gene could have been acquired during the mycoparasitic process. This pattern of retention/loss/gain of laccase genes by groups of species in Trichoderma may reflect selective pressures associated with the lifestyle of each of these species. The phylogeny obtained shows that the hypothesis of two laccase genes in the common Trichoderma ancestor, with later events of retention, loss or gain, can be extended to the ascomycetes as a group. In fact, this asymmetrical pattern of change in gene families has been reported for family 28 of the ascomycete glycosyl hydrolases [68], where duplications and losses of genes in distinct groups of fungi within this subdivision are observed. Detailed studies of ascomycete genomes to locate more laccase isoenzymes and carry out a broader phylogenetic analysis would allow for the corroboration of this hypothesis.

Analysis of the evolutionary patterns of Trichoderma laccases shows that the proportion of synonymous substitutions for the majority of the enzymes varies between 56 and 78%, indicating saturation (Table 5). The exceptions were Tas_154312 and Ta_54145 (36%), as well as Tas_68620 and Ta_40409 (43%). The majority of the non-synonymous substitutions show values higher than 70%, also indicating saturation. A low percentage of non-synonymous substitutions, between 6 and 8%, is found in the pairs showing 36-37% synonymous substitutions. These values, together with the percentage non-synonymous/percentage synonymous (pn/ps) ratios, show that the majority of the Trichoderma extracellular laccases evolved under a process of purifying selection, with the exception of Tas_71665, whose values indicate neutral modifications (Table 5). In the case of the intracellular laccases, the majority of the values indicate neutral changes, although for the pair Tas_68620 and Ta_40409, purifying selection is shown. Levasseur et al. [25] found that the TrLAC gene, identified by those authors as a T. reesei laccase, evolved under positive selection. However, as mentioned above, our data suggest that this gene is not a laccase sensu stricto but rather another member of the MCO family. To date, there are no studies that would allow us to compare the evolutionary patterns of the Trichoderma laccases found in this study with those of other ascomycete laccases.
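The reading of Table 5 applied above follows the standard interpretation of the pn/ps ratio; a trivial sketch of that decision rule is shown below. The tolerance parameter is an illustrative simplification: real analyses test whether pn/ps differs significantly from 1 rather than using a fixed cutoff.

```python
# Sketch: selection regime implied by a pn/ps ratio, as read from
# Table 5 (purifying < 1, ~1 neutral, > 1 positive). The fixed
# tolerance stands in for a proper statistical test.
def selection_regime(pn: float, ps: float, tol: float = 0.1) -> str:
    ratio = pn / ps
    if ratio < 1.0 - tol:
        return f"pn/ps = {ratio:.2f}: purifying selection"
    if ratio > 1.0 + tol:
        return f"pn/ps = {ratio:.2f}: positive selection"
    return f"pn/ps = {ratio:.2f}: approximately neutral"

# Hypothetical substitution percentages for two gene pairs:
print(selection_regime(pn=7.0, ps=60.0))   # purifying selection
print(selection_regime(pn=38.0, ps=40.0))  # approximately neutral
```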
Figure 3. (A) [...] Trichoderma intracellular laccases and downward triangles extracellular ones. Green circles denote the laccases from M. albomyces [44] and T. arenaria [42]. (B) Phylogenetic tree of Trichoderma species showing gain, loss and retention of laccase genes. doi:10.1371/journal.pone.0055295.g003

Conclusions

The search for laccases sensu stricto in ascomycetes involves locating, in the amino acid sequence, motifs that are not present in basidiomycete laccases. The characterization of such motifs allows for a better description of the possible functional properties of the proteins, and the identification of ascomycete laccases sensu stricto is important for conducting robust phylogenetic analyses of this enzymatic function. The genus Trichoderma has preserved a limited number of functional laccase genes in the course of the evolutionary process. Structural and phylogenetic evidence suggests that the common ancestor of Trichoderma spp. had two laccase genes, one intracellular and the other extracellular. Species within the genus Trichoderma tend to preserve intracellular laccase activity, whereas the evolutionary patterns of extracellular activity are variable. In the case of T. asperellum, there is strong evidence of horizontal gene transfer through the mycoparasitic process. The data presented herein will contribute to the understanding of the functional role of laccases in the genus Trichoderma and to the optimization of their biotechnological applications.

Table 5. Synonymous and non-synonymous substitution rates (%) between laccase genes of Trichoderma spp.
2018-04-03T05:37:02.687Z
2013-01-31T00:00:00.000
{ "year": 2013, "sha1": "b327869dd896ca4c48926ab4cd9746da03242e17", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0055295&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b327869dd896ca4c48926ab4cd9746da03242e17", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
4952701
pes2o/s2orc
v3-fos-license
The human gut microbiota: Metabolism and perspective in obesity

ABSTRACT

The gut microbiota has been recognized as an important factor in the development of metabolic diseases such as obesity and is considered an endocrine organ involved in the maintenance of energy homeostasis and host immunity. Dysbiosis can change the functioning of the intestinal barrier and the gut-associated lymphoid tissues (GALT) by allowing the passage of structural components of bacteria, such as lipopolysaccharides (LPS), which activate inflammatory pathways that may contribute to the development of insulin resistance. Furthermore, intestinal dysbiosis can alter the production of gastrointestinal peptides related to satiety, resulting in an increased food intake. In obese people, this dysbiosis seems to be related to increases of the phylum Firmicutes, the genus Clostridium, and the species Eubacterium rectale, Clostridium coccoides, Lactobacillus reuteri, Akkermansia muciniphila, Clostridium histolyticum, and Staphylococcus aureus.

Introduction

The gut microbiota has recently been recognized as an important factor in the development of metabolic diseases and is considered an endocrine organ involved in the maintenance of energy homeostasis and host immunity. 1 Changes in the composition of the gut microbiota due to environmental factors may result in a change in the relationship between the bacteria and the host. This change can result in a low-grade chronic inflammatory process and in metabolic disorders such as those present in obesity. 2

The human gut microbiota consists of up to 100 trillion microbes that exist in a largely symbiotic relationship with their human hosts, carrying at least 150 times more genes (the microbiome) than the human genome. 3 Based on 16S rRNA-targeted molecular analyses, most bacteria detected in fecal samples from healthy human volunteers belong to two phyla, Bacteroidetes and Firmicutes. The gram-negative Bacteroidetes phylum includes the genera Bacteroides, Prevotella, Parabacteroides, and Alistipes, while the gram-positive Firmicutes includes species such as Faecalibacterium prausnitzii, Eubacterium rectale, and Eubacterium hallii, 4 as well as many other low-abundance species. The metabolism of some bacteria can facilitate the extraction of calories from the diet, increase fat deposition in adipose tissue, exacerbate hepatic inflammatory processes, and provide energy and nutrients for microbial growth and proliferation. 5,6 Several microbial genes involved in human metabolism are enriched or depleted in the guts of obese humans. 7 Obese people tend to have a higher proportion of genes that encode membrane transport functions 8 and that are involved in butyrate production, 9 whereas genes related to cofactor, vitamin, and nucleotide metabolism or transcription are more frequently depleted. 8

Considering this influence of the gut microbiome on the onset and progression of obesity as well as its consequences, knowledge about the gut microbiota could contribute to the development of adjuvant treatments that beneficially modulate obesity. Some studies have already evaluated the gut microbiota composition of obese individuals; however, the characterization of this microbiota is still not well established, and some results are discordant. Here, we present a review of the physiology and composition of the human gut microbiota with a focus on obese individuals.
We divided our review into two topics: the physiology of the gut microbiota and the composition of this microbiota in obese patients.

Methods

In order to discuss the gut microbiota composition of obese individuals, we undertook a systematized literature search that included observational studies (cross-sectional, cohort, or case-control) and experimental studies. The following exclusion criteria were used to reduce possible relationships observed due to other comorbidities: diabetes, intestinal diseases, cancer, experimental studies, and studies that supplemented gut microbiota modulators. The literature search was performed in the MEDLINE and Scopus databases, and the references of the studies obtained were scanned for other relevant articles that may not have been detected by the primary search. Only studies published in English in the last 10 years were considered for review. The search used Medical Subject Headings (MeSH) terms. Methodological quality was assessed using the STROBE recommendations (Strengthening the Reporting of Observational Studies in Epidemiology Statement), with separate checklists for case-control studies, cohort studies, and cross-sectional studies, and the CONSORT recommendations (Consolidated Standards of Reporting Trials), using a checklist of items for reporting trials of nonpharmacological treatments. The final system was a combination of STROBE and CONSORT. We also conducted a narrative review of the subject covering the following topics: the function of the gut microbiota on the development of lymphoid structures, on the immune system, on nutrient and lipid metabolism, and on the hormones involved in food intake, and gut microbiota and obesity: future perspectives. No restrictions were placed on the year of publication in this section.

Physiology of the gut microbiota

The gut microbiota harbors incredibly large microbial and genetic diversity, with distinct species associated with specific parts of the gastrointestinal tract. The stomach contains about 10¹ microbial cells per gram of content; the duodenum contains about 10³ cells; the jejunum, 10⁴ cells; the ileum, 10⁷ cells; and the colon, 10¹² microbial cells per gram of contents. 10 Therefore, the quantity of bacteria increases from the proximal to the distal portions of the gastrointestinal tract. Notably, the large intestine contains more than 70% of all microorganisms in the body, which are usually associated with the health/disease of the host. 11 In addition, the diversity of bacteria is higher in the lumen and lower in the mucus layer. 12

The high numbers of bacteria in the gastrointestinal tract result in biochemical diversity and metabolic activity that interact with host physiology. These microorganisms can facilitate the metabolism of nondigestible polysaccharides, produce essential vitamins, and play an important role in the development and differentiation of the intestinal epithelium and the host immune system. 13 Most species are anaerobic and belong to two phyla: Firmicutes and Bacteroidetes. Bacteria belonging to the phyla Proteobacteria, Verrucomicrobia, Actinobacteria, Fusobacteria, and Cyanobacteria are widely spread in human populations, but at much lower abundance. 14 Although controversial, the Firmicutes-to-Bacteroidetes ratio has been investigated and associated with the predisposition to diseases. 15
Moreover, a low abundance of the phylum Proteobacteria associated with a high amount of the genera Bacteroides, Prevotella, and Ruminococcus has been associated with a healthy intestinal microbiota. 16 The maintenance of a healthy gut microbiota is important for a symbiotic relationship with the host.

Function of the gut microbiota on the development of lymphoid structures

The lymphatic system consists of a set of lymphatic vessels that interconnect primary to secondary lymphoid organs. Recirculation of the interstitial fluid and the transport of lymphocytes and antigen-presenting cells occur through this system. These immune cells are produced in the primary lymphoid tissues (thymus and bone marrow) and are activated in the secondary lymphoid tissues (spleen, lymph nodes, and mucosa-associated lymphoid tissue (MALT)). 17 Among the MALT, the gut-associated lymphoid tissues (GALT) are non-encapsulated tissues composed of Peyer's patches, isolated lymphoid follicles, and crypt plaques 18 that begin to form during embryogenesis, when the environment is sterile. At this stage, the mesenchymal cells are induced by retinoic acid to produce the chemokine (C-X-C motif) ligand 13 (CXCL13), which attracts the human lymphoid tissue inducer (LTi) cells. Mature LTi cells induce differentiation of stromal cells and attract immune cells, which form the GALT. 17 The maturation of this tissue depends on microbial colonization after birth. 19 The stromal and epithelial cells recognize bacterial peptidoglycan through the signaling pattern recognition receptors (PRR), nucleotide-binding oligomerization domain-containing protein 1 (NOD1), and Toll-like receptors (TLRs). Activation of these receptors by the gut microbiota increases the expression of CC chemokine ligand 20 (CCL20) and β-defensin 3 (HBD3), which activate the formation of isolated lymphoid follicles through the binding of chemokine receptor 6 (CCR6) on LTi cells. 20 Changes in the microbial composition, which happen in obese individuals, can further disrupt the integrity of the intestinal barrier promoted by GALT, leading to pathological bacterial translocation and the initiation of an inflammatory response. 21

Function of the gut microbiota on the immune system

Besides acting on the maturation of GALT, the commensal bacteria also prevent intestinal colonization by pathogens. The gut microbiota improves the function of the epithelial barrier, while its absence decreases the production of antimicrobial peptides by Paneth cells; this causes intestinal barrier dysfunction and increases bacterial translocation. 22 Furthermore, bacteria-induced myeloid differentiation factor 88 (MyD88) signaling in the intestine increases epithelial cell IgA secretion. In addition, bacterial flagellin activates Toll-like receptor 5 (TLR5) on dendritic cells and promotes the differentiation of B lymphocytes into IgA-producing cells. 23 IgA binds to microbial antigens, neutralizes the activity of the pathogens, and prevents infection. 24 Commensal bacteria modulate the innate immune response of the host by stimulating the production of homeostatic levels of pro-IL-1β by resident macrophages so that the response of these cells to an enteric infection occurs more rapidly. 25 The protective role of IL-1β in intestinal immunity is mediated by the induction of expression of endothelial adhesion molecules, which contribute to neutrophil recruitment and destruction of pathogens in the gut. 26
Besides that, modulation of natural killer (NK) T cells is also performed by commensal bacteria. NK T cells are a subset of T cells that simultaneously express both T cell receptors (TCR) and NK cell receptors. These cells promote inflammation through the secretion of the cytokines IL-2, IL-4, IL-13, IL-17A, IL-21, tumor necrosis factor (TNF), and interferon-γ (IFN-γ). 27 Maintenance of homeostasis of these cells prevents an exaggerated inflammatory reaction. 28 Also, an increase in inflammation has been associated with an increase in obesity-associated diseases, such as cardiovascular disease 29 and type 2 diabetes. 30

Intestinal dysbiosis (changes in gut microbiota composition) can be related to the triggering of a persistent low-grade inflammatory response in obese individuals. Lipopolysaccharides (LPS) contain lipid A, which can cross the intestinal mucosa through tight junctions or with the aid of chylomicrons. 31,32 Lipoproteins are responsible for the absorption and transport of dietary triglycerides and could thus initiate an inflammatory process that could result in the insulin resistance often observed in obesity. 31,32 In the systemic circulation, LPS causes an innate immune response in liver and adipose tissue. This occurs through the binding of LPS to the LPS-binding protein (LBP), which activates the CD14 receptor. 32 This complex binds to Toll-like receptor 4 (TLR4) on macrophages and adipose tissue, resulting in a signaling pathway that activates the expression of genes encoding pro-inflammatory proteins, such as nuclear factor kappa B (NF-κB) and activator protein 1 (AP-1). 32,33 LPS concentrations are low in healthy people but may reach high concentrations in obese individuals and cause metabolic endotoxemia. 31 This metabolic endotoxemia is related to the development of insulin resistance. 34 The molecular mechanisms that relate the activation of TLR4 by LPS with insulin resistance still need to be clarified, but evidence indicates that they involve alteration of insulin receptor signaling by the presence of inflammatory cytokines. 35

Function of the gut microbiota on nutrient metabolism and lipid metabolism

The gut microbiota derives its nutrients from the fermentation of carbohydrates ingested by the host. Bacteroides, Roseburia, Bifidobacterium, Faecalibacterium, and Enterobacteria are among the bacterial groups that typically ferment undigested carbohydrates and synthesize short-chain fatty acids (SCFA), 36 such as acetate, butyrate, and propionate. A significant amount of acetate enters the systemic circulation and reaches the peripheral tissues, while propionate is mainly used in the liver, and butyrate is used in the intestinal epithelium as an energy source. 37 The total and relative concentrations of SCFA depend on the fermentation site, the carbohydrate consumed, and the composition of the gut microbiota. 38 In addition to synthesizing vitamin K and vitamin B components, several species belonging to the Firmicutes and Actinobacteria phyla are conjugated linoleic acid (CLA) producers. 39 CLA is a mixture of positional and geometric isomers of linoleic acid shown by some studies to have anti-obesity properties, such as an increase in energy metabolism and expenditure, a decrease in adipogenesis, a decrease in lipogenesis, and an increase in lipolysis and adipocyte apoptosis. 39
The biological effects of CLA have been attributed to two possible mechanisms of action: 1) CLA displaces arachidonic acid from cell membrane phospholipids, which decreases the synthesis of arachidonic acid-derived eicosanoids such as prostaglandins and leukotrienes involved in inflammation, 40 and 2) CLA mediates activation of transcription factors such as peroxisome proliferator-activated receptors (PPARs), which impact cell processes such as lipid metabolism, apoptosis, and immune function. 40

Short chain fatty acids

The gut microbiota of obese mice had a higher amount of genes encoding enzymes involved in carbohydrate metabolism and a greater capacity to extract energy from the diet and to produce SCFA when compared to non-obese mice. 41 In addition, germ-free mice were resistant to diet-induced obesity. 42 SCFAs bind to G protein-coupled receptors (GPCR41 and GPCR43). 36 Acetate binds primarily to GPCR43, propionate binds to both GPCR41 and GPCR43, and butyrate binds to GPCR41. GPCR41 and GPCR43 receptors are expressed in the intestinal epithelium 37 and in adipose tissue. 36 The presence of GPCRs in adipose tissue suggests that this tissue is an important target for the metabolites produced by the gut microbiota. One study identified that rats fed a high-fat diet had higher GPCR43 expression in adipose tissue; in vitro, SCFA increased the expression of PPARs, an important mediator of adipogenesis. 43 SCFAs bound to GPCR41 stimulate the expression of leptin in adipocytes, and those that bind to GPCR43 appear to stimulate adipogenesis. 44 Thus, the profile of fatty acids produced may be related to the development of obesity. However, further investigations should be performed to confirm these results in humans.

Lipid metabolism

The endocannabinoid system is expressed in tissues that control energy balance (pancreas, muscle, gut, fat, liver, and hypothalamus) and regulates feeding behavior and metabolism. 45 This system is composed of bioactive lipids that bind to cannabinoid receptors, which results in cell signaling. The best characterized of these lipids are anandamide (AEA) and 2-arachidonoylglycerol (2-AG), 46 which activate the G protein-coupled cannabinoid receptors CB1 and CB2, as well as the PPARα, GPR55, and GPR119 receptors. 47 The modulation of the gut microbiota or the reduction of CB1 activation improves the integrity of the intestinal barrier and reduces metabolic endotoxemia and low-grade inflammation. 47 Metabolic endotoxemia increased adipocyte hyperplasia and recruitment of macrophages into adipose tissue in a CD14-dependent pathway and increased the production of activin A, which activated the proliferation of adipocyte precursor cells. In addition, the consumption of a high-fat diet caused endotoxemia and favored the development of metabolic diseases, suggesting that components of gut bacteria can remodel adipose tissue. 48 The control of this mechanism could prevent the development of obesity and its comorbidities. 48 In addition to altering the adiposity process, the microbiota acts at many levels, from lipid processing and absorption to systemic lipid metabolism. 49,50 This can be explained by the assimilation of cholesterol by bacterial cells, binding of cholesterol to bacterial cell walls, inhibition of hepatic cholesterol synthesis, redistribution of cholesterol from the plasma to the liver through the action of SCFA, and/or deconjugation of bile acids by hydrolysis. 51
Evidence also suggests a link between dysbiosis and pathological changes in the metabolism of deconjugated bile acids in obese patients. 52 Bacterial bile salt hydrolase (BSH) enzymes in the gut cleave the amino acid side chain of glyco- or tauro-conjugated bile acids to generate unconjugated bile acids (cholic and chenodeoxycholic acids), which are then amenable to further bacterial modification to yield secondary bile acids (deoxycholic and lithocholic acid). 53 Secondary bile acids bound to cellular receptors, such as the G protein-coupled receptor TGR5, 54 and reduced macrophage inflammation and lipoprotein uptake, resulting in less atherosclerotic plaque formation and decreased development of atherosclerosis. 55

Function of the gut microbiota on the hormones involved in food intake

The gut microbiota has been implicated in the control of food intake and satiety through gut peptide signaling, where bacterial products activate enteroendocrine cells by modulating enterocyte-produced paracrine signaling molecules. 56 The gut microbiota may increase production of certain SCFA, which have been shown to be associated with an increase in peptide YY (PYY), 57 ghrelin, insulin, and glucagon-like peptide-1 (GLP-1) production. 58 Ghrelin was negatively correlated with Bifidobacterium, Lactobacillus, and B. coccoides/Eubacterium rectale, and positively correlated with Bacteroides and Prevotella. 59 Ingestion of oligofructose, a prebiotic that promotes the growth of Bifidobacterium and Lactobacillus, decreased the secretion of ghrelin in obese humans. 60

GLP-1 is also modulated by the gut microbiota and is responsible for controlling food intake and insulin secretion. The concentration of this hormone was lower in obese individuals than in eutrophic individuals. 61,62 Butyrate produced by intestinal bacteria was present in smaller amounts in obese individuals 63 and regulated energy homeostasis by stimulating adipocytes to produce leptin and by inducing GLP-1 secretion by L cells. 64 At least in mice, modulation of the gut microbiota by probiotics increased the production of butyrate by commensal bacteria, inducing the production of GLP-1 by intestinal L cells and thus reducing adiposity. 65 In addition, the gut microbiota may favor the formation of specific bile acids that activate TGR5 receptors. Intestinal bacteria dehydrate chenodeoxycholic acid 66 and produce lithocholic acid, which binds to TGR5 67 and increases energy expenditure in brown adipose tissue and GLP-1 secretion through activation of the intestinal L cells, 54 thus preventing obesity and insulin resistance. 68 Insulin concentrations also appear to be altered in accordance with the gut microbiota. 69 Gut microbiota transplantation from lean subjects to patients with metabolic syndrome increased insulin sensitivity. 70 This effect is probably related to the reduction of the chronic low-grade inflammation resulting from LPS translocation and, consequently, to greater activation of the insulin signaling cascade. 71 Like GLP-1, PYY is also produced by intestinal L cells, in the forms PYY1-36 and PYY3-36, the latter being present in higher concentrations in the postprandial period, causing a sensation of satiety. 72 Obese individuals produced less PYY3-36, and no resistance to the hormone was observed; Batterham et al. 73 found a 30% reduction in food intake 90 minutes after the infusion of PYY3-36 in obese individuals, a value similar to that in eutrophic patients.
The modulation of the gut microbiota with a prebiotic (oligofructose) in healthy subjects resulted in increased bacterial fermentation, improved glucose tolerance, and reduced appetite through increased concentrations of GLP-1 and PYY, 74 probably due to a mechanism associated with the production of propionate by intestinal bacteria. 75 Therefore, the gut microbiota is also related to the development of obesity through its possible capacity to alter food intake.

The human gut microbiota composition in obesity

Phyla changes after weight loss

A higher Firmicutes-to-Bacteroidetes ratio related to obesity was observed in obese children when compared to normal-weight children, 76,77 in overweight/obese women with metabolic syndrome when compared with overweight/obese women without metabolic syndrome, 78 and in Japanese overweight individuals when compared with non-overweight individuals. 79 Furthermore, the Firmicutes phylum has been shown to be negatively correlated with resting energy expenditure (REE) as well as positively correlated with fat mass percentage. 80 A crossover clinical trial observed that a 20% increase in Firmicutes phylum abundance was associated with an increase of 150 kcal in energy harvest. 81 Finally, one study reported a decrease in the Firmicutes-to-Bacteroidetes ratio after weight loss by obese individuals (Table 1). 15

Obese individuals seem to have lower Bacteroidetes counts than normal-weight individuals. 79,82,83 On the other hand, two studies associated the Bacteroidetes phylum with weight gain in pregnant women. 84,85 A cross-over study with 29 subjects did not find differences in the proportion of Bacteroidetes between obese and non-obese individuals (Table 1). 86

A decrease in Firmicutes was observed after Roux-en-Y gastric bypass (RYGB) 87,88 and after laparoscopic sleeve gastrectomy (LSG). 89 In contrast, Bacteroidetes counts increased after RYGB and LSG, 88,89 but after a very low-calorie diet, 89 this phylum decreased with a concomitant increase in the Firmicutes phylum. A decrease in the Firmicutes-to-Bacteroidetes ratio after diet therapy was also observed, and the Bacteroidetes proportion was positively correlated with the percentage of body fat loss (Table 1). 15
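The Firmicutes-to-Bacteroidetes ratio discussed in this section is a simple statistic derived from phylum-level relative abundances (for example, from 16S rRNA profiling). A minimal sketch, with abundances that are hypothetical rather than from any cited study:

```python
# Sketch: Firmicutes-to-Bacteroidetes (F/B) ratio from phylum-level
# relative abundances. Values below are hypothetical.
def fb_ratio(abundances: dict) -> float:
    return abundances["Firmicutes"] / abundances["Bacteroidetes"]

sample = {"Firmicutes": 0.62, "Bacteroidetes": 0.25,
          "Actinobacteria": 0.08, "Proteobacteria": 0.05}
print(f"F/B ratio: {fb_ratio(sample):.2f}")  # 2.48
```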
Obesity-related genus changes

The genera Staphylococcus 77,84,85,90,91 and Clostridium 84,85,89,92 have been shown to be positively associated with obesity. A decrease in the genus Faecalibacterium was reported after LSG, 89 while the same genus increased after RYGB. 93 All these genera belong to the Firmicutes phylum (Table 1). The Firmicutes phylum contains many butyrate-producing species, and an increase in butyrate and acetate synthesis may contribute to an increase in energy harvest in obese people. 15,94 Furthermore, acetate can be absorbed and used as a substrate for lipogenesis and gluconeogenesis in the liver. 49 The genus Bacteroides, which belongs to the phylum Bacteroidetes, was shown to have an inverse relationship with obesity in overweight/obese women with metabolic disorder, 78 after RYGB, 88,93 and after LSG (Table 1). 89 Bifidobacterium, which belongs to the phylum Actinobacteria, was also shown to have an inverse relationship with obesity in pregnant women, 84,90 children, 91 and infants of normal-weight mothers; 85 however, this genus was decreased in individuals subjected to RYGB. 88,93

Bifidobacterium species have been shown to deconjugate bile acids, which may decrease fat absorption. 95 In contrast, strains of the same species can have contradictory effects, as it has been shown that different Bifidobacterium strains might increase (strain M13-4) or decrease (strain L66-5) body weight. 96 Methane-producing archaea (methanogens) have been shown to affect caloric harvest by increasing the capacity of polysaccharide-eating bacteria to digest polyfructose-containing glycans, which leads to increased weight gain in mice. 41 A study demonstrated that humans with methane detectable via a breath test have a significantly higher body mass index (BMI) than methane-negative controls (Table 1). This implies a higher amount of M. smithii in obese individuals, which was not observed in studies assessing gut archaeal populations. 97

Gut microbiota and obesity: future perspectives

Although several links have been reported between the gut microbiome and obesity (Table 2), the mechanisms that explain how and when the microbiome affects the obese state are not yet understood. Most studies investigating the relationships between obesity and the gut microbiome use very small sample sizes and a variety of analytical methods to infer the intestinal microbial composition. Such factors are likely responsible for the considerable heterogeneity observed in the results. For instance, different DNA extraction kits have an impact on the assessment of the human gut microbiota, making it difficult to compare data across studies. 98

Probiotics, prebiotics, and antibiotics have been evaluated, and they may become new therapeutic possibilities for the treatment of obesity. Oral supplementation with probiotics seems to reduce the concentrations of low-density lipoproteins (LDL) and total cholesterol; to ameliorate atherogenic indices; to improve glycemic control; 99 to reduce body weight, waist circumference, BMI, and abdominal visceral adipose tissue; 100 to improve body composition; 101 and to reduce the concentrations of pro-inflammatory markers such as interleukin 6 (IL-6) and TNF-α. 102 Prebiotics also have been shown to contribute to weight loss and improve metabolic parameters, including insulin resistance. 60 Nevertheless, modulations performed with probiotics show results only for specific strains and for the period evaluated, with little data available regarding long-term benefits. In addition, the different ways in which different hosts can react to supplementation make it impossible to generalize. In the future, the modulation of the gut microbiota may be a way of assisting in the treatment of obesity, but for this idea to become a reality, there is a need to understand the metabolic interactions between the modulated bacteria and the host.
Conclusions
Although there is a large amount of heterogeneity in the available data, the following conclusions can be drawn from this literature review: 1) obesity is characterized by the presence of intestinal dysbiosis, marked by the distinct microbiome profiles of obese and non-obese individuals; 2) the resulting dysbiosis could change the functioning of the intestinal barrier and the GALT, allowing the passage of structural components of bacteria, such as LPS, and activating inflammatory pathways that may contribute to the development of insulin resistance through the alteration of insulin receptor signaling by inflammatory cytokines; 3) intestinal dysbiosis could alter the production of gastrointestinal peptides related to satiety, resulting in increased food intake and contributing to a self-sustaining cycle; and 4) lipid metabolism could be altered by the changes observed in the gut microbiome, resulting in a stimulus to increase body adiposity (Fig. 1). Understanding the changes occurring in the gut microbiome of obese individuals and the physiological consequences of these changes is a necessary step in creating modulation strategies that can be used to help treat this condition.

Disclosure of potential conflicts of interest
No potential conflicts of interest were disclosed.
The Role of Nutrition and Literacy on the Cognitive Functioning of Elderly Poor Individuals Maintaining cognitive function is a prerequisite for living independently, which is a highly valued component of older individuals' well-being. In this article we assess the role of early-life and later-life nutritional status, education, and literacy on the cognitive functioning of older adults living in poverty in Peru. We exploit the baseline sample of the Peruvian noncontributory pension program Pension 65 and find that current nutritional status and literacy are strongly associated with cognitive functioning for poor older adults. In a context of rising popularity of noncontributory pension programs around the world, our study intends to contribute to the discussion of designing accompanying measures to the pension transfer, such as adult literacy programs and monitoring of adequate nutrition of older adults.

Introduction
In recent years, several low- and middle-income countries have implemented noncontributory pension programs as a way to fight poverty in old age. In general, these programs transfer small pensions (means-tested or universal) to older adults who do not have a pension from the contributory pension system. The increasing popularity of these programs represents an important shift in the provision of social protection for old age. For example, many Latin American countries have implemented these programs as a way to rapidly expand old-age protection in economies with large informal labor markets where the coverage of traditional pension systems is low. 1 These programs can be costly (Aguila, Mejia, Perez-Arce, Ramirez, & Rivera Illingworth, 2016), but many positive outcomes have been found on the well-being of older people with regard to health (Aguila, Kapteyn, & Smith, 2015; Galiani, Gertler, & Bando, 2016) and social and family support networks (Case & Menendez, 2007; Edmonds, Mammen, & Miller, 2005). Without denying the importance of having a secure stream of income in old age, there are other aspects of the individual's well-being that will not necessarily improve with a small cash transfer. For example, keeping good cognitive functioning in old age is crucial for the autonomy of older individuals, which is a highly valued dimension of their well-being. Although the decline of cognitive functioning is part of the aging process, there are some protective measures that can help to retard this decline. Some of these measures could eventually be considered in the design of accompanying benefits and interventions to the pension transfer. In this study, we use a representative sample of older adults living in poverty in Peru and identify a significant role of nutritional status and literacy on the cognitive functioning of older adults. Therefore, our study intends to contribute to the discussion about designing complementary benefits to noncontributory pensions, such as adult literacy and nutritional programs. Intervention programs designed to improve literacy and nutritional status have been shown to be effective in maintaining cognitive health in late life. Indeed, while increasing literacy in children and adults is a goal in itself and supported by the Sustainable Development Goals agenda for 2030 (Hanemann, 2015), illiteracy has additional severe economic, social, and well-being consequences, in particular for older adults (Roman, 2004).
Additional benefits of literacy for maintaining cognitive health can be expected: illiteracy is thought to contribute substantially to the development of dementia in both developing and developed countries, with an estimated magnitude of around 16% in South Korea, for instance (Suh et al., 2016). Regarding adult literacy programs in developing countries, Abadzi (2003) discusses the characteristics shared by successful interventions for increasing adult individuals' cognitive skills. The author shows that interventions combining inputs from neuropsychology (e.g., the Neuroalfa literacy method, the REFLECT methodology) with an adequate environment for learning (in terms of course duration, classroom conditions, materials, teacher preparation, etc.) are particularly effective in improving adults' cognitive skills, such as memory, attention, verbal skills, and hypothesis formation. However, there is little evidence available on the possibilities of improving literacy in older adults and the associated possible benefits for older-age cognitive outcomes. Our study intends to provide observational evidence toward testing the possible benefits of literacy programs on older-age cognitive outcomes. Regarding nutritional interventions, Hughes and Ganguli (2009) show evidence of different nutritional interventions positively affecting cognitive functioning in old age. Their article discusses empirical studies in which different exposures to micronutrients (e.g., polyphenols and antioxidants), macronutrients (e.g., fat, protein), and dietary patterns (e.g., the Mediterranean diet) affect cognitive health. Similarly, Van Dyk and Sano (2007) review different interventions and controlled trials seeking to predict the effects of nutrition and different diets on cognition in older people. They mention that some types of diets (e.g., whole grains, natural sugar, and fish) may protect against cognitive decline. Interventions related to nutrition manipulation are generally carried out in institutions such as old-age homes and not in large-scale programs such as noncontributory pension schemes. The Nutrition Transfer for Senior Adults program (Pension Alimentaria Para Adultos Mayores) was implemented in Mexico City in 2001 to provide cash on a debit card for older residents (aged 70 or older) of poor areas to buy food in associated grocery shops. However, the purchase of other items was also permitted, and the program became a more standard noncontributory pension scheme for older people in Mexico City (Juarez, 2009). It has been found that early childhood experiences like poverty and deprivation are associated with lower cognitive function at older ages (Case & Paxson, 2008; Guven & Lee, 2013) and that, in general, cognitive impairment and dementia may have their origins in the early-life environment (Glymour, Kosheleva, Wadley, Weiss, & Manly, 2011). In particular, inadequate nutrition may hinder the building up of cognitive reserve at early ages and increase the risk of old-age cognitive impairment and dementia (Melrose et al., 2015; Whalley, Dick, & McNeill, 2006). A recent study showed that exposure to famine at early ages increased cognitive aging (Kang et al., 2017).
Proxies for early-life nutritional status such as height, arm span, limb length, and other anthropometric measures have been shown to be associated with cognitive function at older ages, with arm span (available in our data) being a valid and reliable proxy for other anthropometric measures as well as an adequate marker for childhood development and nutritional status (Huang et al., 2008; Jeong et al., 2005; Kim, Stewart, Shin, & Yoon, 2003; Maurer, 2010; Patel et al., 2011). Due to potentially different early-life conditions of women and men with regard to length of schooling and nutrition in the parental household, one may assume gender differences in the links among nutrition, literacy, and cognitive function. Similarly, later-life nutritional status has been shown to be associated with cognitive impairments (Suominen et al., 2005). Our data set includes the Mini Nutritional Assessment (MNA), which is an instrument utilized to evaluate the risks of undernutrition and malnutrition of older individuals (Guigoz, 2006; Harris & Haboubi, 2005; Vellas et al., 1999). The challenges of aging populations in developing countries require more knowledge about successful aging (García-Lara, Navarrete-Reyes, Medina-Méndez, Aguilar-Navarro, & Ávila-Funes, 2016). One substantial difference between developed and developing countries is the level of schooling of today's older cohorts. Education is a powerful and independent indicator of cognitive reserve and is strongly associated with later-life cognitive function (Stern, 2002). The influence of education in modifying the relationship between nutritional status and cognitive function has rarely been investigated in samples of individuals with low education levels. In our sample, only 21% of individuals have completed primary education or more, and the rate of illiteracy measured with a reading test is 27%. This setting allows estimation of the extent to which early- and late-life nutritional status and cognitive function are directly associated at older ages, both with and without the influence of education and literacy. We argue that education is protective of later cognitive function and that literacy may drive this relationship.

Background
Peru's economy has substantially improved in the last decade. For instance, the national poverty rate (measured with poverty lines for household consumption per capita) decreased from 58.7% in 2004 to 21.8% in 2015. Educational attainment also improved in the same period. However, large inequalities still persist in the country. For instance, the poverty rate in 2015 in rural areas was three times that in urban areas (45% vs. 15%, respectively), and educational attainment in rural areas, on average, was 3 years lower than in urban areas. Looking at the education level achieved by different age groups in Peru's National Household Survey (ENAHO) of 2012, which is the year of collection of our sample, we observe some acute differences between individuals living in rural and urban areas. In rural areas 33%, 21%, 16%, and 12% of individuals aged 60 to 64, 65 to 69, 70 to 74, and 75 to 79, respectively, have at least completed primary education, while these percentages are 75%, 66%, 60%, and 55% in urban areas. Women living in rural areas had the lowest education level. Only 23%, 17%, 9%, and 5% of women aged 60 to 64, 65 to 69, 70 to 74, and 75 to 79, respectively, have at least completed primary education, while these percentages are 48%, 34%, 27%, and 21% for men in rural areas.
In a study utilizing the ENAHO 2011, Olivera and Clausen (2014) describe that about 74% of the total elderly population (aged 65 and older) do not receive any type of pension and that this percentage is 99% and 94% among elderly individuals who are considered extreme poor and non-extreme poor, respectively. This can explain why a large share of the old population keeps working at advanced ages: about 50% and 90% of the population aged 65 or older still work in urban and rural areas, respectively. The low levels of pension coverage are explained by the large informal market operating in Peru, in which social security contributions are not compulsory. The noncontributory pension program Pension 65, which gives a bimonthly transfer of 250 Soles (about $77) to eligible individuals, was introduced at the end of 2011 as a new instrument to fight poverty in old age. As of today, this program has half a million recipients and costs about 0.12% of GDP each year. To be eligible for this program, individuals must be aged at least 65, not be covered by social security, and live in a household officially classified as extremely poor. As mentioned in the introduction, we use the sample of the baseline survey conducted at the end of 2012, which is intended to be used to evaluate the program. It is worth mentioning that there are ethnic differences in access to education and other services, as indigenous groups have systematically suffered from social exclusion, particularly groups living in the highlands of Peru. Due to the eligibility criterion of living in extreme poverty, most of the recipients of Pension 65 are located in rural areas and are indigenous. In our sample, 62% of the individuals live in rural areas and the rest in urban areas; 70% of the individuals have an indigenous mother tongue. Indeed, our study is also interesting because we are looking at the relationship among cognition, nutrition, and education in a population of individuals who have experienced cumulative deprivations in many dimensions. For example, Dell (2010) illustrates the long-term effects of mandatory mining work in Peru's highlands on the current health of indigenous people. Other hardships suffered by the generation in our sample are that the illiterate were not allowed to vote in political elections before 1980 and that the Agrarian Reform Bill (Reforma Agraria) was only implemented during the early 1970s. This major redistribution of land represented the end of the Haciendas system, in which an impoverished labor force of peasants was attached to rural estates.

Sample and procedures
The data come from the Survey of Health and Wellbeing of the Elderly (Encuesta de Salud y Bienestar del Adulto Mayor [ESBAM]). This survey was commissioned by Peru's Ministry of Development and Social Inclusion, financed by the Ministry of Economy and Finance, and collected by the National Institute of Statistics of Peru with the aim of serving as the baseline of the noncontributory pension program Pension 65. The data include information on socioeconomic variables, health, nutrition, cognitive functioning, anthropometrical measures, and biomarkers. The sample is representative of households with at least one member aged 65 to 80 and whose socioeconomic classification score lies in the vicinity of the official threshold that determines extreme poverty. The data were gathered in 12 (out of 24) departments of Peru. In Peru, households receive an official socioeconomic score in order to determine eligibility for social assistance.
This is the so-called targeting score SISFOH (Sistema de Focalizacion de Hogares), based on the household's material conditions, assets, incomes, household size, and the labor status and schooling of its members. Households can be classified as non-poor, non-extreme poor, or extreme poor by comparing their SISFOH score with official regional poverty thresholds. 2 The program Pension 65 requires that recipients live in a household classified as extreme poor. The original sample consists of 4,151 individuals, but after dropping observations with missing values in our covariates of interest, the final sample is composed of 3,910 individuals.

Cognitive function
Cognitive function is assessed with a reduced version of the Mini-Mental State Examination (MMSE) (Folstein, Folstein, & McHugh, 1975), which is similar to that of the Survey on Health and Well-being of Elders (SABE) conducted in capital cities of Latin America and the Caribbean in 1999 and 2000 (Pelaez et al., 2005). This instrument was designed to take into account the low literacy levels predominant among Latin American older adults (Maurer, 2010). The MMSE score of SABE ranges from 0 to 19 points, but the score available in ESBAM ranges from 0 to 14 points because it does not include the backward counting test. In ESBAM, five components of cognitive functioning are assessed. Orientation assesses correct answers on day of month, month, year, and day of week; immediate recall is assessed by asking the respondent to repeat three words that were read out loud. Delayed recall is assessed by asking the respondent to repeat the same three words after a certain delay; each correct word receives one point. Immediate and delayed recall are summarized into an episodic memory score (0 to 6 points). The respondent then has to follow three actions: "I will give you a piece of paper. Take it with your right hand, fold it in half with both hands, and place it on your legs." Each correct action receives one point. Drawing is assessed by the request to draw two intersecting circles, provided that the circles do not overlap by more than half. Orientation, action, and drawing are summarized into a mental intactness score (0 to 8 points). The distinction between episodic memory and mental intactness is similar to the one made by Lei, Smith, Sun, and Zhao (2014) with Chinese data. The composition of the score is sketched below.
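To make the construction of the outcome variable concrete, here is a minimal sketch of how the five components combine into the two sub-scores and the 0-14 total. The function and argument names are our own invention for illustration, not the authors' code.

```python
# Hedged sketch: reduced-MMSE scoring as described above.
# episodic memory (0-6) = immediate + delayed recall;
# mental intactness (0-8) = orientation (0-4) + actions (0-3) + drawing (0-1).
def mmse_reduced(orientation, immediate_recall, delayed_recall, actions, drawing):
    """Return (episodic_memory, mental_intactness, total) sub-scores."""
    assert 0 <= orientation <= 4 and 0 <= actions <= 3
    assert 0 <= immediate_recall <= 3 and 0 <= delayed_recall <= 3
    assert drawing in (0, 1)
    episodic_memory = immediate_recall + delayed_recall    # 0-6 points
    mental_intactness = orientation + actions + drawing    # 0-8 points
    return episodic_memory, mental_intactness, episodic_memory + mental_intactness

print(mmse_reduced(orientation=4, immediate_recall=3, delayed_recall=2,
                   actions=3, drawing=1))  # -> (5, 8, 13)
```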
Education
Given the low level of education attainment in the sample, education is measured with dummy variables indicating the levels "no education," "uncompleted primary education," and "primary education or more." The number of years of education is also used in the analysis. In ESBAM, individuals who reported uncompleted primary education, preprimary education, or no education as their highest level of schooling were also asked whether they are illiterate ("do you know how to write and read?"). The individual was then requested to read a paragraph of 20 words to validate the self-reported answers. A dummy variable illiterate takes value 1 if the reading test was failed and zero otherwise. The reading test was not collected for 108 individuals, so illiterate is based on the self-reports of these individuals.

Nutritional status
The nutritional status experienced in early life is assessed by arm span, which is an indicator similar to leg length (Jeong et al., 2005) and more reliable than height measures, which can be biased due to shrinkage in old age (Huang, Lei, Ridder, Strauss, & Zhao, 2013). Arm span has been found to be an effective surrogate measure for height and an indicator of nutritional status in an Indian sample (Datta Banik, 2011). The current nutritional status in old age is assessed by an adapted version of the MNA (Guigoz, 2006; Olivera & Tournier, 2016; Vellas et al., 1999), which is one of the best validated and most widely utilized screening tools for malnutrition (Morley, 2011). The MNA has also been used in the SABE study. The MNA is composed of items related to diet quality, mobility, disease history, and anthropometrical measures. In our study, the maximum possible score of the adapted MNA is 22. Higher scores indicate better nutritional status.

Mental disorders
This is a self-reported variable computed from a list of 12 diseases. The variable takes value 1 if the individual indicated at least one of the following diseases: (1) depression; (2) stroke or brain hemorrhage; or (3) diseases of the nervous system, Alzheimer's disease, or memory loss; and 0 otherwise.

Smoking
This variable is constructed from a question that includes two parts: (1) "Have you smoked cigarettes (tobacco) during the last 30 days?" and (2) "Even if you have not smoked during the last 30 days, have you smoked before?" The variable included in our analysis takes value 1 if an individual indicates having smoked during the last 30 days or before and 0 otherwise.

Sociodemographic variables
We include the following variables in the regressions: age and dummy variables for urban area of residence (rural is the reference value) and retired status (working is the reference value).

Strategy of data analysis
Regarding the method of analysis, we regress the score of cognitive functioning on sociodemographic, nutritional, and education variables. The score of cognitive functioning ranges between 0 and 14, and its distribution tends to be left-skewed. By construction, the score suffers from "ceiling effects," and hence Tobit models are employed to take into account the right censoring of the dependent variable, as in Maurer (2010). These models assume a latent dependent variable Y* that is censored at a ceiling Ȳ, so that the latent value is observed only if Y* ≤ Ȳ. The model formalization is the following:

Y* = Xβ + ε,  Y = min(Y*, Ȳ),

where Y* is the latent cognitive functioning score, X are explanatory variables, β are coefficients, and ε is the error term, which is assumed to be normally distributed. The models are estimated by maximum likelihood, and the standard errors are robust and clustered at the level of the department. Although not reported, the regressions include dummy variables for departments as a way to control for departmental unobserved effects. Moreover, to account for potential gender differences, all regressions are run separately for women and men. To account for heterogeneous effects of education at different levels of nutritional status, we include interaction terms of education with arm span and MNA in the previous list of explanatory variables. Finally, the models are run on a subsample of individuals who did not receive any formal schooling (but some of whom are literate) as a way to uncover the impact of literacy and nutrition without any influence of formal education.
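As a numerical illustration of this estimation strategy, the following minimal sketch fits a right-censored Tobit by maximum likelihood on simulated data. It is our own illustration, not the authors' code; it omits the department dummies and clustered standard errors used in the paper.

```python
# Minimal right-censored Tobit: y = min(y*, ceiling), y* = X beta + eps.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
n, ceiling = 500, 14.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])    # intercept + one covariate
beta_true, sigma_true = np.array([11.0, 1.5]), 2.0
y_latent = X @ beta_true + rng.normal(0, sigma_true, n)
y = np.minimum(y_latent, ceiling)                        # ceiling effect

def neg_loglik(params):
    beta, sigma = params[:-1], np.exp(params[-1])        # log-sigma keeps sigma > 0
    mu = X @ beta
    cens = y >= ceiling
    ll_obs = stats.norm.logpdf(y[~cens], mu[~cens], sigma)   # uncensored density
    ll_cens = stats.norm.logsf(ceiling, mu[cens], sigma)     # P(Y* >= ceiling)
    return -(ll_obs.sum() + ll_cens.sum())

res = optimize.minimize(neg_loglik, x0=np.array([10.0, 0.0, 0.0]), method="BFGS")
print("beta_hat:", res.x[:-1], "sigma_hat:", np.exp(res.x[-1]))
```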
Results
The descriptive statistics are reported in Table 1. The average age is 71 years for both men and women. The average years of education is only 2.7 for the total sample, but there are some differences by gender: on average, men have 3.6 years of education and women have 1.6 years. In addition, more women experience illiteracy: about half of women are illiterate, while 11% of men are. This translates into men having better scores of total cognitive functioning and mental intactness than women. There are no differences in the score of episodic memory. The average MNA score is 13.58 points, and there are statistically significant differences in favor of men. The differences of means by gender of all variables entering the regressions are reported in the last column of Table 1. As anticipated, initial analyses suggested differential result patterns for men and women. For this reason, the analysis is stratified by gender. The main regression results of the Tobit models are reported in Table 2. Mental disorders are negatively associated with each component of cognition in both genders, but smoking is not significant in any case. As a check, we ran the regressions of Table 2 with three dummy variables, one for each of the three diseases embedded in the variable mental disorders. There is no statistically significant relationship between cognition and depression, but there are statistically significant relationships between cognition and the other two diseases. We do not find a significant statistical relationship between smoking and cognition. As a further check, we also ran the regressions of Table 2 with an alternative construction for smoking indicating lifetime smoking (this variable takes value 1 if an individual has smoked before the last 30 days and 0 otherwise), and neither reveals a significant relationship with cognition. The MNA score is positively and sizably associated with each component of cognition for both men and women. The association of MNA with total cognition is larger among women than among men. For instance, an extra point in the MNA score (having a better nutritional status) is associated with an increase in total cognition of the same magnitude as if men were 1 year younger and women 2 years younger (this age-equivalent reading of the coefficients is sketched below). Education is also an important and statistically significant predictor of total cognition and its components. For instance, compared to no education, having uncompleted primary education is associated with an increase in total cognition of 0.77 and 1.47 points in the cases of men and women, respectively. Moreover, having completed primary education or more is associated with an increase in total cognition of 1.47 and 2.23 points for men and women, respectively. Arm span is significantly associated with total cognition and mental intactness. In the regressions for total cognition (Table 2's columns 3 and 6), having 10 extra centimeters of arm span is similar to being about 1.7 years younger, both for men and women. Our results show that the association of education and current nutritional status (measured with MNA) with later-life cognitive functioning is more important for women than for men, and that the contribution of nutritional status experienced in childhood (measured with arm span) in explaining later-life cognition is similar between men and women.
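The age-equivalent reading referenced above is simply the ratio of two regression coefficients. A toy calculation makes this explicit; the coefficient values below are hypothetical placeholders, not the actual Table 2 estimates.

```python
# Hypothetical Tobit coefficients (illustration only, not Table 2).
beta_mna_women, beta_age_women = 0.20, -0.10

# One extra MNA point offsets beta_MNA / |beta_age| years of aging.
years_equivalent = beta_mna_women / abs(beta_age_women)
print(f"1 MNA point ~ being {years_equivalent:.1f} years younger")  # ~2.0
```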
Table 3 shows the results of models employing interactions of education with nutritional status. In general, with the exception of education and arm span for men's episodic memory, we do not observe heterogeneous effects of education and nutritional status. Results do not qualitatively change when the variable education is measured in terms of years of schooling instead of educational levels. Table 4 shows the results of applying the same analysis to a subsample of individuals who did not receive any formal schooling (189 men and 802 women). Of interest, in this sample there are 87 men and 104 women who are literate according to the reading test, which reflects that they might have learned to read through informal channels or at adult literacy programs. Associations between predictors and cognitive functioning are similar to those in previous models but in some cases fail to reach significance, potentially due to the small sample size. Arm span is positively associated with episodic memory for men, and it is positively associated with mental intactness and total cognition for women. The MNA score is positively associated with cognition for both genders (for men in mental intactness, for women in all components). Last, being illiterate is significantly negatively associated with cognitive functioning in both genders, except in mental intactness for women (although the p value is marginally larger than the 10% threshold). (Tables 3 and 4: *p < .1; **p < .05; ***p < .01; all models include department dummies; robust standard errors clustered by department are reported in parentheses; the reference value for the education dummies is "no education.") We know that nutrition and cognition are positively associated with economic status, and therefore not including a variable controlling for economic status may lead to an omitted variable problem. However, we should keep in mind that our sample is composed of very poor older adults whose living conditions (captured by the SISFOH score) are very similar and located in the vicinity of the official threshold that determines extreme poverty. If our sample were a population-wide sample, not accounting for economic status might be a problem. In any case, we have assessed how our results change when a variable indicating the economic status of the individual is added to the regressions of Tables 2 and 4. We do not have access to the official SISFOH score, but we know which individuals were assigned to the control (a score just above the official threshold) and treatment (a score just below the official threshold) groups for a posterior impact evaluation (see note 2). As the SISFOH is an indicator of the economic status of the individual's household, we consider this a good proxy for the economic status of the individual. The inclusion of a dummy variable indicating the treatment or control assignment does not change our results or the statistical significance of the coefficients of interest. In addition, the coefficient of this dummy is not statistically significant in the regressions of Table 2 (the full sample of analysis) and is significant in the regressions of mental intactness and total cognition for men in Table 4 (the subsample of uneducated individuals).

Explanation of findings
Maintaining cognitive functioning is vital for an autonomous and well-functioning old age. While to date no interventions exist to reverse cognitive decline, evidence on life-course factors able to delay cognitive decline and impairment is urgently needed (Leshner, Landis, Stroud, & Downey, 2017).
This study tested the associations between early-life (arm span) and later-life (MNA) nutritional status and cognitive functioning, suggesting, in line with previous studies, that improving nutrition could improve cognitive outcomes in developing countries. Further, uncompleted and completed primary education were positively associated with cognitive function, suggesting (although our research design does not allow causal conclusions) that even small "doses" of schooling are vital for maintaining cognitive reserves up to higher ages. A reverse association, with children of higher initial cognitive skills being selected or able to receive longer schooling, is also possible. Men with less favorable early-life nutritional status and low education show particularly poor cognitive function. These observational findings should justify further research with rigorous causal tests, and the elaboration of noncontributory pension programs should be accompanied by programs to improve nutrition and literacy at older ages. We find gender differences in the interaction between arm span and having at least primary education; this interaction is only significant for men's episodic memory. This suggests that, for men, education seems to "compensate" for early nutritional deficits and decreases in importance with better early-life nutritional status. We suggest biological and gender norm-based explanations for why this interaction of nutrition and education does not reach significance in women. Biologically driven gender differences could determine how well arm span reflects early-life nutritional status, or there could be gender differences in the associations of nutritional status and cognition. Further, boys and girls may have systematically received nutrition of different quality during childhood. The higher number of uneducated women suggests that at least part of the gender differences is driven by selection into education in childhood. Ilahi (2001) finds gender differences in the schooling and household chores of boys and girls in Peru, although we have little empirical evidence on the childhood conditions of our sample of older adults in Peru. The lack of a significant interaction for mental intactness is likely due to ceiling effects, that is, this measure not being sensitive enough to catch differences in the more elevated cognitive functioning of educated men and women. Identifying precedents of cognitive impairment and dementia in developing countries is vital, as dementia is the leading cause of disability in developing countries (Sousa et al., 2009). Earlier research has found illiteracy to be an important contributor to dementia all over the world (Suh et al., 2016). Further, we find that malnutrition at early ages may put individuals, especially men with low education, at particular risk of cognitive impairment at older ages. This finding is aligned with the evidence that chronic malnutrition in infancy is associated with cognitive and behavioral deficits across the life-span (Wachs, 1995). Moreover, lower nutritional status in childhood, assessed as height-for-age, has been found to be associated with dementia, cognitive impairment, and a higher burden of infection at older ages, net of socioeconomic variables, infection being an important possible determinant of later-life morbidity and mortality (Dowd, Zajacova, & Aiello, 2009).
This article also confirms the importance of education, which has been found to be the main contributor to maintaining cognitive function up to old age (Lee, Kawachi, Berkman, & Grodstein, 2003; Meng & D'Arcy, 2012). Our article adds that education may be particularly beneficial because of the acquisition of literacy. Being literate could be the main prerequisite for successfully completing education and engaging in cognitively stimulating tasks across the life-span, like reading or using technological devices (depending on their availability). As stated at the beginning of the article, both nutritional status and literacy are dimensions that can be affected by tailor-designed interventions. Our research, in line with previous studies in different contexts, points to the possibility of benefits from improving nutrition and literacy at older ages. Future research with rigorous study designs is needed to provide evidence on whether improvements in these dimensions are indeed able to improve older individuals' cognitive function. To preserve cognitive functioning and have more autonomous older adults, it is key that programs promoting retirement, such as noncontributory pension programs, test the potential of additional interventions in their design.

Limitations
We use a measure of cognitive functioning that is based on the MMSE, which is one of the most used and studied tests worldwide, and that is similar to the one collected in SABE for older adults in Latin America. Although there is an advantage in comparability, the use of a single measure of cognitive functioning imposes restrictions on analyzing how our main variables of interest are associated with other measures (e.g., the Montreal Cognitive Assessment) or other relevant cognitive dimensions not included in the MMSE (e.g., attention, speed of processing general information). Arm span is only a crude indicator of early-life nutritional status, but it has been shown to be as valid as limb length. However, we cannot rule out that differences in arm span may already have been produced at the prenatal stage, as socioeconomic differences in childhood growth seem to be prenatally determined (Howe et al., 2012). It is also possible that individuals with shorter arm span did not receive schooling equivalent to that of men and women with greater arm span, as found for height in the Honolulu Asian Study (Abbott et al., 1998), leaving the question of the causality of this association unresolved. We cannot rule out that this kind of selection may have biased our findings. Geographical specificities of Peru include the different altitudes of respondents' residences. High altitudes result in blood anomalies and have been shown to be associated with compromised infant health (Wehby, Castilla, & Lopez-Camelo, 2010). ESBAM does not contain information about district of birth, so we cannot rule out migration between high- and low-altitude areas and the potential confounder of birth in a high-altitude area. It is necessary to keep in mind that, due to the harsh circumstances of older poor people living in Peru, a significant part of the birth cohorts under investigation had already died by age 65. Based on period life-tables of 1950 and 1951 (Arriaga, 1966), approximately 25% of men and 32% of women were expected to survive until age 65 in the year the sample was collected. We cannot rule out that selective or gender-differential mortality has been producing these results.
Last, we did not find any association between smoking and cognitive function, which may be due to the selective mortality of smokers at younger ages. We also use the dichotomous measure of literacy available in this study (see Hanemann, 2015 for a continuous conceptualization of literacy at different proficiency levels) and find that literacy provides a distinctive advantage for preserving cognitive functioning during the aging process. Similar to the findings suggesting positive links between nutrition and cognitive functioning, we acknowledge, however, the lack of a study design enabling us to causally test whether literacy and nutrition interventions improve cognitive outcomes at older ages. Nonetheless, the analysis of this observational cross-sectional study gives strong indications of potential mechanisms to improve cognitive functioning in older people in a developing country. Future research with rigorous study designs is necessary to provide more evidence about the benefits of literacy and nutritional interventions on old-age cognition. We further cannot rule out a potentially reverse mechanism in which literacy is associated with cognitive ability because individuals with higher cognitive abilities are more likely to learn to read at earlier life stages, even without proper schooling. Further studies with longitudinal designs should explore these mechanisms in more detail.

Conclusion
We find that early-life and later-life nutritional statuses are significantly associated with cognitive function in old age, replicating the importance of adequate nutrition for cognitive functioning. Further, even without any schooling, being literate is strongly positively associated with cognitive functioning, suggesting that literacy may drive the association of education and cognition. Nutritional status should be monitored in older adults deprived of socioeconomic resources in order to prevent cognitive impairment. Policies supporting adequate nutrition at all ages could help preserve cognitive levels in the older population, especially in less developed countries. Equally, policy measures implementing literacy programs for adults could have potentially important positive benefits for cognitive function up to older ages. In a context of rising popularity of noncontributory pension programs around the world, this study sheds light on the importance of testing the benefits of additional interventions with the aims of keeping a good nutritional status in old age and improving literacy, ideally to preserve later-life cognitive functioning and maintain autonomy in older individuals. It could be the case that these noncontributory pension schemes can follow the example of some conditional cash transfer programs and incorporate in-kind components related to nutrition and literacy as a way to enhance the well-being of poor individuals at older ages.

Note 2. The sample was drawn from the 12 departments where the Ministry of Development and Social Inclusion (MIDIS) had already completed the census of socioeconomic variables intended to update its household targeting score system (SISFOH) and where, according to administrative records, 70% of Pension 65 beneficiaries lived. Households in the sample were randomly drawn from the total number of households with a SISFOH score located at ±0.3 standard deviations from the SISFOH cut-off for extreme poverty. Households with a score above and below this threshold were classified as "non-extreme poor" and "extreme poor," respectively, and were therefore assigned to the control and treatment groups for a posterior impact evaluation.
The sampling selection was probabilistic, independent in each department, stratified by rural/urban area, and carried out in two steps. In the first step, the primary sampling units (PSUs) were census units in urban areas and villages in rural areas with at least four households living in poverty and with elderly members. The selection of PSUs was made with probability proportional to size, according to the total number of households (a minimal sketch of this selection scheme follows). In the second step, four households were randomly drawn from each PSU for interview, along with two replacement households.
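The sketch below illustrates systematic probability-proportional-to-size (PPS) selection, one standard way to implement the first-stage draw described above. It is our own illustration; the PSU sizes are invented, and the survey's exact selection algorithm may differ.

```python
# Systematic PPS selection: PSUs with more households are more likely drawn.
import numpy as np

rng = np.random.default_rng(42)
psu_sizes = np.array([120, 45, 300, 80, 60, 210, 150])  # households per PSU
n_select = 3

cum = np.cumsum(psu_sizes)                 # cumulative size boundaries
step = cum[-1] / n_select                  # sampling interval
start = rng.uniform(0, step)               # random start in the first interval
points = start + step * np.arange(n_select)
selected = np.searchsorted(cum, points, side="right")  # PSU hit by each point
print("selected PSU indices:", selected)
# Note: a PSU larger than the interval can be hit more than once.
```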
Importance of Thyroid Hormone Level and Genetic Variations in Deiodinases for Patients after Acute Myocardial Infarction: A Longitudinal Observational Study This study aimed to examine the influence of thyroid hormone (TH) levels and genetic polymorphisms of deiodinases on long-term outcomes after acute myocardial infarction (AMI). In total, 290 patients who had experienced AMI were evaluated for demographic and clinical characteristics, risk factors, TH, and NT-pro-BNP. The TH-related gene polymorphisms examined were in deiodinase 1 (DIO1) (rs11206244-C/T, rs12095080-A/G, rs2235544-A/C), deiodinase 2 (DIO2) (rs225015-G/A, rs225014-T/C) and deiodinase 3 (DIO3) (rs945006-T/G). Both all-cause and cardiac mortality were considered key outcomes. A Cox regression model showed that NT-pro-BNP (HR = 2.11; 95% CI = 1.18-3.78; p = 0.012), the first quartile of fT3, and the DIO1 gene rs12095080 were independent predictors of cardiac-related mortality (HR = 1.74; 95% CI = 1.04-2.91; p = 0.034). The DIO1 gene rs12095080 AG genotype (OR = 3.97; 95% CI = 1.45-10.89; p = 0.005) increased the risk of cardiac mortality. Lower fT3 levels and the DIO1 gene rs12095080 are both associated with cardiac-related mortality after AMI. To our knowledge, there are no reports studying the association of circulating TH ranges and genetic variability of genes related to the TH axis with long-term mortality in CAD patients after acute MI (AMI). Our study aimed to examine the prognostic importance of TH levels and DIO1, DIO2, and DIO3 genetic polymorphisms on long-term outcomes in patients with CAD after AMI.

Methods
Study population. In total, 330 AMI patients with ST-segment elevation and non-ST-segment elevation in the cardiac Intensive Care Unit (ICU) at the Lithuanian University of Health Sciences Hospital were invited to participate in the study. Standard treatment was given according to the existing guidelines for AMI management 47-50. Inclusion criteria were age over 18 years and an AMI diagnosis. Patients were excluded if they were taking thyroid medications or amiodarone, had increased levels of TSH (> 4.8 mIU/l), indicating hypothyroidism, or reduced TSH (< 0.5 mIU/l), indicating hyperthyroidism, or if they had a serious systemic disease (e.g. cancer, autoimmune disease, or chronic renal disease). All eligible participants provided written informed consent. The final study population comprised 290 patients with AMI (72% men and 28% women; mean age, 62 ± 11 years).
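As a hedged illustration of how such eligibility rules could be applied to a patient table, here is a minimal pandas sketch; the column names and values are our own assumptions, not the study database.

```python
# Illustrative eligibility filter mirroring the inclusion/exclusion rules above.
import pandas as pd

patients = pd.DataFrame({
    "age": [67, 54, 71, 45],
    "tsh_miu_l": [2.1, 5.3, 0.3, 1.8],                    # serum TSH
    "on_thyroid_meds_or_amiodarone": [False, False, False, True],
})

eligible = patients[
    (patients["age"] >= 18)
    & patients["tsh_miu_l"].between(0.5, 4.8)             # excludes hypo-/hyperthyroid
    & ~patients["on_thyroid_meds_or_amiodarone"]
]
print(eligible)
```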
Study design. Eligible participants were evaluated for socio-demographic factors and clinical characteristics such as history and type of AMI, HF, left ventricular ejection fraction (LVEF), Killip class, and current medication use. Participants were also evaluated for known CAD risk factors, including diabetes mellitus (DM), arterial hypertension (AH), and body mass index (BMI). All patients underwent coronary angiography, and the majority underwent primary percutaneous coronary intervention (PCI). Troponin I, lipid profiles, N-terminal pro-B-type natriuretic peptide (NT-pro-BNP), TH concentrations, and DIO1, DIO2, DIO3 genetic polymorphisms were evaluated from blood samples drawn before the intervention procedures. Follow-up data on mortality (time and cause of death) were used in the analysis as the primary outcome of interest. During the two-year follow-up period, outcome data from 283 of the 290 participants were collected. The data were obtained from death certificates, post-mortem reports, and medical records. When data could not be obtained from these sources, the study team attempted to conduct telephone interviews with participants' family members to obtain self-reported mortality data or contacted the Causes of Death Register at the Institute of Hygiene of the Lithuanian Ministry of Health. Cardiac and all-cause mortality were ascertained. Documented death due to cardiac arrest or arrhythmias, MI, or progressive HF was regarded as cardiac-related mortality. The prospective study protocol was approved by The Regional Biomedical Research Ethics Committee and is described elsewhere 51.

Evaluation of TH and NT-pro-BNP. Blood samples were taken within 24 hours of patients' admission to the ICU. The blood was centrifuged and the serum was frozen at -80 °C. Serum samples were analysed in a single batch after completion of this study. Serum levels of T3, fT3, fT4, rT3 and TSH were analysed using an automated enzyme immunoassay analyser (Advia Centaur XP; Siemens Osakeyhtio). The normal range for total T3 was 0.89-2.44 nmol/L, fT3 3.50-6.5 pmol/L, fT4 11.50-22.70 pmol/L, rT3 24.50-269.30 pg/mL and TSH 0.55-4.78 mIU/L. Serum NT-pro-BNP levels were assessed using a two-site chemiluminescent immunometric assay (Immulite 2000 Immunoassay System; Siemens, Germany). All subjects included in the study were also evaluated for troponin I, lipid concentrations, and serum glucose levels, and underwent a complete blood count.

Statistical analysis. Data are expressed as mean ± standard deviation (SD) for variables with a Gaussian distribution and as median (25th-75th percentile) for variables without a normal distribution. Normality of continuous data was assessed using the Kolmogorov-Smirnov test and inspection of Q-Q plots and histograms. Where necessary, variables were natural-log transformed (ln); we specifically used a log transformation for the NT-pro-BNP, TSH, and rT3 parameters. Each SNP was tested for Hardy-Weinberg equilibrium (HWE; http://ihg.gsf.de/cgi-bin/hw/hwa1.pl), 55 in case and control populations, using the Chi-square test or Fisher's exact test before inclusion in the association statistics (p > 0.01 threshold). Baseline clinical characteristics, TH levels, fT3 ranges (1st quartile versus ≥ 2nd quartile of fT3), NT-pro-BNP, and DIO1, DIO2, DIO3 genotypes were compared between the cardiac-related death and survivor groups. Student's t, Mann-Whitney U, Chi-square or Fisher's exact tests were used to compare group scores as appropriate. Correlations between fT3 and NT-pro-BNP were assessed using Pearson product-moment correlation (Pearson r). A p value < 0.05 (two-tailed) was regarded as significant. Univariate and multivariable Cox regression analyses were used to estimate hazard ratios (HR) for all-cause and cardiac mortality. We made stringent attempts to control for the potentially confounding effect of (ln) NT-pro-BNP and other relevant sociodemographic and clinical factors such as age, Killip class, history of MI, history of hypertension, history of diabetes mellitus, history of chronic pulmonary disease, and ST-elevation myocardial infarction. Kaplan-Meier survival curves for cardiac-related death were compared with the log-rank (Mantel-Cox) test. Statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS 23) for Windows.
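The HWE check mentioned above is a chi-square goodness-of-fit test comparing observed genotype counts to those expected from the allele frequencies. The following minimal sketch illustrates the calculation on invented genotype counts, not the study's data.

```python
# HWE chi-square test on illustrative genotype counts (AA, AG, GG).
from scipy import stats

n_AA, n_AG, n_GG = 150, 110, 30
n = n_AA + n_AG + n_GG
p = (2 * n_AA + n_AG) / (2 * n)              # frequency of the A allele
q = 1 - p

# Expected counts under HWE: n*p^2, 2npq, n*q^2.
expected = [n * p**2, n * 2 * p * q, n * q**2]
# ddof=1 because one parameter (p) is estimated from the data -> 1 df.
chi2, pval = stats.chisquare([n_AA, n_AG, n_GG], f_exp=expected, ddof=1)
print(f"chi2 = {chi2:.2f}, p = {pval:.3f}")  # p > 0.01 -> consistent with HWE
```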
Results
During the two-year follow-up period there were a total of 14 cardiac-related and 21 all-cause deaths. Patients in the cardiac-related death group were older, with more frequent cases of previous MI, a higher Killip class, a higher level of NT-pro-BNP, and more frequent first-quartile fT3 levels, compared to survivors (Table 3). There was also a trend between the first quartile of fT3 and higher cardiac-related mortality rates during the first 30 days after the cardiac event (data not shown): patients with first-quartile fT3 were predominantly older women with more severe HF (Killip class > I), with more cases of DM, higher NT-pro-BNP and troponin I levels, lower T3, and reduced hemoglobin and hematocrit levels. A negative association between fT3 and NT-pro-BNP (r = -0.30, p < 0.001) was established.

Association between deiodinase gene polymorphisms and cardiac mortality. Genotype distributions of all SNPs were found to be in HWE (p = 0.203 for rs11206244-C/T, p = 0.457 for rs12095080-A/G, p = 0.105 for rs2235544-A/C, p = 0.492 for rs225014-T/C, p = 0.677 for rs225015-G/A, p = 0.226 for rs945006-T/G). Genotype distributions were compared between the cardiac-mortality and survivor groups. In the analysis of associations between DIO1 (rs11206244-C/T, rs12095080-A/G and rs2235544-A/C), DIO2 (rs225014-T/C, rs225015-G/A), and DIO3 (rs945006-T/G) gene variants and cardiac mortality, none of the assessed DIO2 and DIO3 SNPs were significantly associated with cardiac mortality in this AMI cohort. However, the DIO1 gene rs12095080 heterozygous AG genotype (OR = 3.97; 95% CI = 1.45-10.89; p = 0.005) showed a significantly increased risk of cardiac-related mortality, while the major wild-type homozygous AA genotype (OR = 0.26; 95% CI = 0.09-0.71; p = 0.006) was linked to increased survival. Allele analysis revealed that the mutant G allele was significantly associated (OR = 3.31; 95% CI = 1.27-8.61; p = 0.036) with the risk of two-year cardiac mortality (Table 4).

The prognostic importance of clinical variables, thyroid hormones, NT-pro-BNP and deiodinase genotypes on mortality. Univariate regression analysis indicated that age, Killip class, NT-pro-BNP and history of chronic pulmonary disease were associated with all-cause mortality. The multiple Cox regression model showed no significant predictors of all-cause mortality (Table 5). Univariate regression analysis indicated that age, Killip class, previous MI, NT-pro-BNP, and history of chronic pulmonary disease, as well as the first quartile (versus ≥ second quartile) of fT3 and the DIO1 gene rs12095080, were associated with cardiac-related mortality; in the multiple Cox regression model, NT-pro-BNP, the first quartile of fT3, and DIO1 rs12095080 remained independent predictors of cardiac-related mortality.

[Table 1. General information about genotyped loci for DIO1, DIO2 and DIO3 polymorphisms. DIO - deiodinases; MAF ‡ - minor allele frequencies reported in SNP databases for the 1000 Genomes Phase III combined population (http://www.ncbi.nlm.nih.gov/snp); MAF - minor allele frequencies in the present cohort; UTR - untranslated region; int - intron.]
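For readers who want to see how genotype odds ratios of this kind are computed, here is a minimal sketch using a 2x2 genotype-by-outcome table with a Woolf (log-OR) confidence interval. The counts are invented for illustration and are not the study's data.

```python
# Odds ratio with a Woolf 95% CI from a 2x2 table (illustrative counts).
import math

# rows: AG carriers / non-carriers; columns: cardiac death / survived
a, b = 9, 81     # AG carriers: died, survived
c, d = 5, 195    # AA or GG carriers: died, survived

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```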
Discussion
In this research study we aimed to explore possible associations of serum TH levels, genetic polymorphisms of DIO, and NT-pro-BNP with long-term outcomes in AMI patients. It was found that lower fT3 levels, the DIO1 gene rs12095080, as well as higher NT-pro-BNP on admission are all associated with cardiac-related mortality after AMI. The hypothesis proposing that variations in TH concentrations within the statistically normal range may influence disease outcomes is not entirely new 26,56,57.

[Table 2 (fragment): baseline characteristics - age 62.0 ± 11.4 years; body mass index 29.9 ± 17.8; systolic pressure 141.8 ± 25.9 mmHg; diastolic pressure 82.5 ± 13.5 mmHg; gender, n (%).]

Nevertheless, a low T3 syndrome does not only reflect AMI status; it has also been documented in a number of other disorders 58-61. Independent of time-course, type and severity, a low T3 state may serve as an adaptive mechanism that reduces metabolic demands by reducing the catabolic processes of the disease 8. A low T3 syndrome was a frequent finding in patients with cardiac pathology and without a history of thyroid dysfunction, particularly among patients with HF, AMI, and those following cardiac surgery 15-17,62-65. However, the exact point of occurrence of TH alterations after an ACS is not clearly understood 2,66-68. The timing of TH alterations is still a debated topic in the scientific literature, although most studies agree that the first five days after ACS are the most crucial for changes in T3 and rT3. Iltumur et al. 69 observed that patients with complicated MI (caused by ischemia) have lower total and free T3. In addition, patients with prolonged cardiac arrest showed lower total T3 and fT3 levels than those with a shorter one. Furthermore, during the AMI stage, drugs such as nonsteroidal anti-inflammatory agents, aspirin, heparin and furosemide (> 80 mg/day) may displace T4 and T3 from binding sites on TH-binding proteins, modifying hormone delivery to the site of its use 70,71. Our study findings correspond to those of Zhang et al. 17, exemplifying that AMI patients with first-quartile fT3 levels are more likely to be older women with severe HF (Killip class > I) and with DM. Our AMI patients also had higher troponin I, lower T3, and lower hemoglobin and hematocrit levels. The pathophysiological role of the low T3 pattern is not well understood, although high mortality among patients with low T3 levels has been found in numerous studies 1,12,17,37,63. Conversely, other studies have not discovered an independent prognostic role for low T3 levels in cardiovascular patients 72-75. Our study revealed a decreased length of survival in AMI patients with first-quartile fT3, confirming previous findings. Additionally, fT3 levels remained within the normal concentration range, probably because TH analysis was not repeated during the later post-AMI period, when greater fT3 downregulation might be observed 2. The present study lends support to the theory advanced by other research teams that fT3 represents the biologically active form of TH, so an isolated reduction in its level could constitute a model of abnormal TH metabolism acting as a risk factor for CAD 3,27-29. Further, subclinical hypothyroidism, characterized by normal serum concentrations of fT4 and elevated TSH, has been shown to predict atherosclerosis and MI risk in elderly women 3,27,77,78. It is suggested that variations of TH even within the clinically normal range indicate abnormal TH metabolism associated with coronary disease risk and outcomes 24,27-30,79. However, Ertas et al.
28 showed that, within the normal range, fT3 levels were inversely associated with CAD severity. It was also found that lower fT3 concentrations independently predicted the severity of CAD 29. Mayer et al. showed that even minor changes in fT4 may relate to the severity of HF 30,31. The association of serum fT4 concentrations with coronary disease severity was also examined in the study by Jung et al. 26 Compared with survivors, patients who died within seven days after AMI had higher fT4 levels; thus, it is possible to assume that higher fT4 levels might be associated with reduced survival 2,25. Our present and previous studies, and those of others, indicate an association of fT3 or low-T3 syndrome with elevated NT-pro-BNP levels, a traditional predictor of poor prognosis in patients with AMI, indicating that a lower fT3 level would be a predictor of a poor prognosis in CAD and AMI patients 17,23,80,81. The current study also presented a negative association between fT3 and NT-pro-BNP levels and CAD outcomes, which has been confirmed by other authors 80-83. There are several well-known TH-pathway genes, such as DIO, the TSH receptor (THR), and TH transporters (SLCO, MCT), which have been associated with TH levels 84. Variants in both the DIO1 and DIO2 genes were recently reported to alter TH levels in healthy individuals 34,45,85,86. TH metabolism is determined by three iodothyronine deiodinases, DIO1, DIO2 and DIO3, each encoded by a separate gene 37,38,40,87. DIO1 is responsible for converting T4 into T3 and contributes to the local hypothyroid state in the failing heart 4,12,37. It was shown experimentally that alterations in DIO1 and DIO2 promote cardiac activity of DIO3, which converts T4 and T3 to inactive reverse T3 and diiodothyronine (T2), in rats following MI 88. Altered thyroid homeostasis in patients with cardiovascular disorders could modify cardiac gene expression and contribute to impaired cardiac function 89,90. A candidate gene study revealed that rs2235544 in the DIO1 gene was associated with higher fT3 and lower fT4 and rT3 levels, both in patients receiving TH replacement therapy and in a large population of healthy individuals; the rare C allele was associated with improved DIO1 function 44,52. Several studies identified rs11206244 in DIO1, which was also associated with fT4, rT3 and fT3 concentrations 34,91. Numerous studies have disclosed associations between DIO1, DIO2 and DIO3 polymorphisms and fT3 and other TH levels 33,34,42,92. Our data from the same cohort also showed that DIO1 and DIO2 gene polymorphisms are mainly associated with T3, fT4, fT3/fT4 and (ln)rT3 levels, while the organic anion transporter polypeptide 1C1 rs1515777 minor-allele AG genotype was associated with a decrease in circulating fT3 and fT3/fT4 in CAD patients after AMI 46. Genetic variations in deiodinases may affect multiple clinical endpoints 36,37,42,93. It has been shown that the development of CAD is the result of complex interactions between numerous environmental factors and genetic variants at many loci 94,95. In our previous study we found that DIO1 rs12095080 was associated with AH, DIO2 rs225015 with DM, and the rs974453 genotype within the OATP1C1 gene with STEMI 46. Lee et al.
found that cardiovascular mortality was higher in subjects with the rs4977574 GG genotype than in those with other genotypes 96. The association between four SNPs on chromosome 9p21, CAD, and MI has been replicated several times in multiple populations 97-100. In patients with ST-segment-elevation MI, Szpakowicz et al. revealed an association between rs12526453 of the phosphatase and actin regulator 1 (PHACTR1) gene and 5-year mortality 101. However, in another study, the DIO2 Thr92Ala polymorphism was not related to thyroid parameters, cognitive functioning or health-related quality of life 102. In the present study we found a relationship between the heterozygous (AG) genotype of the DIO1 SNP rs12095080 and cardiac-related mortality. It should be noted that no patients in the cardiac-related death group carried the homozygous mutant GG genotype of this SNP. Patients carrying the rs12095080 heterozygous genotype experienced a 2.5-month shorter median survival compared with AA genotype carriers. Our preliminary analysis shows that the G allele could be a favourable variable to investigate for AMI patients' prognosis. To our knowledge, there are no reports showing the importance of fT3 ranges and genetic variability of DIO1 in the long-term outcomes of patients with AMI. There is evidence that the G variant of rs12095080, identified in the 3' UTR of human DIO1 mRNA, is associated with a higher serum T3/rT3 ratio, suggesting that some variants of this SNP may result in increased DIO1 activity 103. Palmer et al. 104 showed that angiotensin-converting enzyme (ACE) genotype powerfully predicted mortality in patients after AMI. They also showed that the ACE DD genotype was positively associated with B-type natriuretic peptide and was an independent predictor of death and of the response to treatment 105. To our knowledge, this study is the first to examine how TH concentrations and genetic markers in patients after AMI might contribute to long-term outcomes. However, our findings are still exploratory, and it would be premature to use them as a basis for risk stratification in patients with CAD. For example, future studies are needed to explore the mutual interaction of fT3 and gene polymorphisms on the mechanisms underlying cardiovascular mortality. Understanding the contribution of genetic factors to TH expression that predict cardiac-related mortality may reveal new markers and treatment targets for the management of cardiovascular disease. For example, as suggested by Pingitore et al. 18, by knowing the exact mechanism we might not only measure fT3 concentration in patients after an AMI and in patients with multiple CAD risk factors, but also treat those with low fT3 and see whether their clinical outcomes improve. The main limitation of this study is that the clinical research was performed in a single centre with a limited number of subjects. These results require validation in studies that replicate the model and include a higher number of cases and controls. Additionally, the majority of the studied AMI patients had mild to moderate HF, and we did not include other risk factors in our study, such as left ventricular ejection fraction or smoking. Thus, the results presented may be limited in their generalizability and may not apply to patients with more advanced HF.
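The genotype-stratified survival comparison above (AG versus AA carriers, with a 2.5-month difference in median survival) is the kind of contrast usually made with Kaplan-Meier estimates and a log-rank test. The sketch below is a minimal illustration in Python with the lifelines package; the follow-up times, group sizes, censoring scheme, and the choice of library are all our assumptions for illustration, not the study's data or analysis code.

```python
# Minimal sketch of a genotype-stratified survival comparison.
# All data below are hypothetical; only the analysis pattern is illustrated.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical follow-up times (months, administratively censored at 24)
# and death indicators for AG and AA carriers of DIO1 rs12095080.
t_ag = rng.exponential(scale=18, size=40).clip(max=24)
e_ag = (t_ag < 24).astype(int)          # 1 = cardiac death observed
t_aa = rng.exponential(scale=30, size=120).clip(max=24)
e_aa = (t_aa < 24).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(t_ag, event_observed=e_ag, label="AG")
print("AG median survival (months):", kmf.median_survival_time_)
kmf.fit(t_aa, event_observed=e_aa, label="AA")
print("AA median survival (months):", kmf.median_survival_time_)

# Log-rank test for a survival difference between genotypes
res = logrank_test(t_ag, t_aa, event_observed_A=e_ag, event_observed_B=e_aa)
print("log-rank p =", res.p_value)
```

The censoring indicator matters here: patients still alive at the end of the two-year follow-up contribute information up to 24 months without being counted as events.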
Finally, baseline TH levels were not evaluated in this study, as TH was measured only on admission to the ICU and was not investigated during the later post-AMI period, when the decline in hormone concentrations persists 2,66-68. The strengths of this study include its novelty: the assessment of the impact of fT3 ranges and TH gene polymorphisms on long-term mortality while controlling for disease severity and other CAD risk factors in patients with AMI.
Conclusions
Lower fT3 levels and the DIO1 gene rs12095080, as well as higher NT-pro-BNP on admission, are associated with cardiac-related mortality after AMI. In the case of DIO1 rs12095080, the heterozygous AG genotype was significantly associated with a higher risk of cardiac mortality, whereas the major wild-type homozygous AA genotype was linked to better survival within the two-year follow-up period.
Ethics approval and consent to participate. The study and its consent procedures were approved by the Kaunas Regional Biomedical Research Ethics Committee at the Lithuanian University of Health Sciences, Kaunas, Lithuania, and conform to the principles outlined in the Declaration of Helsinki. Written informed consent was obtained from each study patient.
Data availability
The datasets analysed during the current study are available from the corresponding author upon request.
ADHD prescription patterns and medication adherence in children and adolescents during the COVID-19 pandemic in an urban academic setting
Background COVID-19 impacted all students, especially those with attention deficit hyperactivity disorder (ADHD), putting them at risk for disruption to their medication regimen and school performance. Our study aimed to identify whether ADHD medication regimens were disrupted, by analyzing prescription refills, and whether telehealth management was associated with a higher rate of adherence. Methods A total of 396 patients from the General Academic Pediatrics (GAP) clinic at Children's Hospital of The King's Daughters (CHKD) were included in the study. Patients were between the ages of 8-18 with a history of ADHD for three or more years that was medically managed, with four or more prescription refills between January 2019 and May 2022. A retrospective chart review collected age, sex, race, refill schedule, appointment schedule, and number of telehealth appointments. Data analysis compared the variables and defined "pre-pandemic months" as January 2019 through March 2020 and "pandemic months" as April 2020 through June 2022. Results The total percentage of patients who refilled their ADHD medications in a given month during pre-pandemic months ranged from 40 to 66%, versus 31-44% during pandemic months. Additionally, the total percentage of patients who had quarterly ADHD management appointments during pre-pandemic months ranged between 59 and 70%, versus 33-50% during pandemic months. The number of months with ADHD prescription refills over the last three years was significantly higher among those who had both virtual and in-person visits than those who had just in-person visits, p < 0.001. Regarding race, Black patients had a lower number of medication refills compared to White patients when controlled for appointment type. They also had a lower number of virtual appointments, with no significant difference in the total number of appointments. Conclusions Since the start of the pandemic, ADHD patients have both refilled their prescriptions and returned to clinic less frequently. This data suggests a need to re-evaluate the ADHD symptoms of GAP patients periodically and return them to a more consistent medication regimen. Telehealth appointments are a potential solution to increase adherence. However, racial inequities found in this study need to be addressed. Keywords: ADHD, COVID-19, Telehealth, Telemedicine, Adherence, Pediatric, Race. Supplementary Information The online version contains supplementary material available at 10.1186/s12888-024-05623-4.
Background COVID-19 altered the traditional classroom education model from an in-person environment to almost exclusively virtual and hybrid platforms, which placed students with attention deficit hyperactivity disorder (ADHD) at risk for disruption to their education. ADHD is defined by poor attention, distractibility, hyperactivity, impulsiveness, or behavioral problems [1]. During months of remote learning, schools had mixed results in student performance, with both improvement and worsening in mathematics and reading [2,3]. Students with neurodevelopmental disorders, however, struggled the most, and students with ADHD had difficulty focusing during online classes [2]. In addition, adolescents with ADHD were significantly more likely than adolescents without ADHD to have a parent report remote learning as "very challenging", regardless of whether they had an individualized educational plan (IEP) or 504 plan, which provide school accommodations with or without specialized instruction and guidelines for students with documented disabilities [4,5]. Even without the added stressors of the pandemic, children and adolescents with ADHD were particularly susceptible to changes to their education platform because their IEPs, medication routines, and personal schedules were designed specifically for traditional in-person education, not remote or hybrid learning. In patients over five years of age, pharmacologic management with behavioral accommodations is the first-line treatment for ADHD in the United States [6]. Patients with untreated ADHD may suffer from impaired school performance and are at higher risk of entirely withdrawing from school [7]. When treated pharmacologically, children have significantly improved school performance, long-term work ethic, and social outcomes [1]. Monitoring ADHD, however, proved challenging during the pandemic due to variable learning schedules that regularly changed. Students with ADHD who were doing remote learning during COVID-19 lockdowns had difficulty completing school assignments and had increases in inattention, impulsivity, and aggression [8,9]. On the other hand, they also had improvements in anxiety and self-esteem, which parents attributed to decreased negative feedback at school and more flexible schedules at home [8]. Because patients' experiences with ADHD constantly changed during the pandemic, they may have had a greater need to regularly follow up with physicians for ongoing management. Unfortunately, the pandemic also disrupted medication compliance and ADHD management.
U.S. President Donald Trump declared COVID-19 a national emergency on March 13, 2020 [10]. According to insurance claims, drug-dispensing totals for pediatric patients from April to December 2020 were 27.1% lower compared to April to December 2019, and ADHD medication dispensing dropped by 11.8% [11]. Another study found that there was a 2.84% decrease in dexmethylphenidate refills between March and August 2020 [12]. Regarding COVID-19's impact on outpatient care, England saw a 23.5% drop in the number of outpatient appointments for patients under 25 years of age between March 2020 and February 2021, which occurred despite a large rise in phone appointments [13]. In a different survey, half of pediatricians stopped evaluating new patients with ADHD, and only 5% still offered in-person services for ADHD management [14]. Overall, the decrease in stimulant use and physician management, on top of COVID-19 lockdowns and remote learning, put children with ADHD at greater risk for symptom exacerbation and its educational consequences. All ADHD management was recommended to continue via telephone or a virtual platform when the pandemic began, in accordance with American Psychiatric Association telepsychiatry guidelines, and this change brought about an unprecedented demand for telehealth [15]. A pediatric center in Washington, D.C. had 1,654 telehealth encounters between January 2016 and March 2020, but between March and June 2020, that number increased to 45,236 [16]. England experienced 2.6 million more phone appointments for patients under 25 years of age between March 2020 and February 2021 than in the previous three years [14]. Despite this shift in appointment type, past studies evaluating ADHD telehealth management demonstrated consistent quality of care when compared to in-person management [17,18]. A review from Current Problems in Pediatric and Adolescent Health Care reported 95-100% adherence to the American Academy of Pediatrics assessment guidelines from 2000 when diagnosing ADHD via a telehealth platform, while acknowledging that further evaluation is needed when using the updated AAP guidelines [17]. Regarding pharmacological and behavioral therapies, telehealth had comparable results to traditional, in-person care when evaluating for improvements in child behavior, symptoms, and functional outcomes [17]. The Children's ADHD Telemental Health Treatment Study (CATTS) even saw improvements in symptoms compared to standard, in-person care when patients received pharmacotherapy over videoconferencing in conjunction with in-person parental behavior training [18]. Lastly, patients have found telephone and virtual ADHD services to be effective and satisfactory, although many of them still prefer in-person management [19]. This is consistent with satisfaction surveys evaluating other telemedicine services, including obesity, asthma, other mental health conditions, and subspecialty appointments [20]. Thanks to telehealth's demonstrated effectiveness, satisfaction rates, and the sudden demand for distanced care, it became an alternative means for physicians to manage ADHD.
Even though there are existing studies that evaluate the impact of COVID-19 on ADHD management and the effectiveness of telehealth in treating ADHD, upon literature review we found little to no data comparing the two during the COVID-19 era. On one hand, patients experienced worsening ADHD symptoms during lockdown after the pandemic began, and fewer total in-person appointments were available [8, 11-14]. Conversely, telehealth appointments rapidly grew in number and had historically been underutilized but successful at managing ADHD [15-20]. Furthermore, while several studies examined ADHD symptoms and medication use during lockdown, few continued to follow these variables during the subsequent school years, during which students experienced changes between remote, in-person, and hybrid learning depending on local guidelines and infection rates. In our study, we compared ADHD management throughout the pandemic with pre-pandemic care by quantifying prescription refills and appointment schedules. We also used medication adherence to evaluate the effectiveness of incorporating telehealth into ADHD management during the pandemic. Our goals were to (1) identify the number of patients that discontinued medication management before and during the COVID-19 pandemic between January 2019 and June 2022 and (2) identify trends, if any exist, in ADHD medication demand using prescription refill data during that time, related to race, appointment type, and timing relative to the pandemic. We hypothesized that there would be a decrease in the number of refills after the pandemic reached the United States in March 2020, a slow increase starting in September 2020, and a return to baseline refill rates by September 2021. We also expected a higher number of refills among patients with telehealth encounters than among patients who were only managed in person.
Materials and methods
A retrospective chart review collected the age, sex, race, monthly refill schedule, and quarterly appointment schedule of patients followed by the General Academic Pediatrics (GAP) department of the Children's Hospital of The King's Daughters (CHKD). The CHKD GAP clinic is known for being a safety net clinic in Norfolk, VA, with the vast majority of its patients on Medicaid, a US Government-sponsored health insurance program for lower-income patients, or Medicaid managed care. The practice sees about 23,000 patients annually and is the primary teaching outpatient clinic for pediatric residents and medical students at Eastern Virginia Medical School. Patients were selected if they were between the ages of 8-18, had an ICD-10 diagnosis code of ADHD confirmed before January 2019, and had an ADHD medication refilled in a minimum of four separate months between January 2019 and May 2022. A patient's refill schedule was determined by whether they had an ADHD medication refilled at least one time per month between January 2019 and May 2022, and refills were dispensed in 30-day supplies, with the ability to call for renewals for two additional months, per GAP clinic policy. Similarly, the quarterly appointment schedule was determined by whether a patient was seen and managed for ADHD at least one time during defined three-month periods (January-March, April-June, July-September, and October-December) between January 2019 and May 2022. Appointments were considered ADHD management appointments if the physician's note described the patient's current ADHD symptom management and medication use. Telehealth appointments had been available to patients at their or the physician's request if they had been seen in person at the GAP clinic within the last year. Research assistants searched our database for active patients with the ICD-10 codes specific for ADHD and then reviewed the GAP clinic electronic medical record, Cerner Millennium®. They recorded age, sex, race, monthly refill schedule, and quarterly appointment schedule into a secure, password-protected REDCap database and deidentified the patient data. Data analysis compared the variables and defined "pre-pandemic months" as January 2019 through March 2020 and "pandemic months" as April 2020 through June 2022. Continuous variables were presented as mean, standard deviation, median, minimum, and maximum. Categorical variables were presented as frequency and percentage. Pearson correlation was used to assess the association between continuous variables. T-tests and Mann-Whitney tests were used to analyze differences in the number of appointments between White and Black populations. A generalized linear model was used to assess the impact of race and type of appointment on the percentage of total refills. All statistical tests were performed using SPSS 26 (Chicago, IL). All statistical tests were two-sided, and p < 0.05 was considered statistically significant. Variables were compared using either two-sample t-tests or linear regression. Values of 1 > r ≥ 0.8 were considered a strong positive linear correlation, 0.8 > r ≥ 0.4 a moderate positive linear correlation, 0.4 > r > 0 a weak positive linear correlation, r = 0 no correlation, 0 > r ≥ −0.4 a weak negative linear correlation, −0.4 > r ≥ −0.8 a moderate negative correlation, and −0.8 > r ≥ −1 a strong negative correlation.
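As a quick illustration of that correlation rubric, the following minimal sketch encodes the stated cutoffs as a Python function (the function name, the error check, and the treatment of r = 1 and r = −1 as "strong" are our choices, not part of the study's SPSS workflow):

```python
def correlation_strength(r: float) -> str:
    """Classify a Pearson r using the cutoffs stated in the Methods.

    Note: the rubric writes 1 > r >= 0.8; here r = 1.0 is folded into
    "strong positive" (and r = -1.0 into "strong negative") for simplicity.
    """
    if not -1.0 <= r <= 1.0:
        raise ValueError("Pearson r must lie in [-1, 1]")
    if r >= 0.8:
        return "strong positive"
    if r >= 0.4:
        return "moderate positive"
    if r > 0.0:
        return "weak positive"
    if r == 0.0:
        return "no correlation"
    if r >= -0.4:
        return "weak negative"
    if r >= -0.8:
        return "moderate negative"
    return "strong negative"

# Example: the refill/appointment correlation reported in the Results
print(correlation_strength(0.40))   # -> "moderate positive"
```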
Results
Data was obtained from 475 patients. Seventy-nine were removed because they either refilled their medication three or fewer times or did not have a complete prescription history available during the 41-month period. Of the 396 charts utilized, 302 patients were male and 94 were female. Additionally, 314 patients identified as Black, 78 as White, one as Asian, and 3 as not disclosed, which mirrored our overall patient demographic (Fig. 1; Table 1). The number of months that an ADHD prescription was refilled was significantly correlated with the total number of ADHD appointments (r = 0.40, p < 0.001), the number of virtual (r = 0.27, p < 0.001), and the number of in-person (r = 0.35, p < 0.001) ADHD appointments. The number of months (m) with prescription refills over the last three years was significantly higher among those who had both virtual and in-person visits (m = 22.03) than those who had just in-person visits (m = 15.97) (p < 0.001) (Fig. 2). This remained significant when accounting for racial background. Black patients with both in-person and virtual appointments had a higher number of refills than Black patients with only in-person appointments (m = 20.2 vs. 15.1) (p < 0.001), and the same was true for White patients (m = 26.6 vs. 21.2) (p = 0.022) (Fig. 3). Regarding racial differences, the number of virtual appointments was significantly lower among all Black patients (m = 0.7) than all White patients (m = 1.6) (p < 0.001), but there was not any significant difference in the total number of appointments (p = 0.08) or the number of in-person appointments (p = 0.88) between White and Black patients (Fig. 4). The number of refills was also significantly higher among White patients who had only in-person appointments or both in-person and virtual appointments than among Black patients with the same type of appointments (m = 21.2 vs. 15.1; m = 26.6 vs. 20.2) (p = 0.001; p < 0.001) (Fig. 3). Regarding age, there was a negative correlation between age and the number of months that ADHD prescriptions were refilled (r = −0.12, p = 0.012). Age was also negatively correlated with the total number of ADHD appointments (r = −0.30, p < 0.001); there was not any significant association between age and virtual appointments (r = −0.05, p = 0.29). Regarding sex, the percentage of refills among males who had in-person and virtual appointments was significantly higher than among males with only in-person appointments (m = 23.1 vs. 16) (p < 0.001) and among females with both in-person and virtual appointments (m = 23.1 vs. 18.0) (p = 0.011) (Fig. 5). The months with an ADHD prescription refill were divided into two groups, pre-pandemic and pandemic, defined in the methods. The total number of patients who had an ADHD prescription refilled in a given month during the pre-pandemic months ranged from 157-260 (40-66%), as opposed to 121-175 patients (31-44%) during the pandemic months (Figs. 6 and 7). There was a significant decrease in patients refilling their prescriptions each month during the pandemic compared to the pre-pandemic period (p < 0.001). The quarterly appointment schedule was similarly divided into pre-pandemic and pandemic groups. The total number of patients who had quarterly ADHD management appointments during the pre-pandemic months ranged between 241-276 (59-70%), as opposed to 130-198 patients (33-50%) during the pandemic months (Figs. 8 and 9). There was a significant decrease in the number of patients with quarterly appointments during the pandemic months (p < 0.001).
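The monthly refill comparison above can be outlined in code. The sketch below builds a synthetic patients × months boolean refill matrix, computes the percentage of patients refilling each month, and compares pre-pandemic with pandemic months using a two-sample t-test as in the Methods; the matrix and the refill probabilities are illustrative assumptions, not the study dataset:

```python
# Minimal sketch of the monthly-refill comparison on synthetic data.
import numpy as np
import pandas as pd
from scipy import stats

months = pd.period_range("2019-01", "2022-05", freq="M")
rng = np.random.default_rng(1)

# Synthetic refill matrix: 396 patients, higher refill probability
# before April 2020 (values chosen to echo the reported ranges).
pandemic = months >= pd.Period("2020-04", freq="M")
p_refill = np.where(pandemic, 0.38, 0.53)
refills = rng.random((396, len(months))) < p_refill  # broadcasts over months

pct_by_month = refills.mean(axis=0) * 100            # % of patients per month
pre = pct_by_month[~pandemic]
post = pct_by_month[pandemic]

t, p = stats.ttest_ind(pre, post)
print(f"pre-pandemic {pre.mean():.1f}% vs pandemic {post.mean():.1f}%, p = {p:.2g}")
```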
Discussion
Our study evaluated the medication adherence of patients with ADHD from one year before the pandemic to over two years into the pandemic. First, we found that medication refills decreased significantly after March 2020, and each pandemic month had a lower refill rate than any pre-pandemic month (31-44% versus 40-66%) (p < 0.001), except for July 2019. The decrease in medication refills between June and August 2019 was expected, because many patients often do not take medications during the summer when they are not attending school [21]. These findings are consistent with reports from parents of patients with ADHD that their children had worsening ADHD symptoms during lockdown and challenges with remote learning [4,8]. Because ADHD medication use relies on a regular daily schedule, such as taking the medication after breakfast or at certain times during the school day, the new dynamic of a remote learning environment during lockdown and COVID-19 surges was not conducive to maintaining a predictable schedule. Regularly changing school schedules necessitated regularly changing medication schedules, which could easily lead to missed doses or discontinuing medications entirely. Additionally, if some parents were required to stay home and in closer proximity to their children during remote learning, they potentially could provide more individual attention. This increased attention and vigilance may have allowed some children to stay on task with schoolwork for a longer time with less reliance on medications [22].
[Fig. 4: The distribution of the number of ADHD management appointments in relation to race. Fig. 5: The distribution of the total number of refills in relation to sex and appointment type. Fig. 6: The number of patients that refilled their ADHD prescription in a given month between January 2019 and May 2022. Fig. 7: The percentage of patients that refilled their ADHD prescription each month between January 2019 and May 2022. Fig. 8: The number of patients who had an ADHD management appointment by quarter.]
We expected refill rates to increase back to baseline by the start of the 2021/2022 academic year as schools across the country transitioned back to full-time in-person learning, but they remained unchanged. These patients did not necessarily find other coping strategies for their ADHD. While there is little published research examining ADHD symptoms and school performance after the COVID-19 lockdowns, the National Assessment of Educational Progress reported significant test score declines among fourth and eighth grade students in math and reading between 2019 and 2022 [23]. In Virginia specifically, Standards of Learning test pass rates in reading, math, and science were lower in 2021/2022 than in 2018/2019 [24]. Meanwhile, 41.4 million Adderall prescriptions were dispensed in 2021, a 10-20% increase from 2020, which is the opposite of what we found among our patients [25,26]. Even before the pandemic started, only 40-66% of patients had prescription refills in a given month. Students with ADHD are already at risk for low adherence given the nature of the condition, and adherence rates varied greatly [27]. One Canadian study from 2012 reported discontinuation rates of 19% with long-acting stimulants and 39% with short-acting, and another study evaluating GPA improvement with stimulant adherence among Philadelphia public school students only had a 20% adherence rate [27,28]. The prognosis of ADHD typically extends from childhood into adolescence, and even adolescents who had responded well to treatment usually continue to have significant impairment that extends into adulthood [6,29]. In our study, we saw a slight negative correlation between age and the number of months that an ADHD prescription was refilled (r = −0.12, p = 0.012), which cannot solely account for the difference in pre-pandemic and pandemic refill rates. We therefore think the unchanged refill rate is less likely due to an improvement in patients' ADHD symptoms but rather due to other factors preventing our patients from adhering to their regimen. In the context of the pandemic, COVID-19 seemed to exacerbate ADHD symptoms and medication non-compliance, and our patient population was not able to recover to pre-pandemic levels. Predictors that favor ADHD medication adherence include two-parent families, higher socioeconomic status, and Caucasian background [30]. The GAP clinic is a safety net clinic that primarily serves Black patients on Medicaid. This differential impact arises from the cost of the medication, added financial stressors during the pandemic, and supply chain issues that could prevent a family from refilling an ADHD medication. In our study, 79.3% of the subjects were Black, so a limited rate of adherence could be expected in our population, and this was sustained.
In addition to the low monthly pre-pandemic refill rates among all our patients, Black patients had a significantly lower number of months with refills than White patients with the same type of appointment (p < 0.001). Distrust in the medical system is also associated with decreased adherence [30]. In the context of misinformation about COVID-19 and vaccine safety, another study at the GAP clinic surveyed the parents of pediatric patients and found that 67.5% of 179 respondents were unlikely to vaccinate their child, with 73.1% of Black parents (87 parents) reporting being "unlikely to vaccinate" [31]. This hesitancy could also foster mistrust in other aspects of the patients' healthcare, including ADHD management. Other negative correlates of ADHD medication adherence are a lack of early follow-up after starting treatment and limited transportation services, while a good patient-physician relationship was associated with increased adherence [32]. In the context of our study, the patients who had the greatest number of refills returned to clinic the most (r = 0.40, p < 0.001). However, we also saw only 59-70% of ADHD patients return for follow-up every three months before the pandemic and 33-50% during the pandemic.
[Fig. 9: The percentage of patients who had an ADHD management appointment by quarter.]
The Healthcare Effectiveness Data and Information Set recommends two follow-up appointments after the first month of initiating an ADHD medication, one at three months and another at six months, so our patient follow-up rates were below the recommended guidelines [33]. As a whole, our population of patients already had several socioeconomic factors that limited their ability to engage with their healthcare and refill their medications prior to the pandemic. Incorporating telehealth into follow-up plans may be beneficial for increasing medication adherence. Having at least one telehealth management appointment correlated with a greater number of total months with refills, with a median of 27 versus 21 months among White patients (p = 0.022) and 18 versus 12 months among Black patients (p < 0.001) over a period of 41 months. These results are comparable to the CATTS findings, which demonstrated telehealth as a successful means to monitor ADHD symptoms, and to other studies showing improved follow-up rates when using telehealth [18,34,35]. Furthermore, integrating technology into patient care has already been successful in improving medication adherence. One study evaluated the efficacy of using a mobile app to track ADHD symptoms, provide medication reminders, and facilitate patient-physician communication, and found that patients using the app took their medication more regularly than the control group [36]. Similar results have been found among other groups, including hypertension and diabetes patients [37,38]. In our patient population, White patients had on average 1.6 virtual appointments while Black patients had only 0.7 (p < 0.001). There was a trend towards significance in the total number of appointments (p = 0.08) but no significant difference in the number of in-person appointments (p = 0.88) between White and Black patients. The difference in the number of telehealth appointments could be due to a lack of reliable smartphone, computer, or Wi-Fi access among our Black patients. Another possibility is that the patients able to do a combination of visit types may have highly organized, resourceful parents or fewer barriers to healthcare, which would also lead to a higher rate of prescription refills. Regardless, telehealth has clear benefits in health management, including improved access to appointments and reduced cost, which is
especially beneficial for ADHD patients who may have difficulty with in-person appointments [39]. For our patient population at the GAP clinic, telehealth could also provide an opportunity for more appointments and patient education. This could increase patient engagement, which in turn leads to increased adherence, as demonstrated by patients utilizing medical apps [36-38]. Improving access to telehealth and encouraging telehealth follow-ups, especially with our Black patients, could thus be conducive to increasing medication compliance among patients with ADHD. We were limited in our means to track medication adherence over the 41-month period we analyzed. While refilling a medication shows intention to use it, patients might not have been taking their medications consistently or could have adjusted their doses without consulting their physicians. These possibilities would change the number of refills they needed due to a potential surplus of medication, so they may not have needed to refill their medication as frequently. Recording the number of months with prescription refills would catch the patients with missed or altered doses, but not any patients who may refuse to take their medications even when their parents continue to refill them. Alternatively, surveying patients about their medication habits over three years could lead to inaccurate data due to recall bias. Therefore, analyzing prescription refills was the most reliable and efficient means to measure adherence. We elected not to evaluate the patients' ADHD symptoms through the use of Vanderbilt screening in relation to their patterns of refills, since this process was also disrupted significantly and would have been difficult to interpret. In addition, since CHKD GAP is a teaching clinic, numerous pediatricians manage our set of patients, and due to different styles of notetaking, we could not standardize symptoms into categories such as mild, moderate, or severe. Regarding differences between patients that did or did not utilize telehealth services, we did not have the means to evaluate differences between the two populations, such as socioeconomic disparities, because that information was not available in the electronic medical record during this time. This study quantified the impact and lingering effects of the COVID-19 pandemic on ADHD medication adherence in patients at the GAP clinic and, on a larger scale, identified a strong need to reevaluate symptoms and management among underrepresented youth with ADHD, especially Black patients. It also identified a positive correlation between incorporating telehealth into management and medication adherence. The CATTS study already saw an improvement in ADHD symptoms in rural patients with telehealth appointments compared to traditional in-person care [18]. Future randomized clinical trials must be performed to determine whether the same is true for Black patients and patients from an urban setting, with close attention to medication use. Prospective studies could also investigate differences between patients with telehealth management versus in-person-only management to highlight barriers to care patients might face. In any event, the pandemic worsened symptoms and adherence rates in children and adolescents already at risk for noncompliance, and there is an urgent need for pediatricians to reengage ADHD patients with their condition.
Conclusion
The COVID-19 pandemic caused an unprecedented disruption to the medical management of children and adolescents with ADHD. Our study found that a significant decrease in prescription refills and follow-up appointments occurred when the pandemic started, which disproportionately affected Black patients, and never returned to pre-pandemic levels. Reasons for decreased adherence are multifactorial, but telehealth appointments are a potential, accessible solution to mitigating these factors. Even though the COVID-19 emergency is over in the US, there is a need for pediatricians to reengage patients with their ADHD management to recover from the pandemic's lasting impact. Further prospective studies investigating differences between in-person follow-up and follow-up augmented with telehealth, including medication adherence, symptom management, and racial differences, are warranted.
[Fig. 1: The age distribution of the patients. Fig. 3: The distribution of the percentage of months with ADHD prescription refills between January 2019 and May 2022 in relation to race and appointment type. Table 1: The demographic information of the participants.]
Intermittent fasting and exercise therapy abates STZ‐induced diabetotoxicity in rats through modulation of adipocytokines hormone, oxidative glucose metabolic, and glycolytic pathway
Abstract Diabetes is a global, costly, and growing public health issue. Intermittent fasting (IF) and exercise therapy have been shown to improve insulin sensitivity (IS) in large studies, although the underlying processes are still unknown. The goal of this study, which included both nondiabetic and diabetic rats, was to look at the mechanisms of intermittent fasting and exercise in the management of diabetotoxicity. The effects of starvation and honey on the oral glucose tolerance test, insulin tolerance test, adipocytokines, oxidative glucose metabolic enzymes, glycolytic enzymes, food intake, and body weight in rats with streptozotocin‐induced diabetes were also investigated. In the nondiabetic phase, rats were administered an oral regimen of distilled water (0.5 ml/rat) or honey (1 g/kg body weight), or interventions with IF and starvation, for 4 weeks, while in the diabetic phase, interventions with IF, exercise, starvation, and honey treatment began after STZ or citrate buffer injections and lasted 4 weeks. At all OGTT and ITT points, there was a substantial rise in glucose in the STZ group. Adipocytokine hormones, oxidative glucose metabolic enzymes, glycolytic enzymes, and body weight were all affected by STZ; however, IF and exercise significantly reduced these alterations when compared to starvation and honey. In diabetic rats, intermittent fasting and exercise enhanced serum adipocytokine levels. These findings imply that adipokines modulate glycolytic/nonmitochondrial enzymes and glucose metabolic/mitochondrial dehydrogenases to mediate the antidiabetic effects of intermittent fasting and exercise.
| INTRODUCTION Fasting hyperglycemia, decreased insulin secretion, and insulin receptor insensitivity are all features of diabetes mellitus, a metabolic condition (Hudish et al., 2019). Diabetes mellitus is reported to impact more than 100 million individuals worldwide and is one of the world's top five causes of death (Otovwe & Akpojubaro, 2020; Yang et al., 2019). Persistent hyperglycemia in diabetics has been shown to generate excessive reactive oxygen species (ROS) in many organs by glucose autooxidation and/or protein glycation (Saddala et al., 2013). In animal models and humans with diabetes, there have also been findings of altered antioxidative enzyme activity and enhanced lipid peroxidation (Kade et al., 2008; Prabakaran & Ashokkumar, 2013; Schmatz et al., 2012). This condition is the result of issues associated with modern lifestyles, such as a high intake of processed foods, a growing geriatric population, decreased physical exercise, and obesity (Bekele et al., 2020). Type 2 diabetes mellitus (T2DM), cardiovascular diseases (CVDs), and fatty liver disease are all more common in people who have metabolic syndrome (Daryabor et al., 2019). Food restriction (FR) is defined as a reduction in food intake while maintaining minimal nutritional levels. In humans with T2DM (Albosta & Bakke, 2021) and in animal models, it has already shown improvements in pancreatic beta-cell activity, blood glucose control, and other parameters (Alejandra et al., 2018; Elesawy et al., 2021). Among the various approaches utilized to create FR are intermittent fasting (IF) (Kunduraci & Ozbek, 2020) and starvation.
Hypoglycemia, ketoacidosis, dehydration, hypotension, and thrombosis have all been linked to diabetics who practice IF (Hu et al., 2019). Short-term starvation causes insulin resistance in humans, according to previous research. More interestingly, exercise and intermittent fasting have long been recognized as important non-pharmacological tools for the treatment of diabetes (Corley et al., 2018; Harvie & Howell, 2017; Sampath Kumar et al., 2019; Sutton et al., 2018) and are accepted as adjunctive therapy in the management of type 2 diabetes mellitus owing to their ability to improve insulin sensitivity and insulin-stimulated muscle glucose uptake, both of which improve glucose utilization (Ko et al., 2018). As a result, elucidating the mechanisms underlying this type of intermittent fasting and exercise therapy-related improved insulin sensitivity could help researchers to better understand how insulin sensitivity develops in conditions like obesity and type 2 diabetes. | Animal use and handling All animal experiments were conducted in accordance with protocols authorized by the Research Committee on the Ethical Use of Animals (DSUA Care), Reference no REC/FBMS/DELSU/21/121. The researchers employed 80 adult Sprague Dawley rats of similar age (10-12 weeks) and weight (180-250 g) for the investigation. For 2 weeks of acclimation, the rats were kept in a regular habitat with uniform husbandry and photoperiodic conditions (12 h of light and 12 h of darkness) and an ambient room temperature of 28.0°C-30.0°C. Throughout the trial, all of the rats were kept in clean wooden cages with unlimited access to water and a standard rat chow diet. The National Research Council's "Guide for the Care and Use of Laboratory Animals" (NRC, 2011) was utilized to ensure that the animals used in this investigation were treated as humanely as possible. | Chemicals Commercially available honey (Golden Glory, Australia) was bought from a local market in Abraka, Delta State. Honey (1 g/kg/day) was diluted with distilled water before being given to the rats by gavage. Streptozotocin (STZ, 99% purity) was supplied by Sigma-Aldrich. All other chemicals used were of analytical grade and were also obtained from Sigma-Aldrich. | Induction of diabetes Diabetes was induced by giving a single intraperitoneal injection of low-dose streptozotocin (STZ, 50 mg/kg b.w.) in freshly made 0.1 M citrate buffer (pH 4.5). To avoid hypoglycemia, these rats were given unrestricted access to standard rat chow during the night after being injected with streptozotocin, together with a solution of saccharose (10 g/100 ml). Diabetes was diagnosed 72 h after STZ injection in rats with a fasting blood glucose level of more than 200 mg/dl. This was done with the One Touch UltraEasy Blood Glucose Monitoring System glucometer after blood was expressed from the tail vein. The nondiabetic groups were administered an intraperitoneal injection of freshly made 0.1 M citrate buffer (pH 4.5) without STZ. Four weeks after the STZ or citrate buffer injections, the treatments began. | Experimental design A total of 66 rats were divided into two phases: nondiabetic and diabetic. The nondiabetic phase is divided into five groups whereas the diabetic phase is divided into six groups, each consisting of six rats (n = 6). The groups of the nondiabetic phase include the Nondiabetic control, Intermittent fasting, Starvation, Exercise, and Honey (1 g/kg body weight) groups.
The groups of the diabetic phase include the Control, Diabetic control, Diabetic and intermittent fasting, Diabetic and starvation, Diabetic and exercise, and Diabetic and honey (1 g/kg body weight) groups. | Intermittent fasting intervention The intermittent fasting (IF) group was given absolute food deprivation for 24 h, followed by ad libitum access to rat chow for another 24 h. At noon, the IF group's food was withdrawn or made available. For the duration of the experiment, the IF group had unlimited access to water. Body weight change and food intake were tracked throughout the study. | Starvation intervention For 2 weeks, rat chow was withheld from a group of rats to test the effect of starvation (Namazi et al., 2016). During the prolonged starvation, none of the rats died. | Exercise intervention Individual cages with a running wheel (Accelerator Ltd.) were used to house the exercising animals, who had free access to the wheel for 24 h a day (method of Szalai et al., 2014). The exercise protocol, defined as a voluntary wheel-running paradigm, was chosen to isolate the effects of exercise from the additional stress associated with forced exercise regimens. The average running distance permitted throughout the exercise period was 4 km/day/animal for uniformity. | Measurement of body weight and food intake The rats' body weight was measured and documented at the start of the trial, and they were then weighed weekly with a digital weighing scale to track changes. The relative organ weights of the pancreas, liver, and heart were also measured and recorded. Furthermore, daily feed intake was assessed and recorded as a percentage: the weight of daily feed remnants was subtracted from the total weight of feed provided per group. | Sample collection and preparation Rats were euthanized following an overnight fast under diethyl ether anesthesia at the end of the fourth week (28 days) of treatment. Fasting blood glucose, insulin concentration, glucose tolerance and insulin tolerance tests, glucose intolerance and insulin sensitivity, adipocytokine hormones (adiponectin, ghrelin, resistin, and irisin), and oxidative glucose metabolic enzymes (ICDH, SDH, G6PDH, and LDH) were all assessed. Following that, liver tissues were dissected, cleaned of adherent tissues, washed with cold physiological saline (0.9% w/v), and pat dried on filter paper. The tissues were homogenized in a Teflon homogenizer (Heidolph Silent Crusher M) and then centrifuged at 10,000g for 15 min at 4°C. The activity of glycolytic/nonmitochondrial enzymes (G6Pase, F1,6BPase, HKase, and PKase) was evaluated by measuring the absorbance of the samples on a spectrophotometer (Shimadzu UV 1700). | Oral glucose tolerance test (oGTT) and insulin tolerance test (ITT) The procedures from Cummings et al. (2014) were used for the oGTT and ITT. | Oral glucose tolerance test (oGTT) The rats were subjected to a 12-h overnight fast in the final week of treatment. Then, using a tail snip, blood was obtained for glucose measurement (time 0) on a glucometer (FreeStyle Optium Neo). The animals were then given a glucose solution of 2 g/kg body weight via gavage, and blood glucose concentrations were recorded at 0, 30, 60, and 120 min. | Insulin tolerance test After the OGTT, an insulin tolerance test (ITT) was performed 48 h later. The animals were subjected to a 4-h food restriction in this case.
The blood was then drawn from the animal's caudal end and used to measure glucose (time 0) using a glucometer (FreeStyle Optium Neo). After that, the animals were given an intraperitoneal injection of regular human insulin (Humulin) at a dose of 0.75 U/kg body weight, and blood glucose levels were monitored at 0, 30, 60, and 120 min. | Determination of adipocytokines (adiponectin, ghrelin, irisin, and resistin) The levels of adipocytokines (adiponectin, ghrelin, irisin, and resistin) were tested using rat adiponectin, ghrelin, irisin, and resistin ELISA kits, as described by Jiménez-Maldonado et al. (2019). Serum samples diluted 1:500 were used in the experiment. Absorbance was measured at 450 nm within the first 30 min after the stop solution was applied (MyBiosource, Inc.). The samples were tested in duplicate and their mean absorbance was computed. The adiponectin and ghrelin/irisin assays have sensitivity limits of 0.4 ng/ml and 0.4 pg/ml, respectively, and quality control was verified using the kit's standards. | Estimations of oxidative glucose metabolic and glycolytic enzymes in rat serum Using the ELISA approach, the oxidative glucose metabolic status (ICDH, SDH, G6PDH, and LDH) and glycolytic enzyme activities (G6Pase, F1,6BPase, HKase, and PKase) in serum and liver were examined and quantified (R & D Systems, USA, and Thermo Fisher Scientific, respectively). | Statistical analysis GraphPad Prism 8 biostatistics software (GraphPad Software, Inc., version 8.0) was used to examine the data. All data were reported as the mean ± standard error of the mean (SEM). A one-way analysis of variance (ANOVA) was used, followed by a post hoc test (Bonferroni) for multiple comparisons. The significance level for all tests was set at p < 0.05. | RESULTS | Effect of intermittent fasting, starvation, exercise and honey on the oral glucose tolerance test (OGTT) in naïve and streptozotocin-induced diabetic male rats The effect of intermittent fasting, starvation, exercise and honey on the OGTT in naïve and streptozotocin-induced type 2 diabetic (T2DM) male rats is shown in Table 1a/Figure 1a and Table 1b/Figure 1b. The OGTT was used to evaluate glucose metabolism in nondiabetic and diabetic rats. The OGTT results of intervention with intermittent fasting, starvation, and exercise in rats show a decrease in blood glucose levels at all points of the OGTT when compared with the nondiabetic control rats, as represented in Table 1a and Figure 1a, whereas rats treated with honey revealed a significant increase in blood glucose levels at all points of the OGTT relative to the intermittent fasting, exercise, starvation, and nondiabetic control rats. The OGTT results of streptozotocin-induced diabetic rats showed a significant increase in blood glucose levels at all points of the OGTT when compared with the nondiabetic control rats, as represented in Table 1b and Figure 1b. To establish that intermittent fasting, starvation, and exercise enhanced glucose metabolism in diabetic rats, OGTTs were conducted in diabetic rats, as represented in Table 1b and Figure 1b. At the beginning of intervention with intermittent fasting, starvation, and exercise, the level of blood glucose (indicated by the OGTT) was significantly lower than in the diabetic group at 0, 30, 90, and 120 min after glucose loading.
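The AUC panels referenced in Figure 1 summarize each OGTT curve as the area under the glucose-time profile. The sketch below shows that computation with trapezoidal integration over the sampled time points; the glucose values are hypothetical, not the study's measurements:

```python
# Minimal sketch of an OGTT area-under-the-curve computation.
import numpy as np

t = np.array([0, 30, 60, 120])             # minutes after glucose load
control = np.array([90, 150, 130, 100])    # hypothetical blood glucose, mg/dl
diabetic = np.array([220, 340, 320, 290])

auc_control = np.trapz(control, t)         # units: mg/dl x min
auc_diabetic = np.trapz(diabetic, t)
print(f"AUC control:  {auc_control:.0f} mg/dl*min")
print(f"AUC diabetic: {auc_diabetic:.0f} mg/dl*min")
```

Trapezoidal integration is the conventional choice with only four sampling times, since it makes no assumption about the curve's shape between measurements.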
Although no significant changes were noticed in blood glucose levels at all points of the OGTT when compared with the diabetic rats, this finding suggests glucotoxicity due to beta-cell destruction, which was ameliorated by intermittent fasting, starvation, and exercise intervention.
[Table 1: (a) Glucose tolerance test on normal rats under the different intervention protocols; (b) glucose tolerance test on normal and diabetic rats under the different intervention protocols. Figure 1: (a) Area under the curve (AUC) of the glucose tolerance test on normal rats under the different intervention protocols; (b) AUC of the glucose tolerance test on normal and diabetic rats under the different intervention protocols.]
| Effect of intermittent fasting, starvation, exercise and honey on the insulin tolerance test (ITT) in naïve and streptozotocin-induced diabetic male rats The effect of intermittent fasting, starvation, exercise and honey on the insulin tolerance test (ITT) in naïve and streptozotocin-induced type 2 diabetic (T2DM) male rats is shown in Table 2a and Figure 2a. The ITT was used to evaluate glucose metabolism in nondiabetic and diabetic rats. At the beginning of intervention with intermittent fasting, starvation, and exercise, the level of glucose (indicated by the ITT) was not different across the groups at 0 min; however, the serum glucose level was significantly higher in the intermittent fasting, starvation, and exercise groups than in the control group at 30, 60, 90, and 120 min after glucose loading, as represented in Table 2a and Figure 2a, whereas at 120 min blood glucose levels decreased in the intermittent fasting and exercise groups and increased in rats treated with honey, with no changes observed with starvation when compared to nondiabetic rats. The ITT results of streptozotocin-induced diabetic rats showed a significant increase in blood glucose levels at all points of the ITT when compared with the nondiabetic control rats, as represented in Table 2b and Figure 2b. To confirm that intermittent fasting, starvation, and exercise enhanced glucose metabolism in diabetic rats, ITTs were conducted in diabetic rats, as represented in Table 2b and Figure 2b. At the beginning of intervention with intermittent fasting, starvation, and exercise, the level of blood glucose (indicated by the ITT) was significantly lower than in the diabetic group at 0, 30, 90, and 120 min after glucose loading. More specifically, starvation exerted a greater decrease in serum glucose levels at all points of the ITT when compared to intermittent fasting and exercise.
[Table 2: (a) Glucose tolerance test on normal rats under the different intervention protocols; (b) glucose tolerance test on normal and diabetic rats under the different intervention protocols. Figure 2: (a) Area under the curve (AUC) of the insulin tolerance test on normal rats under the different intervention protocols; (b) insulin tolerance test on normal and diabetic rats under the intervention protocols.]
| Effect of intermittent fasting, starvation, exercise and honey on food intake and body weight in naïve and streptozotocin-induced diabetic male rats Figure 3A,B show the effect of intermittent fasting, starvation, exercise and honey on food intake and body weight in naïve and streptozotocin-induced type 2 diabetic (T2DM) male rats. As shown in Figure 3A, intervention with starvation and honey significantly (p < 0.001) increased food intake. Also, honey significantly increased body weight, whereas starvation significantly decreased body weight relative to the nondiabetic control, intermittent fasting, and exercise groups.
No changes were observed in food intake and body weight with intermittent fasting and exercise when compared to nondiabetic control animals. Changes in the amount of food intake as well as body weight were measured in the diabetic rats of the experimental and nondiabetic control animals and are expressed in Figure 3B. The amount of food intake was significantly (p < 0.001) increased, whereas body weight was markedly (p < 0.001) decreased, in the diabetic and starved rats relative to nondiabetic rats. Intermittent fasting and exercise intervention restored these changes to near-normal. However, starvation and honey intervention in diabetic rats did not show any significant changes in the amount of food intake. | Effect of intermittent fasting, starvation, exercise and honey on serum adipocytokine hormones (adiponectin, ghrelin, and irisin) in naïve and streptozotocin-induced diabetic male rats The effects of intermittent fasting, starvation, exercise and honey on serum adiponectin, ghrelin and irisin activities in naïve and streptozotocin-induced type 2 diabetic (T2DM) male rats are shown in Figure 4A. Starvation also significantly increased adiponectin, ghrelin, and irisin levels, but not to the extent of intermittent fasting and exercise, when compared to the nondiabetic control groups. No statistically significant changes were observed in serum adiponectin, although a significant decrease was observed in irisin and ghrelin levels, in rats treated with honey relative to nondiabetic controls. Furthermore, intermittent fasting and exercise intervention revealed marked significant changes in adiponectin, ghrelin, and irisin when compared with starvation and honey. A statistically significant difference in the levels of adiponectin, ghrelin, and irisin was evaluated in the diabetic rats' serum of the experimental and nondiabetic control animals and is expressed in Figure 4B. In the post hoc test, the concentrations of serum adiponectin (F(5, 30) = 94.99, p < 0.0001), ghrelin (F(5, 30) = 127.6, p < 0.0001) (Figure 6B), and irisin (F(5, 30) = 540.1, p < 0.0001) were markedly (p < 0.001) decreased in the serum of diabetic rats relative to nondiabetic rats. Intermittent fasting, starvation, exercise, and honey intervention restored these changes to near-normal. However, intermittent fasting and exercise intervention in diabetic rats showed more significant changes in the activities of adiponectin, ghrelin, and irisin when compared to starvation and honey. | Effect of intermittent fasting, starvation, exercise and honey on serum oxidative glucose metabolic enzymes/mitochondrial dehydrogenases (ICDH, SDH, G6PDH, and LDH) in naïve and streptozotocin-induced diabetic male rats The effects of intermittent fasting, starvation, exercise and honey on serum oxidative glucose metabolic enzymes/mitochondrial dehydrogenases (ICDH, SDH, G6PDH, and LDH) in naïve and streptozotocin-induced type 2 diabetic (T2DM) male rats are shown in Figure 5A,B. As shown in Figure 5A, starvation decreased ICDH, SDH, and LDH levels and increased G6PDH when compared to intermittent fasting and exercise, although no significant changes were observed in G6PDH when compared to the honey and nondiabetic control groups. No statistically significant changes were observed in ICDH, SDH, LDH, and G6PDH levels in rats treated with honey relative to nondiabetic controls.
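The F(5, 30) statistics reported in these results follow from the one-way ANOVA described in the statistical analysis section, with six groups of n = 6 rats (between-groups df = 5, within-groups df = 30). The sketch below illustrates the ANOVA and a Bonferroni-corrected post hoc loop on synthetic data; the group means and the SciPy-based implementation are our assumptions, not the study's GraphPad workflow:

```python
# Minimal sketch of a one-way ANOVA with Bonferroni post hoc comparisons.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
names = ["control", "diabetic", "IF", "starvation", "exercise", "honey"]
means = [10.0, 4.0, 9.0, 6.0, 9.5, 5.0]      # hypothetical enzyme activities
groups = [rng.normal(m, 1.0, size=6) for m in means]  # n = 6 per group

f, p = stats.f_oneway(*groups)
print(f"one-way ANOVA: F(5, 30) = {f:.2f}, p = {p:.2g}")

# Bonferroni correction: alpha divided by the 15 pairwise comparisons
pairs = list(combinations(range(len(groups)), 2))
alpha = 0.05 / len(pairs)
for i, j in pairs:
    t, pv = stats.ttest_ind(groups[i], groups[j])
    if pv < alpha:
        print(f"{names[i]} vs {names[j]}: p = {pv:.2g} (significant)")
```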
Moreover, intermittent fasting and exercise intervention revealed marked significant changes in ICDH, SDH, LDH, and G6PDH levels when compared with starvation and honey. Changes in the levels of ICDH, SDH, LDH, and G6PDH were evaluated in the diabetic rats of the experimental and nondiabetic control animals, as expressed in Figure 5B. The serum of diabetic rats depicted a marked (p < 0.001) reduction in ICDH (F(5, 30) = 69.67, p < 0.0001), SDH (F(5, 30) = 67.77, p < 0.0001), and G6PDH (F(5, 30) = 203.1, p < 0.0001) activities and a marked (p < 0.001) increase in LDH (F(5, 30) = 144.2, p < 0.0001) activity. The changes in ICDH, SDH, G6PDH, and LDH activities were reverted to the normal range in the serum of diabetic rats under the intermittent fasting and exercise interventions. Starvation intervention, compared with diabetic control rats, did not illustrate any marked differences in ICDH and SDH activities but was observed to increase G6PDH and LDH, although this elevation, and that of the honey intervention, did not reach the level of the intermittent fasting and exercise interventions.
[Figure 3: (A, B) Effect of intermittent fasting, starvation, exercise and honey on food intake (A) and body weight (B) in naïve male Wistar rats. Bars represent mean ± S.E.M. (n = 6); one-way ANOVA followed by Bonferroni post hoc test; symbols denote pairwise significance versus the control, diabetic, intermittent fasting, exercise, and honey groups.]
| Effect of intermittent fasting, starvation, exercise and honey on liver glycolytic/nonmitochondrial enzymes (G6Pase, F1,6BPase, HKase, and PKase) in naïve and streptozotocin-induced diabetic male rats The effects of intermittent fasting, starvation, exercise and honey on liver glycolytic/nonmitochondrial enzymes (G6Pase, F1,6BPase, HKase, and PKase) in naïve and streptozotocin-induced type 2 diabetic (T2DM) male rats are shown in Figure 6A,B. As shown in Figure 6A, intervention with intermittent fasting and exercise significantly altered these enzyme activities, while starvation markedly decreased G6Pase, F1,6BPase, and PKase levels and decreased HKase when compared to intermittent fasting and exercise. No statistically significant changes were observed in HKase and PKase levels in rats treated with honey relative to nondiabetic controls. Furthermore, intermittent fasting and exercise intervention revealed marked significant changes in G6Pase, F1,6BPase, and PKase levels, and decreased HKase levels, when compared with starvation and honey. Changes in the levels of G6Pase, F1,6BPase, HKase, and PKase were evaluated in the liver of diabetic rats of the experimental and nondiabetic control animals, as expressed in Figure 6B.
The livers of diabetic rats showed marked (p < 0.001) alterations in G6Pase (F [5, 30] = 69.67, p < 0.0001), F1,6BPase (F [5, 30] = 67.77, p < 0.0001), HKase (F [5, 30] = 203.1, p < 0.0001), and PKase (F [5, 30] = 144.2, p < 0.0001) activities: G6Pase and F1,6BPase were markedly increased, whereas HKase and PKase were markedly decreased. The changes in G6Pase, F1,6BPase, HKase, and PKase activities were reverted toward the normal range in the livers of the starvation group as well as with the intermittent fasting and exercise interventions in diabetic rats.

Figure 5 (A, B). Effect of intermittent fasting, starvation, exercise and honey on serum isocitrate dehydrogenase (ICDH) (a), and succinate dehydrogenase (SDH), glucose-6-phosphate dehydrogenase (G6PDH) and lactate dehydrogenase (LDH) (b) activities in naïve male Wistar rats. Bars represent mean ± S.E.M. (n = 6) (one-way ANOVA followed by Bonferroni post hoc test). (A) * p < 0.05, ** p < 0.01, *** p < 0.001 relative to controls; b p < 0.05 relative to intermittent fasting group; c p < 0.05 relative to exercise group; d p < 0.05, dd p < 0.01 relative to honey group. (B) **** p < 0.0001 relative to controls; a p < 0.05, aa p < 0.01, aaaa p < 0.001 relative to diabetic group; b p < 0.05, bb p < 0.01 relative to intermittent fasting group; c p < 0.05, cc p < 0.01 relative to exercise group; d p < 0.05, dd p < 0.01, dddd p < 0.0001 relative to honey group.

Honey intervention, compared with diabetic control rats, did not show any marked differences in G6Pase, F1,6BPase, HKase, and PKase activities, but was observed to increase G6Pase and F1,6BPase, whereas no changes were observed in HKase and PKase.

[Figure legend: (A) * p < 0.05, ** p < 0.01, *** p < 0.001 relative to controls; b p < 0.05 relative to intermittent fasting group; c p < 0.05 relative to exercise group; d p < 0.05, dd p < 0.01 relative to honey group. (B) **** p < 0.0001 relative to controls; aa p < 0.01, aaaa p < 0.001 relative to diabetic group; b p < 0.05 relative to intermittent fasting group; c p < 0.05 relative to exercise group; d p < 0.05, dd p < 0.01, dddd p < 0.0001 relative to honey group.]

Blood glucose remained elevated in diabetic rats at all points of the OGTT and ITT (Abdulwahab et al., 2021; Oza & Kulkarni, 2018). STZ induces type 2 diabetes, as well as reduced glucose tolerance and insulin resistance, according to Mahmoud et al. (2017). Although insulin levels may be normal or even elevated in some diabetic patients, most tissues are unable to use glucose, resulting in hyperglycemia; glucose intolerance is the medical term for this. One of the most prominent procedures for evaluating glucose intolerance is the oral glucose tolerance test (OGTT), which probes problems with blood glucose regulation or glucose homeostasis. In this investigation, the blood glucose levels of diabetic rats were considerably raised after consumption of glucose during an OGTT (Lodhi and Kori, 2021; Germoush et al., 2019). With intermittent fasting, starvation, and exercise intervention, the concentration of blood glucose in diabetic rats was elevated to a peak after 30 min, and then restored to fasting blood glucose ranges after 60 and 120 min. Untreated diabetic rats, on the other hand, exhibited greater blood glucose levels at 30 and even 120 min. When honey-treated diabetic rats were compared to untreated diabetic rats, glucose levels remained higher.
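The group comparisons reported throughout these results follow the design stated in the figure legends: one-way ANOVA over six groups of six rats (hence the F [5, 30] statistics) followed by Bonferroni-corrected post hoc tests. Below is a minimal sketch of that procedure; the group names, means, and simulated values are placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats
from itertools import combinations

rng = np.random.default_rng(0)
# Hypothetical enzyme activities for six groups of n = 6 rats each,
# standing in for the real measurements behind the F[5, 30] statistics.
groups = {g: rng.normal(loc, 1.0, size=6) for g, loc in
          [("control", 10), ("diabetic", 4), ("fasting", 9),
           ("starvation", 7), ("exercise", 9), ("honey", 5)]}

# One-way ANOVA: with 6 groups of 6 animals, df = (5, 30), matching F[5, 30].
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"F[5, 30] = {f_stat:.2f}, p = {p_val:.2g}")

# Bonferroni post hoc: pairwise t tests with alpha divided by number of pairs.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(a, b, "significant" if p < 0.05 / len(pairs) else "n.s.")
```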
Intermittent fasting, starvation, and exercise were found to help improve glucose tolerance by lowering glucose absorption from the intestine, enhancing insulin sensitivity, and boosting insulin action on diverse tissues for glucose uptake, according to the findings of Albosta and Bakke (2021) and Dwaib et al. (2021). Weight loss, muscle wasting, excessive hair loss, scaling, cataract, increased food and water consumption, polyuria, dehydration, and other symptoms are all observed in diabetic rats. In this work, the body weight of STZ-induced diabetic rats was dramatically lowered. Because diabetic rat cells may be unable to use glucose for energy production due to decreased insulin action or secretion, this could be explained by higher consumption of fat and protein. Increased protein catabolism to generate amino acids for gluconeogenesis also leads to muscle wasting and weight loss (Srinivasan et al., 2014). This drop could also be due to structural protein breakdown, which contributes to weight loss (Mahajan et al., 2020). In STZ-induced experimental DM, weight loss is associated with increased tissue protein breakdown and muscle degeneration (Mahajan et al., 2020). The amount of food ingested by control and experimental rats was also recorded on a daily basis in this study. Food consumption rose dramatically in diabetic rats, which could be due to impaired glucose utilization by tissues, resulting in a high amount of glucose excretion through urine, which produces a persistent stimulus to eat more food. In the intermittent fasting and exercise groups, diabetic rats showed less weight loss and less overeating. This could be due to the fact that intermittent fasting and exercise help to manage blood sugar levels (Albosta & Bakke, 2021; Spezani et al., 2020). By decreasing calorie intake and resetting the metabolism, intermittent fasting can help reduce obesity and, as a result, insulin resistance. Furthermore, greater AMP-activated protein kinase (AMPK) activation has been demonstrated to promote healthy aging and a reduction in chronic disease through energy/nutrient depletion (such as caloric restriction) (Burkewitz et al., 2016). Reduced energy intake, such as that obtained through intermittent fasting, should result in long-term reductions in insulin production, as seen in this study, as well as increased levels of AMPK, which is thought to play a role in the improved insulin sensitivity and glucose homeostasis also seen in this study. Larson-Meyer et al. (2006) discovered that in overweight, glucose-tolerant persons, a 25% reduction in calories, either by diet alone or diet combined with exercise, enhanced insulin sensitivity and reduced cell sensitivity. Several obesity studies, on the other hand, have found that humans have a hard time sticking to a daily calorie restriction for long periods of time (Anton et al., 2017). Intermittent fasting, by contrast, has a higher compliance rate and has been shown to help obese people improve metabolic risk factors, body composition, and weight loss (Albosta & Bakke, 2021; Anton et al., 2017; Spezani et al., 2020). The shift in the body's main fuel source during fasting from glucose to fatty acids and ketones has been related to these favorable outcomes (Anton et al., 2017).
We measured serum adiponectin and ghrelin levels in diabetic rats to better understand the physiological mechanisms by which intermittent fasting and exercise exert their therapeutic effects on serum glucose and insulin levels. Adipokines are involved in energy homeostasis and the regulation of glucose and lipid metabolism, immunity, neuroendocrine function, insulin sensitization, anti-inflammatory and antiatherogenic function, and cardiovascular function (Duszka et al., 2021; Dwaib et al., 2021; Liang et al., 2021; Di Sessa et al., 2019; Spezani et al., 2020). Adiponectin has been found to affect insulin sensitivity in diabetic mice (Saad et al., 2015). Adiponectin levels are low in persons with obesity, type 2 diabetes, and coronary artery disease (Looker et al., 2004; Raji et al., 2004). In this investigation, diabetic rats experienced a considerable drop in serum adiponectin, as previously reported (Ahmed et al., 2012; Mahmoud et al., 2013). Lower serum levels of adiponectin and ghrelin have been associated with insulin resistance, poor insulin sensitivity, and the genesis of obesity and type 2 diabetes (Statnick et al., 2000). Diabetic rats that fasted intermittently and exercised for 4 weeks had greater blood levels of adiponectin and ghrelin (Ouerghi et al., 2021; Stensel, 2010). This was associated with improvements in glucose tolerance, insulin sensitivity, hepatic glucose production, and peripheral glucose uptake (Polito et al., 2020; Stensel, 2010). According to current data, ghrelin may play a role in metabolic syndrome (Ukkola, 2009). In a range of pathophysiological situations, such as obesity, type 2 diabetes, and other metabolic abnormalities, ghrelin concentrations have been demonstrated to be lowered (Barazzoni et al., 2007; Poykko et al., 2003). Insulin has been proven to decrease ghrelin release in healthy normal-weight and overweight adults (St-Pierre et al., 2007; Weickert et al., 2008). According to a prior study, hyperinsulinemia with simultaneous hyperglycemia has no influence on plasma ghrelin at concentrations seen in insulin-resistant patients, but only at pharmacological insulin doses. Because ghrelin has been found to drive adipogenesis in vitro, the decline in adiponectin could be attributable to a drop in ghrelin levels (Mano-Otagiri et al., 2009). By suppressing gluconeogenesis and boosting lipid oxidation, adiponectin has been demonstrated to activate AMP-activated protein kinase (AMPK), resulting in better insulin sensitivity and glucose metabolism regulation (Yamauchi et al., 2002). Adiponectin also suppresses hepatic gluconeogenesis by lowering the expression of glucose-6-phosphatase and phosphoenolpyruvate carboxylase, lowering hepatic glucose production (Yamauchi et al., 2002). Through these processes, adiponectin and ghrelin contribute to enhanced insulin-induced signal transduction and hence improved insulin sensitivity (Ouerghi et al., 2021; Yamauchi et al., 2002). Irisin, a new adipocytokine, is released, activated, and transported to a variety of tissues and organs to carry out its physiological tasks. It can, for example, improve insulin resistance, boost uncoupling protein-1 expression, convert white fat into brown fat with catabolic properties, increase energy consumption and glucose utilization, and coordinate the treatment of metabolic illnesses like obesity and type 2 diabetes (Jung et al., 2017; Rizk et al., 2016; Xuan et al., 2020).
As a result, unlike starvation, exercise and intermittent fasting can improve insulin resistance and have a modest hypoglycemic impact, as revealed in our work. This could be due to exercise increasing irisin secretion in skeletal muscle (Sousa et al., 2021; Xuan et al., 2020). Our findings revealed that STZ-induced diabetic rats had lower irisin levels than nondiabetic controls, which is consistent with the findings of most previous studies in animals and humans (Choi et al., 2013; Elizondo-Montemayor et al., 2019; Liu et al., 2013; Moreno-Navarrete et al., 2013; Yan et al., 2014; Zhang et al., 2016; Xuan et al., 2020). When diabetic rats were compared to nondiabetic control rats, non-pharmacological therapies such as intermittent fasting and exercise were observed to generate an increase in serum irisin. Elevated serum irisin related to intermittent fasting and exercise has been connected to improved metabolic health, insulin signaling, glucose homeostasis, and other aspects of the glycemic profile in animal STZ models of diabetes, making it a prospective target in the management of metabolic diseases. The glycolysis pathway, which starts with hexokinase (HK) phosphorylating glucose to glucose 6-phosphate, is at the core of cellular metabolism. The HK isoenzymes play a crucial role in energy metabolism. In mammalian cells, four isoforms of hexokinase are involved in glucose oxidation (Wilson, 1995). The activity of HK I-III is regulated by the cell's glucose 6-phosphate concentration, which acts as a feedback inhibitor. Insulin, glucagon, and glucokinase regulatory protein regulate the activity of HK-IV, commonly known as glucokinase, which has a low affinity for glucose yet is the predominant glucose-phosphorylating enzyme. HK-I and HK-IV are expressed more in the liver than the other HKs. Previous research has shown that liver HK is important for glucose consumption and glycogen production (Postic, 2001), and that its activity is decreased in diabetes. In this investigation, the liver HK activity of diabetic rats was shown to be significantly lower. A decrease in insulin sensitivity and an increase in insulin resistance could be to blame. After intervention with intermittent fasting and exercise, the HK activity in the liver of diabetic rats was dramatically increased. Intermittent fasting and exercise may have improved insulin sensitivity for glucose reuptake by the cells, resulting in this rise. Intermittent fasting and exercise thus enhanced glucose metabolism and glucose homeostasis by boosting HK activity in the liver. Pyruvate kinase (PK) transforms phosphoenolpyruvate to pyruvate and generates ATP. There are four isoforms of PK: L (liver-type), R (red blood cell-type), M1 (muscle-type), and M2. Yamada and Noguchi (1998) showed that PK-L is expressed most highly in the liver and at lower levels in the kidneys, pancreatic β-cells, and small intestine, whereas PK-R is exclusively present in red blood cells. PK-M1 is present in the brain, heart, and skeletal muscle, while PK-M2 is found in other tissues (Noguchi et al., 1991). In persons with diabetes, reduced PK activity may be the reason for impaired glucose metabolism and ATP generation. The current study found a considerable reduction in PK activity in the livers of STZ-induced diabetic rats, resulting in decreased glycolysis and enhanced gluconeogenesis.
Earlier research had yielded similar findings (Palsamy & Subramanian, 2009; Prasath & Subramanian, 2011; Srinivasan et al., 2014). The PK activity in the livers of diabetic rats was recovered to near-normal levels with intermittent fasting and exercise. The enzyme glucose-6-phosphatase (G6Pase) is essential for glucose homeostasis. Bouché et al. (2004) identified it largely in the liver and kidney, where it aids glucose production during starvation or prolonged fasting, as well as in diabetes mellitus. G6Pase carries out the dephosphorylation step of the glycogenolysis and gluconeogenesis pathways, in which glucose-6-phosphate is transformed to free glucose. This enzyme, which is coupled to the glucose-6-phosphate transporter, hydrolyzes glucose-6-phosphate into glucose and phosphate in the endoplasmic reticulum (Chou et al., 2002). G6Pase is activated by cAMP, whereas insulin inhibits it. Similar to prior investigations, the current study discovered a considerable increase in G6Pase activity in the liver of STZ-induced diabetic rats (Palsamy & Subramanian, 2009; Prasath & Subramanian, 2011; Srinivasan et al., 2014). Intermittent fasting and exercise brought G6Pase activity back to near-normal levels in diabetic rats. Fructose-1,6-bisphosphatase (F1,6BP) is a rate-limiting enzyme in the gluconeogenic pathway that dephosphorylates fructose-1,6-bisphosphate to fructose-6-phosphate. It is usually present in the liver and kidney, but it can also be found in the β-cells of the pancreas. In this investigation, the activity of F1,6BP in the liver of STZ-induced diabetic rats was found to be considerably higher. This result is in line with prior findings (Palsamy & Subramanian, 2009; Prasath & Subramanian, 2011; Srinivasan et al., 2014). Increased F1,6BP activity may be a mechanism to initiate endogenous glucose production from glycerol via gluconeogenesis during diabetes (Nurjhan et al., 1992). Intermittent fasting and exercise drastically lowered F1,6BP activity in diabetic rats' livers, restoring glucose homeostasis by limiting gluconeogenesis from gluconeogenic substrates without direct impacts on glycolysis, glycogenolysis, and the citric acid cycle. During anaerobic glycolysis, which occurs both in the cytosol and in the mitochondria, LDH converts pyruvate to lactate to provide energy (Bouché et al., 2004; Kavanagh et al., 2004). H (heart-type) and M (muscle-type) are the two subunits of LDH, and their synthesis is controlled by two distinct genes. Glucose, insulin, and NADH limit LDH activity, whereas cytosolic ATP, Ca2+, and mitochondrial membrane potential boost it (Ainscow et al., 1999). Reduced LDH activity in tissues may be needed to ensure that glycolysis produces a high ratio of NADH and pyruvate, which is oxidized by the mitochondria. In this work, the activity of LDH was observed to be considerably higher in the livers of STZ-induced diabetic rats. Similar findings have been seen in other studies (Palsamy & Subramanian, 2009; Prasath & Subramanian, 2011). Diabetes-related increases in LDH activity may disrupt glucose metabolism and reduce insulin sensitivity. The activity of LDH in the liver of diabetic rats was returned to near-normal by the shift in the ratio of NADH and pyruvate produced by intermittent fasting and exercise; as a result, the process of glucose (pyruvate) oxidation in the mitochondria is improved. G6PDH is a pentose phosphate pathway regulator that creates NADPH, which is needed to regenerate reduced glutathione from oxidized glutathione.
According to an earlier study, NADPH produced by G6PDH is essential for the generation of reactive oxygen species (ROS) such as superoxide and nitric oxide radicals in hepatic and extrahepatic tissues, as well as for their eradication by catalase and glutathione peroxidase (GPx) (Park et al., 2006). Glutathione levels have been associated with reduced oxidative stress and with G6PDH activity (Dora et al., 2021; Nóbrega-Pereira et al., 2016). In this work, the activity of G6PDH in the liver of diabetic rats was found to be considerably lower. This finding is in line with earlier research (Palsamy & Subramanian, 2009; Prasath & Subramanian, 2011; Srinivasan et al., 2014). The reduced activity of G6PDH could possibly contribute to the advancement of diabetes complications. With intermittent fasting and exercise, G6PDH activity in diabetic rats was considerably increased to near-normal levels. Furthermore, both intermittent fasting and exercise treatments boosted hexokinase and pyruvate kinase activity in the diabetic rats' livers while lowering glucose-6-phosphatase and fructose-1,6-bisphosphatase. Diabetes increases the rate of glycogenolysis and gluconeogenesis, resulting in higher hepatic glucose production (Raju et al., 2001). Hexokinase activity was found to be lower and glucose-6-phosphatase activity was shown to be higher in previous studies, resulting in lower liver glycogen and hyperglycemia (Ahmed et al., 2012; Grover et al., 2000). Increased insulin production with a matching rise in insulin resistance, which activates the glycogenolytic and gluconeogenic pathways, is another mechanism contributing to a decrease in liver glycogen (Mahmoud et al., 2015; Pari & Murugan, 2005). The drop in SDH activity generated by STZ-induced oxidative stress suggests a decrease in succinate-to-fumarate conversion, reflecting a decrease in oxidative metabolism. The synthesis of fumarate is increased when phosphoenolpyruvate is diverted during a stressful scenario, resulting in SDH product inhibition (Rajeswarareddy et al., 2012). SDH activity may be reduced in diabetic rats' tissues due to enzymatic failure caused by activation of lipid peroxidation. This could be owing to an excess of free radicals created in response to the harmful effects. Diabetic rats on the non-pharmacological intermittent fasting and exercise program had higher SDH activity than untreated diabetic rats. The antioxidant-boosting benefits of intermittent fasting and exercise could account for this increase (Allen et al., 2020; Nurmasitoh et al., 2018; Shahandeh et al., 2013). Furthermore, higher SDH activity in diabetic rats during intermittent fasting and exercise suggests that the TCA cycle is more efficient at using energy-producing intermediates. Isocitrate dehydrogenase (ICDH) catalyzes the oxidative decarboxylation of isocitrate to α-ketoglutarate, which requires either NAD+ or NADP+ to create NADH and NADPH, respectively (Rajeswarareddy et al., 2012). NADPH is necessary for the operation of the NADPH-dependent thioredoxin system and the regeneration of reduced glutathione (GSH) by glutathione reductase, both of which are vital in the protection of cells against oxidative damage (Rajeswarareddy et al., 2012). As a result, during oxido-nitrergic stress, ICDH could operate as an antioxidant. By providing NADPH for GSH synthesis, ICDH protects against mitochondrial and cytosolic oxidative damage (Rajeswarareddy et al., 2012).
As a result of ICDH damage, the equilibrium between oxidants and antioxidants may be disrupted, resulting in a prooxidant state. In STZ-induced diabetic rats, the activity of isocitrate dehydrogenase (ICDH) was assessed and found to be considerably lower in the diabetic group than in the nondiabetic control group. Rajeswarareddy et al. (2012) published similar findings, indicating that mitochondrial ICDH activity in the diabetic group was lower than in the nondiabetic control group. The glycation of ICDH can prevent it from performing its function, and glycation aids the inactivation of ICDH by reactive oxygen species. After non-pharmacological treatment with intermittent fasting and exercise, the activity of ICDH was normalized when compared to the diabetic control group. This could be attributed to the antioxidant-enhancing or -mediating activity of intermittent fasting and exercise in reducing diabetes complications.

| CONCLUSION

In conclusion, our findings show for the first time that non-pharmacological therapeutic regimens such as intermittent fasting and exercise improve insulin sensitivity and glucose tolerance in STZ-induced type 2 diabetic rats by maintaining insulin signaling and glucose homeostasis, whereas starvation had stronger hypoglycemic effects, resulting in increased weight loss. The honey-treated rats showed more diabetes-related symptoms. Intermittent fasting and exercise boosted peripheral glucose absorption, decreased hepatic glucose production, regulated glucose metabolic enzymes, and raised the activity of liver glycolytic enzymes in diabetic rats. In diabetic rats, intermittent fasting and exercise also enhanced serum adipocytokine levels. These findings imply that adipokines modulate glycolytic/non-mitochondrial enzymes and glucose metabolic/mitochondrial dehydrogenases to mediate the antidiabetic effects of intermittent fasting and exercise.
2022-10-29T06:17:52.952Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "bc162cad54350b414e406172fb51403f29c371c0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Wiley", "pdf_hash": "df1d8101e460ba7cc6df1f4529cb602cf415c6be", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
94413095
pes2o/s2orc
v3-fos-license
Long-range three-body atom-diatom potential for doublet Li${}_3$

An accurate long-range {\em ab initio} potential energy surface has been calculated for the ground state ${}^2A'$ lithium trimer in the frozen diatom approximation using all-electron RCCSD(T). The {\em ab initio} energies are corrected for basis set superposition error and extrapolated to the complete basis limit. Molecular van der Waals dispersion coefficients and three-body dispersion damping terms for the atom-diatom dissociation limit are presented from a linear least squares fit and shown to be an essentially exact representation of the {\em ab initio} surface at large range.

I. INTRODUCTION

Progress in the field of ultracold molecules has been growing rapidly for the past decade, as many atomic and molecular research groups have turned towards the study of either production or dynamics of ultracold diatomic molecules. The success behind the study of ultracold molecular formation lies in the use of photoassociation and Feshbach resonances (see Jones et al. 1 and Köhler et al. 2 for recent reviews). Using a combination of photoassociation and STIRAP 3 (stimulated Raman adiabatic passage), the formation of vibrational ground state KRb 4,5 and Cs$_2$ 6 molecules has been reported. With this prospect of ground vibrational diatoms in mind, we focus our study on the interaction effects of a colliding lithium atom with a v = 0 singlet lithium diatom. Knowledge of both the long- and short-range 7 interaction potential can be used to calculate inelastic three-body rates 8 as well as to investigate Efimov collisions. 9 The assumption of a vibrational ground state diatom greatly simplifies the physics of the interaction; however, the complexity of calculating long-range interactions for even the v = 0 range of motion of the diatom is significant. Given this, we work within the frozen diatom approximation, which entails freezing the diatom at the calculated equilibrium bond length for the entire potential energy surface; this is analogous to vibrationally averaging the diatom over the course of the collision. The long-range interaction is not strongly affected by this approximation, due to the small linear changes in the diatomic polarizability as the diatom undergoes small oscillations: the polarizability averaged over the vibrational motion is then just the polarizability evaluated at $r_e$. The validity of the rigid rotor approximation for short atom-diatom collisional distances was found to be good for distances greater than 10 Å. We structure this work into three parts. First is the discussion of the ab initio calculations, where we discuss the methodology involved in choosing the appropriate basis set, which provides both optimal values for the spectroscopic constants and the best atomic static polarizability. Further improvement to the interaction energy is shown to come from accounting for basis set superposition error through a counterpoise correction 10 and from extrapolating the counterpoise corrected energies to the complete basis limit. Next we present a summary of the long-range van der Waals interaction for the tri-atomic dissociation limit. An account is given of the many-body terms arising from third- and fourth-order Rayleigh-Schrödinger perturbation theory that contribute to the three-atom $C_6$ and $C_8$ van der Waals coefficients. Finally we present the long-range analytic van der Waals atom-diatom interaction energy of Cvitaš et al. 11 and our fitted non-additive van der Waals coefficients.
II. AB INITIO CALCULATION

The long-range potential energy surface for the ${}^2A'$ Li$_3$ state has been calculated within a frozen diatom approximation for collisional angles near the C$_{2v}$ geometry, which corresponds to a lithium diatom in the X${}^1\Sigma^+_g$ state colliding with a single ${}^2$S state lithium atom. The long-range potential energy surface was calculated for atom-diatom distances ranging from 10 Å to 100 Å, a region we consider to be outside of any consideration of charge overlap yet still short of the very-long-range limit where the retarded potential starts contributing. The collisional angle sampling of θ = 60, 70, 80 and 90 degrees (where θ = 90° corresponds to C$_{2v}$ geometry in Jacobi R, r, θ coordinates) is both sufficient for an accurate fit and consistent with our previous work on the near-equilibrium geometry potential energy surface. 7 All electronic energy calculations in this work were done correlating all electrons, using spin-restricted coupled cluster theory with singles, doubles and perturbative triples 12,13 (RCCSD(T)) as implemented in the MOLPRO 2008.1 14 suite of ab initio programs. Cold scattering calculations require exceptionally accurate interaction potentials in order to properly predict cross-sections and scattering lengths. To provide this accuracy we apply a series of corrections to the RCCSD(T) energy which account for deficiencies within the basis set. Additionally, we correlate all electrons in the RCCSD(T) energy so as to account for the core-core and core-valence (CV) contributions. The inclusion of CV correlation energy has been shown to account for roughly 0.002 Å in multiple bonds and several hundred cm$^{-1}$ in atomization energies. 15 To properly correlate all electrons within an ab initio calculation necessitates the use of a CV-consistent basis set. 15,16 We have examined the use of the four- and five-zeta cc-pVnZ basis sets from Feller 17 and the CV-consistent CVnZ basis sets of Iron et al. 16 Table I compares the spectroscopic constants calculated with the above basis sets against the experimental values; it can be seen that the five-zeta basis sets provide a marked improvement in terms of both $r_e$ and $D_e$. The calculated polarizability is also used as a benchmark in addition to $r_e$ and $D_e$: the accuracy of the atomic static polarizability is a strong indicator of the accuracy of long-range molecular dispersion interactions. The first improvement to the trimer potential energy surface was to calculate the basis set superposition error (BSSE) through a counterpoise calculation. 10 The interaction energy of the trimer is then

$E_{\rm int} = E_{123} - E^{23}_{\rm atom} - E^{13}_{\rm atom} - E^{12}_{\rm atom}$,

where $E^{ij}_{\rm atom}$ is the energy of the resulting atom when atoms i and j within the trimer are replaced with dummy centers. Accounting for BSSE, the CVQZ basis provides a correction of 5.27 cm$^{-1}$ and the CV5Z basis a correction of 1.05 cm$^{-1}$. Across the total long-range surface the total BSSE correction varies by no more than 1%, finally converging to a constant value at interaction distances greater than 20 Å. This suggests that the majority of the BSSE corresponds to the diatom and not to the atom-diatom interaction itself. Still, this counterpoise correction does not fully account for all of the dissociation energy of the Li$_2$ diatom. We further improve upon the accuracy of the potential energy surface by extrapolating the counterpoise corrected energies to the complete basis set (CBS) limit.
We use the CBS limit extrapolation formulation of Helgaker et al. 18 This extrapolation scheme was applied to the CVQZ and CV5Z counterpoise corrected interaction energies for the final potential energy surface calculation. We calculate the static polarizability for the ground state lithium atom and singlet lithium diatom with the static field method 19,20 by calculating the RCCSD(T) energies in the presence of an electric dipole field (given here as E(F)). The static polarizability is given by the finite field gradient,

$\alpha = -\left.\frac{\partial^2 E(F)}{\partial F^2}\right|_{F=0}$,

and is reported in Table II. The dispersion energy between two monomers at long range (no charge overlap) can be expressed as 21

$E_{\rm disp} = -\frac{3}{2}\,\frac{V_1 V_2}{V_1 + V_2}\,\frac{\alpha_1 \alpha_2}{r^6}$,

where $\alpha_i$ is the static polarizability of the ith monomer and $V_i$ is a characteristic excitation energy of the molecule. Due to the important contribution of the long-range tail of the electron wave function to the molecular polarizability, the effects of adding diffuse functions can be significant. In the calculation of the polarizability discussed above, a set of even-tempered diffuse functions was added to the CVnZ basis sets and found to contribute little to the extrapolated polarizability. A further test of the effect of adding diffuse functions was performed by evaluating a CBS extrapolated single point energy calculation at 10 Å with the even-tempered diffuse functions discussed above. The results showed that in the CBS limit the difference between the standard and augmented CVnZ basis sets amounts to less than half a wavenumber. Because the long-range interaction depends explicitly upon the monomer static polarizability, it is clear from the atomic static polarizability reported in Table II that these quantities are well converged with respect to the CBS limit as discussed above. We expect that our calculated diatomic long-range interactions will be suitably accurate given the precision of the calculated spectroscopic values and static atomic polarizability.

III. LONG-RANGE VAN DER WAALS POTENTIAL

We now examine the trimer potential energy surface at the three-atom dissociation limit, which is expanded analytically in terms of the two-body dispersion interactions with the addition of a purely three-body interaction potential. The three-body contribution to the interaction energy is well known to be strong for Li$_3$, for both the doublet 7 and quartet 23 states. Thus it is important to accurately include such effects in any model of the long-range interaction. By using perturbation theory to examine this expansion it is possible to express the generalized dispersion coefficients in terms of known diatomic and triatomic constants. Our goal in this section is to overview the essential theory of three-body atomic interactions, which can then be specialized to the case of atom-diatom interactions within the frozen diatom approximation previously discussed. The tri-atomic dissociation interaction potential can be described in terms of the diatomic van der Waals interaction potential $V_d(r_{ij})$ and the non-additive many-body potential $V_3(\mathbf{r})$ as

$V(\mathbf{r}) = \sum_{i<j} V_d(r_{ij}) + V_3(\mathbf{r})$,

where $\mathbf{r} = (r_{12}, r_{13}, r_{23})$ are the three internuclear vectors. Long-range dispersion interaction potentials (excluding retardation and orbital overlap effects) between two S-state atoms are described using the multipole expansion

$V_d(r_{ij}) = -\frac{C_6}{r_{ij}^6} - \frac{C_8}{r_{ij}^8} - \cdots$,

where the $C_6$ and $C_8$ coefficients are respectively the dipole-dipole, quadrupole-quadrupole and dipole-octopole expansions 21 of the inter-atomic electrostatic Hamiltonian $V_{ij}$. To obtain the leading terms of the non-additive potential $V_3(\mathbf{r})$ in Eq. 5, the analogous expansion method used for the diatom van der Waals interaction can be used.
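A small numerical sketch of the two corrections just described follows; it assumes the standard two-point $1/n^3$ form of the Helgaker correlation-energy extrapolation and a three-point central difference for the field derivative, with placeholder energies rather than values from this work.

```python
def cbs_two_point(e_n, e_m, n=4, m=5):
    """Two-point 1/n^3 extrapolation, E(n) = E_CBS + A/n^3
    (the Helgaker-style form assumed here for CVQZ/CV5Z, n=4, m=5)."""
    return (m**3 * e_m - n**3 * e_n) / (m**3 - n**3)

def alpha_finite_field(energy, f=1e-3):
    """Static dipole polarizability alpha = -d^2E/dF^2 at F = 0,
    from a three-point central difference in the applied field f (a.u.)."""
    return -(energy(f) - 2.0 * energy(0.0) + energy(-f)) / f**2

# Consistency check on a model energy E(F) = E0 - (1/2) alpha F^2,
# with a placeholder alpha near the known Li atomic value (~164 a.u.).
alpha_true = 164.0
e_model = lambda F: -7.478 - 0.5 * alpha_true * F**2
assert abs(alpha_finite_field(e_model) - alpha_true) < 1e-6

print(cbs_two_point(e_n=-7.4770, e_m=-7.4774))  # illustrative energies only
```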
Here, Rayleigh-Schrödinger (RS) perturbation theory is applied to the total inter-atomic interaction Hamiltonian

$H = \sum_x H_x + \sum_{i<j} V_{ij}$,

where $H_x$ is the atomic Hamiltonian for the x'th atom; the interaction is then expanded in multipole moments. The desired contributions to the non-additive potential $V_3(\mathbf{r})$ arise from the first many-body terms in third-order RS perturbation theory. The bipolar expansion of the many-body terms from third-order RS perturbation theory leads to a summation of purely geometric factors with interaction constants dependent on the atomic species 24,25,

$V_3(\mathbf{r}) = \sum_{l_1 l_2 l_3} Z_{l_1 l_2 l_3}\, W_{l_1 l_2 l_3}(\mathbf{r})$.

The interaction constant can be expressed with the Casimir-Polder integral of the dynamic $2^{l_i}$-pole polarizabilities 26 over complex frequencies,

$Z_{l_1 l_2 l_3} = \frac{1}{\pi} \int_0^\infty \alpha^{(l_1)}(i\omega)\,\alpha^{(l_2)}(i\omega)\,\alpha^{(l_3)}(i\omega)\, d\omega$,

with the dynamic polarizability defined as 27

$\alpha^{(l)}(i\omega) = \sum_n \frac{f_n^{(l)}}{E_n^2 + \omega^2}$,

and the $2^l$-pole oscillator strength $f_n^{(l)}$ defined such that the sum over i goes over all the electrons in the given atom and $E_n$ is again the excitation energy for state n. The geometric factors $W_{l_1 l_2 l_3}(\mathbf{r})$ have been reported by a number of authors 11,24-26 and will not be reproduced here. The first term in the third-order expansion can be identified as the well-known Axilrod-Teller-Muto triple-dipole 28 term. Additionally, it has been identified that the quadruple-dipole term $Z_{1111}$, from fourth-order perturbation theory, contributes to the van der Waals dispersion coefficients of consideration here. This term has no exact expression in terms of the dynamic polarizabilities 11, but it can be approximated using a Drude oscillator model in the case of three S-state atoms. Using the corresponding Drude oscillator approximation for the $C_6$ van der Waals dispersion coefficient, the $Z_{1111}$ term can be approximated as 29

$Z_{1111} = \frac{10}{3}\,\frac{Z_{111}^2}{C_6}$.

Using the values $C_6 = 1393.39$ and $Z_{111} = v_{abc}/3 = 56865$ from Yan et al. 27, this provides the approximate value $Z_{1111} = 7735638$ a.u. for the quadruple-dipole term. In the other asymptotic limit of the trimer, where the system dissociates to a diatom and a separated atom, the van der Waals type interaction can again be expressed as a series of multipole terms of the diatomic and atomic polarizabilities. Using the Jacobi coordinates R, r and θ to describe the atom-diatom system, where r is the diatomic internuclear distance, R is the distance from the diatomic center of mass to the colliding atom, and θ is the angle between r and R, the interaction potential in the asymptotic limit of R ≫ r, in the absence of damping and exchange, is 21

$V(R, r, \theta) = -D_e - \frac{C_6(r,\theta)}{R^6} - \frac{C_8(r,\theta)}{R^8} - \cdots$,

where $D_e$ is the interaction energy of the diatom, and the dispersion coefficients are defined in terms of Legendre polynomials as

$C_6(r, \theta) = C^0_6(r) + C^2_6(r) P_2(\cos\theta)$,
$C_8(r, \theta) = C^0_8(r) + C^2_8(r) P_2(\cos\theta) + C^4_8(r) P_4(\cos\theta)$.

To obtain the analytic form for the atom-diatom van der Waals coefficients in terms of the tri-atomic terms, we transform the internuclear $r_{ij}$ coordinates to Jacobi coordinates in the limit of R ≫ r. The contributions to $C_6(r,\theta)$ and $C_8(r,\theta)$ can be found by transforming the $W_{l_1 l_2 l_3}(\mathbf{r})$ and $r_{ij}^{-n}$ coefficients and then expanding in a power series with respect to r/R. This expansion has been completed by Cvitaš et al. 11 for the contributions to $C_6(r,\theta)$ and $C_8(r,\theta)$.
The resulting terms included in these van der Waals coefficients are given explicitly by Cvitaš et al. 11 In the atom-diatom dissociation limit, the effects of charge overlap damping in the $r_{ij}$ expansion must also be accounted for. In this work we chose to follow Rérat and Bussery-Honvault's 30 implementation of the Tang and Toennies damping functions, where each anisotropic contribution in terms of r has the following mapping:

$\frac{1}{r^n} \;\rightarrow\; \frac{f_n(r)}{r^n}$, with $f_n(r) = 1 - e^{-br}\sum_{k=0}^{n} \frac{(br)^k}{k!}$.

Working within the frozen diatom approximation, the charge overlap damping can be simply modeled as a constant fitting parameter, $F_n$, evaluated at the diatomic equilibrium bond length. Inserting the fitting parameter into Eqs. 19-23 provides Eqs. 26-30. As evaluated in Eqs. 26-30, the fitting parameters $F_n$ no longer directly correspond to the Tang and Toennies damping function, as has been noted. 11 We have performed a linear least squares fit of Eq. 12, with the definitions given by Eqs. 13-14, to obtain the van der Waals coefficients given in Table III. Fitting to these values we obtain the same dispersion coefficients as above due to the nature of the least squares fit, with the values for the damping parameters given in Table III. Plotted in Fig. 1 is a comparison of the ab initio surface and the fitted van der Waals potential. As can be seen the fit is very accurate, which is confirmed by the RMS surface fitting error of 10$^{-3}$ cm$^{-1}$. This analytical long-range expansion can be used in conjunction with our previous work 7 on the ground state surface of ${}^2A'$ Li$_3$ to evaluate scattering properties of the Li + Li$_2$ rigid rotor system. The results of this we leave to future work.

IV. CONCLUSIONS

We have calculated ab initio an accurate long-range ${}^2A'$ Li$_3$ surface for the dissociation to Li(${}^2$S) + Li$_2$(X${}^1\Sigma^+_g$). The lithium diatom was taken to be a rigid rotor with the bond length constrained to $r = r_e$, the calculated equilibrium bond length in Table I. The surface was calculated at the RCCSD(T) level of theory, correlating all electrons. To accurately describe the CV interaction, the CVQZ and CV5Z basis sets of Iron et al. 16 were used; the final counterpoise corrected interaction energies were then extrapolated to the CBS limit. At this level of theory, the atomic and diatomic dipole polarizabilities are shown to be in good agreement with published experimental and theoretical results, which is an important component of long-range interactions. Using the expansion for the atom-diatom many-body van der Waals potential (Eqs. 12 and 13) we calculate the non-additive interaction coefficients by fitting to the calculated ab initio surface. The resulting repulsive contribution of the three-body interaction to the total interaction energy is found to be up to 33% of the total energy. The fitted van der Waals coefficients were found to be consistent with the existing literature on the related ${}^4A'$ state. 11
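To make the fitting step concrete: once the Legendre angular factors and inverse powers of R are tabulated, Eq. 12 is linear in the dispersion coefficients, so the fit reduces to ordinary linear least squares. A schematic version follows; the grid, noise level, and coefficient values are synthetic placeholders, not the Table III results.

```python
import numpy as np

# Synthetic grid mimicking the sampling: R on a long-range mesh,
# theta at 60, 70, 80, 90 degrees; all values below are placeholders.
R = np.repeat(np.linspace(20.0, 190.0, 40), 4)
th = np.tile(np.deg2rad([60.0, 70.0, 80.0, 90.0]), 40)
c = np.cos(th)
P2 = 0.5 * (3.0 * c**2 - 1.0)
P4 = (35.0 * c**4 - 30.0 * c**2 + 3.0) / 8.0

# Design matrix for V = -[C6_0 + C6_2 P2]/R^6 - [C8_0 + C8_2 P2 + C8_4 P4]/R^8.
A = np.column_stack([-1/R**6, -P2/R**6, -1/R**8, -P2/R**8, -P4/R**8])
true = np.array([3.0e3, 5.0e2, 2.0e5, 4.0e4, 1.0e4])  # made-up coefficients
V = A @ true + 1e-12 * np.random.default_rng(1).normal(size=R.size)

coef, *_ = np.linalg.lstsq(A, V, rcond=None)
print(coef)  # recovers C6_0, C6_2, C8_0, C8_2, C8_4 to the noise level
```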
2012-01-06T07:02:37.000Z
2011-05-05T00:00:00.000
{ "year": 2012, "sha1": "81d54baea8b47118db06dbd1ead4774f3178350e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1105.1090", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "81d54baea8b47118db06dbd1ead4774f3178350e", "s2fieldsofstudy": [ "Chemistry", "Physics" ], "extfieldsofstudy": [ "Chemistry", "Physics" ] }
234660561
pes2o/s2orc
v3-fos-license
Effect of Pulse Consumption on Obesity and the Metagenome

Grain legumes, which are commonly referred to as pulses, are staple foods in many parts of the world, but are infrequently consumed in most economically developed countries where the obesity pandemic is prominent. However, even in low pulse consuming countries such as the United States, there are sub-groups of individuals who consume large amounts of pulses. Systematic reviews of population studies indicate that pulse consumers have a lower risk for developing obesity. To determine whether these population-based findings could be modeled in preclinical systems in which such findings can be deconstructed, we used rat and mouse models of diet-induced obesity and reported that lipid accumulation was inhibited. In this study, we examined the relationship between inhibition of fat accumulation and changes in the gut-associated microbiome in male C57BL/6 mice fed either a high fat diet with casein as the protein source or that diet formulation in which one of four pulses (chickpea, common bean, dry pea, or lentil) was substituted to provide 70% of dietary protein, with the remainder provided by casein. The seeds of each pulse were soaked, cooked, and then freeze-dried and milled; the resulting powder was used for diet formulation. Mice were fed ad libitum over the 17-week duration of the feeding trial. Cecal content was obtained at necropsy and immediately snap frozen in liquid nitrogen. Extracted genomic DNA was processed for 16S rRNA sequencing on an Illumina system. Significant differences were observed between each pulse and the high fat control diet in microbial phylogenetic diversity (p < 0.001) and accumulation of lipid in adipose depots (p < 0.01). Differences among pulses were also observed in both metrics. Microbiome alpha and beta diversity metrics, differences in abundance for each detected taxon among treatment groups, and their relationships to changes in lipid accumulation in adipose storage depots are reported.
Introduction

The occurrence of obesity, a global pandemic, increases the risk for a number of aging-associated chronic diseases including non-alcoholic fatty liver disease, type-2 diabetes, cardiovascular disease, and certain types of cancer [1]. In these disease states, insulin resistance, chronic inflammation, and oxidation-mediated cell damage are known to contribute to disease pathogenesis [2][3][4][5][6]. Deregulation of interconnected cell signaling pathways among tissues underlies the metabolic dysfunction that is observed. While the importance of environmental effects on gene expression is well established, there has been a lack of compelling evidence that an individual's diet exerts effects on the signaling networks that impact disease outcomes including mortality. This has led to increased reliance on prescribed drugs, while limiting the attention given to food-based interventions in chronic disease prevention and control [7]. However, emerging evidence now supports significant impacts of diet that are mediated by its interaction with the gut microbiome and that result in reduced risk for chronic diseases [8][9][10][11][12]. As an example of this, recent meta-analyses indicate that consumption of grain legumes, i.e., common bean, chickpea, dry pea, and lentil, also referred to as pulses [13], is associated with improved weight management relative to populations in which the consumption of pulses is low [14,15]. Given the complexity of obesity, we recently sought to determine whether the impact of pulses on body weight observed in prospective clinical studies could be reproduced under the controlled conditions that are possible with the use of preclinical rodent models [16]. In polygenic models of obesity in both rats and mice, pulse consumption was shown to be antiobesogenic [16][17][18]. It is widely recognized that food components play a major role in establishing and maintaining the gut microbiome. Of the wide array of bioactive constituents in foods, dietary fiber is a key determinant. While there is no formal dietary requirement for fiber, the level currently recommended (14 g/1000 kcal) is achieved by less than 25% of the US population, and the average shortfall exceeds 50% [19]. The problem can be traced to low food quality relative to dietary fiber content; particularly, a lack of grain legumes, i.e., pulse crops, in the diet. Our recent publications document that the most commonly consumed pulses, dry bean (Phaseolus vulgaris, L.), chickpea (Cicer arietinum L.), dry pea (Pisum sativum L.), and lentil (Lens culinaris L.), have 2-3 times more fiber per 100 kcal edible portion than other commonly promoted dietary fiber sources, e.g., cereal grains [20,21]. Whether or not these pulses have equivalent effects on the composition and function of the gut microbiome is not known. The objectives of the work reported herein were: (1) to compare the effects among pulses, i.e., chickpea, common bean, dry pea, and lentil, on the metagenome within the cecum, and (2) to investigate relationships between the effects of these pulses on the microbiome relative to the accumulation of lipid in adipose depots.
Experimental Animals and Design

Details of the feeding study have previously been reported [18]. Briefly, NCI C57BL/6NCrl male mice (21-28 days of age) were obtained from Charles River Laboratories NCI (Frederick, MD, USA). Upon arrival, the mice were fed a purified high fat diet. Mice were housed in solid-bottomed polycarbonate rodent cages and maintained on a 12 h light/dark cycle at 27.5 ± 2 °C with 30% relative humidity. All mice had ad libitum access to diet and distilled water. All animal studies were performed in accordance with the Colorado State University Institutional Animal Care and Use Committee (protocol 18-7746A). At 5 weeks of age mice were randomized to their treatment groups. Mice were either continued on the high fat (HF) formulation (control diet) or were fed the HF diet formulation to which common bean, chickpea, dry pea, or lentil was added. The formulation of the experimental diets and the rationale for the concentration of pulses have been published. The experimental duration was 17 weeks. At necropsy, inguinal subcutaneous and abdominal visceral adipose tissue were harvested and weighed. Content of the cecum was harvested and snap frozen in liquid nitrogen until it was processed for genomic DNA extraction.

Statistical Analyses

Data were evaluated by ANOVA, PERMANOVA, regression analysis, or multivariate analysis techniques. The Benjamini-Hochberg method was used to adjust p-values to control the false discovery rate. Data analyses were conducted using Systat, version 13.0 (Systat Software, Inc., San Jose, CA, USA), CLC Genomics Workbench version 20.0.4 (Qiagen Bioinformatics, Redwood City, CA, USA), RStudio version 1.1.456 (RStudio, Boston, MA, USA) running R version 3.6.3 (The R Foundation for Statistical Computing, Vienna, Austria), and SIMCA v15 software (Sartorius Stedim Biotech, Umea, Sweden).

Effects of Pulses on Adipose Depot Mass

Previously, we have reported that feeding diets containing 70% of dietary protein from chickpea, common bean, dry pea or lentil reduced visceral and subcutaneous fat pad mass of male C57BL/6 mice [18]. In the analysis presented in Figure 1, all individual fat pad masses for an animal across treatment groups were subjected to unsupervised principal components analysis. The relative clustering of pulses in terms of their adipose tissue mass in comparison to the positive control (high fat obesogenic diet) and negative control (low fat non-obesogenic diet) is shown in the three-dimensional scores plot. Given that each principal component assigned to an animal is an unbiased measure of its adipose mass, each animal's first principal component value (PC1) was subjected to ANOVA. The overall p-value was highly significant (p < 0.0009) and post hoc analysis confirmed that all pulses were significantly different from either control, but not different from one another.
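A minimal sketch of this PCA-then-ANOVA step is shown below; the group labels, group size, and effect sizes are hypothetical placeholders, not data from this study.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
# Hypothetical fat-pad masses (rows = mice, columns = individual depots);
# six diet groups with n = 8 mice each as a placeholder design.
labels = np.repeat(["LF", "HF", "bean", "chickpea", "pea", "lentil"], 8)
shift = {"LF": 0.0, "HF": 2.0, "bean": 0.5, "chickpea": 0.5,
         "pea": 0.5, "lentil": 0.5}
X = np.array([rng.normal(1.0 + shift[g], 0.2, size=4) for g in labels])

# PC1 serves as an unbiased composite measure of each animal's adipose mass.
pc1 = PCA(n_components=1).fit_transform(X).ravel()

# One-way ANOVA of PC1 across diet groups.
f, p = f_oneway(*(pc1[labels == g] for g in np.unique(labels)))
print(f"diet effect on PC1: F = {f:.1f}, p = {p:.1e}")
```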
Effect of Pulses on Cecal Microbiome

It is recognized that the composition of the microbiome varies throughout the length of the intestinal tract, particularly as it relates to conditions in the gut lumen relative to the growth of facultative and obligate anaerobes, which are important groupings of commensal microorganisms. Therefore, we decided to evaluate the impact of pulse consumption on the microbial composition of the luminal content of the cecum. In the mouse, the cecum interfaces with the ileum, i.e., the distal segment of the small intestine that plays a central role in metabolic signaling, and with the ascending colon, which it abuts and which plays key roles in water reabsorption and short chain fatty acid metabolism. The focus on the cecum is in marked contrast to other reports of the impact of pulses on the fecal microbiome [28][29][30][31], which are translationally important, but may not capture information relevant to metabolic effects of the microbiome in regions of the gut that are anaerobic. Microbial abundance data (level of genus) from the 16S rRNA analyses of DNA extracted from cecal luminal content were subjected to unsupervised principal components analysis in Simca, in the same way the adipose depot mass data were evaluated, to permit a comparison of the clustering of data by diet group. The three-dimensional scores plot is shown in Figure 2.

Figure 2. Principal components analysis of the effect of feeding pulse diets on the abundance of microorganisms (level of genus).

Using the same strategy as described for adipose depot mass, each animal's first principal component for microbial abundance was evaluated by ANOVA. The model fit had an R² of 0.76, with an overall effect of diet, p < 0.001. Post hoc pairwise comparisons among treatment groups revealed that the unbiased principal component for the positive control (high fat, obesogenic) was significantly different from all other diet groups, but differences among pulses, and between each pulse and the negative control (low fat, non-obesogenic), were not significant. Since there were overall effects of diet group on the principal components for adipose mass and microbial abundance (genus level), the relationship between these variables was examined via regression analyses (Figure 3). Overall, these analyses support a relationship between cecal microbial composition and adipose tissue mass, using unbiased measures of both variables. This opens the door to using the loading values from the principal components analyses to identify microorganisms (genus level) that contribute to this relationship.

Figure 3. Regression analyses evaluating the relationship among principal components for adipose depot mass and microbial abundance (cecal luminal content). All regressions were statistically significant (p < 0.01), except for the regression analysis shown in panel D; lower confidence limit: LCL; upper confidence limit: UCL; lower prediction limit: LPL; upper prediction limit: UPL.

Another focus of our comparative analyses of the effects of pulses on the microbiome in the luminal content of the cecum employed metrics commonly used in the evaluation of metagenomic data. Using the taxonomic profiling algorithm in the CLC Genomics Workbench, the impact of pulse consumption on species abundance is shown in Figure 4.
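The regressions in Figure 3 relate one unbiased composite (PC1) to another. A schematic ordinary least squares version follows, with synthetic PC1 scores standing in for the real ones, extracting the confidence (LCL/UCL) and prediction (LPL/UPL) bands plotted in such figures.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
# Hypothetical PC1 scores: adipose-mass PC1 regressed on microbiome PC1.
pc1_microbe = rng.normal(size=40)
pc1_adipose = 0.7 * pc1_microbe + rng.normal(scale=0.5, size=40)

X = sm.add_constant(pc1_microbe)
fit = sm.OLS(pc1_adipose, X).fit()
pred = fit.get_prediction(X).summary_frame(alpha=0.05)

# mean_ci_lower/upper correspond to LCL/UCL; obs_ci_lower/upper to LPL/UPL.
print(fit.rsquared, fit.f_pvalue)
print(pred[["mean_ci_lower", "mean_ci_upper",
            "obs_ci_lower", "obs_ci_upper"]].head())
```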
Major shifts in percent abundance of microbial species were observed when pulses were compared to either the negative or positive dietary control group. Differences among pulses were also apparent. Of particular note, the changes in Akkermansia muciniphila that were reported by us and determined via qPCR using specific primers [17,18] are consistent with the differences observed in Figure 4. The observed pulse-induced increase in A. muciniphila is consistent with health promoting effects of pulses. Gut levels of A. muciniphila have been reported to be inversely associated with obesity, diabetes, and inflammation [32][33][34][35][36][37]. At the level of phylum, pulses, collectively, induced an increase in bacteria in the phylum Bacteroidetes vs. Firmicutes (p < 0.001 for all). Given the controversial nature of the literature about whether an increase in the Bacteroidetes to Firmicutes ratio is consistent with health benefits [38][39][40][41][42][43], the importance of this observation requires further investigation. We also evaluated the effect of pulse consumption on phylogenetic diversity using the CLC Taxonomic Profiler (Figure 5). Unlike other metrics that have been presented thus far, statistically significant differences in phylogenetic diversity were observed. The rank order from lowest to highest was: lentil ≤ dry pea < common bean < chickpea. The lowest phylogenetic diversity was observed for the positive control (high fat, obesogenic). While the commonly held view is that higher phylogenetic diversity is prognostic for a healthy gut, there is no consensus on this point [9]. However, a phylogenetically diverse microbiota gives rise to an immense metabolic potential. The microbiome consists of the genes that the cells constituting the microbiota harbor. A human microbiome collectively contains on the order of 3 million non-redundant genes, whereas the human genome is comprised of approximately 20,000 genes [44]. Unsurprisingly, the gut microbiota executes essential functions that the body itself is incapable of performing. These functions include promotion of gut maturation, education of the immune system, protection against viral and bacterial pathogens, and influence on brain activities and bodily functions. The effect on beta-diversity was also assessed (Figure 6). While the information in Figure 6 is similar to that presented in Figure 2, the statistical approach (Bray-Curtis) used to generate this figure is commonly used in the evaluation of metagenomic data. The hierarchical cluster analysis of these data showed that the positive control (high fat, obesogenic diet) was on a branch distinct from all other diet groups. The metagenomic data were also evaluated for predictions of function using PICRUSt. That output was subjected to unsupervised principal components analysis in Simca using the same strategy described in the presentation of Figures 1 and 2. The scores plot from that analysis is Figure 7. The ANOVA of the principal components for each animal from the PCA supported the assessment that there was an overall effect of diet group on predicted functional activity categorized via KEGG-defined metabolic pathways. The post hoc analysis indicated that pulses differed from the control groups and from one another. This supports the hypothesis that all pulses are not created equal in terms of the microbial populations whose cecal colonization they support; a sketch of the beta-diversity computation follows the figure captions below.

Figure 1. Principal components analysis of the effect of feeding pulse diets on adipose tissue mass.
Figure 4. Effect of pulse consumption on microbial abundance (species level) using the CLC Genomics Taxonomic Profiler.
Figure 5. Effect of pulse consumption on phylogenetic diversity.
Figure 7. Effect of pulse consumption on the predicted function of the microbiome.
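As a reproducibility aid for the beta-diversity step described above, here is a minimal Bray-Curtis dissimilarity plus average-linkage hierarchical clustering sketch, using a synthetic genus-level count table in place of the study's 16S rRNA data.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(4)
# Hypothetical count table (rows = cecal samples, columns = taxa),
# standing in for the genus-level 16S rRNA abundance data.
counts = rng.poisson(lam=20, size=(12, 50)).astype(float)
rel = counts / counts.sum(axis=1, keepdims=True)  # relative abundance

# Bray-Curtis dissimilarity between all sample pairs.
bc = pdist(rel, metric="braycurtis")
print(squareform(bc).round(2))

# Average-linkage hierarchical clustering of the dissimilarities, analogous
# to the dendrogram that isolated the high fat control on its own branch.
tree = linkage(bc, method="average")
dendrogram(tree, no_plot=True)
```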
2020-12-24T09:13:12.292Z
2020-10-30T00:00:00.000
{ "year": 2020, "sha1": "122ea8a22b31309c6868224a0619af43fe3ac9f6", "oa_license": "CCBY", "oa_url": "https://sciforum.net/manuscripts/7009/manuscript.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "4ce059d10ab4e072ffa0a83441e6c4edc680f3d8", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
6292549
pes2o/s2orc
v3-fos-license
A Cross-Site Intervention in Chinese Rural Migrants Enhances HIV/AIDS Knowledge, Attitude and Behavior

Background: With the influx of rural migrants into urban areas, the spread of HIV has increased significantly in Shaanxi Province (China). Migrant workers are at high risk of HIV infection due to social conditions and hardships (isolation, separation, marginalization, barriers to services, etc.). Objective: We explored the efficacy of a HIV/AIDS prevention and control program for rural migrants in Shaanxi Province, administered at both rural and urban sites. Methods: Guidance concerning HIV/AIDS prevention was given to the experimental group (266 migrants) for 1 year by the center of disease control, community health agencies and the family planning department. The intervention was conducted according to the HIV/AIDS Prevention Management Manual for Rural Migrants. A control group of migrants received only the general population intervention. The impact of the intervention was evaluated by administering HIV/AIDS knowledge, attitudes and sexual behavior (KAB) questionnaires after 6 and 12 months. Results: In the experimental group, 6 months of intervention achieved improvements in HIV/AIDS-related knowledge. After 12 months, HIV/AIDS-related knowledge reached near-maximal scores. Attitude and most behavior scores were significantly improved. Moreover, the experimental group showed significant differences in HIV/AIDS knowledge, attitude and most behaviors compared with the control group. Conclusions: Systematic long-term cross-site HIV/AIDS prevention in both rural and urban areas is a highly effective method to improve HIV/AIDS KAB among rural migrants.

Introduction

The prevalence of human immunodeficiency virus (HIV) infection in China is increasing each year, and the number of infected individuals reached 370,393 by the end of October 2010 [1]. Sexual intercourse is the main channel of HIV transmission [1]. Indeed, in China, 63.9% of HIV patients were infected through sexual intercourse, including 46.5% through heterosexual intercourse and 17.4% through homosexual intercourse [2]. In China, about 80% of HIV-infected patients are from rural areas [3]. In Shaanxi, an economically important province in Western China, the rate of HIV infection has been rapidly rising. In 2010 alone, 462 new cases were identified, bringing the total prevalence in the province to 2,131 cases. Among these, 80% are between 20 and 49 years old, and most of them are from the lower social classes, i.e., farmers, migrant workers and unemployed citizens [4]. With the massive influx of rural population seeking work and business opportunities in urban areas, the numbers of rural migrants in large cities have rapidly increased. In 2010, the total number of migrants in China reached 221 million, of which about 72% were rural migrants. It is estimated that more than 3-5 million people will migrate from the countryside to cities and towns in the next 30 years in China [5]. The most recent statistics report that 1,851,017 people migrated to Shaanxi Province in 2009 [6]. These rural migrants have specific demographic and socioeconomic characteristics, such as lower educational levels and residing a long distance from their hometown. In addition, these migrants suffer from a number of social conditions and hardships (isolation, separation from family, marginalization, barriers to services, etc.) [7][8][9][10]. These conditions often lead to premarital or extramarital sex occurring frequently in this population [10][11][12].
Therefore, rural migrants might be a population at high risk of HIV infection [13][14][15], but there is a lack of international studies assessing this subject. To control the rate of HIV/AIDS infection in the general population, effective behavioral intervention programs are provided to migrants [16]. Among these, community-based models are widely used [17][18][19], and their efficacy has been confirmed in preliminary investigations in China [20][21][22]. Community-based models are flexible and inexpensive, making them particularly useful for rural migrants who work long shifts for low wages. However, the coordination of Community Health Agency (CHA) provision of HIV/AIDS prevention has not yet been established in northwestern China. Currently, HIV/AIDS prevention efforts are carried out by a limited number of people from the Center of Disease Control (CDC), which conducts short-term, small-scale and superficial activities [23]. In addition, the seasonal migration of rural migrants significantly affects the impact of HIV/AIDS prevention. It is necessary to construct a management network of HIV/AIDS prevention for migrants, which unfortunately is still missing. However, a family planning network for migrants has already been established, and it includes HIV/AIDS prevention efforts [15]. The objective of the present study was to measure the efficacy of an HIV/AIDS prevention and control program for rural migrants in the Shaanxi Province. Participants were divided into an experimental (EG) and a control group (CG). We analyzed their HIV/AIDS-related knowledge, attitude and behavior (KAB) before, during and after the intervention at both rural and urban sites. We hypothesized that our coordinated intervention at their home village and place of work would greatly improve participants' HIV/AIDS-related KAB. Results of the present study might also be applicable to migrant populations elsewhere in the world. Study Design The design of our intervention is shown in Figure 1. A quasi-experimental study was conducted between March 2009 and April 2010. In the study, six villages were selected, based on a pre-study assessment of KAB on HIV/AIDS. A year-long multicenter prevention intervention was conducted at Xi'an and the home villages of those participants assigned to the EG. The CG received no intervention, other than the general intervention provided to the general population in China. The effects were evaluated after 6 and 12 months of intervention. The KAB scores of the EG and CG were assessed and compared (Figure 1). Study Site We chose one of the poorest counties in China: Lantian County in Shaanxi Province [24]. Most of the young adults from Lantian County work in Xi'an, the capital city of the Shaanxi Province [21]. They regularly return home for long holidays or for the planting and harvesting seasons [21]. Participants Inclusion criteria were: (1) 18-49 years old; (2) household registration in rural Lantian; (3) working in a fixed location in downtown Xi'an for at least three months; (4) normal development and language; and (5) willing to participate in this study. Rural migrants with similar geographical or blood relationships tend to work together. Therefore, 293 migrants in three villages of Lantian County (Mandao, Mengyan and Guizhang) who work in five fixed communities in Xi'an were assigned to the experimental group (EG).
The remaining 300 migrants, from other villages of similar background (Songjiamiao, Songjia and Qingyangzhuang, all from Lantian County), were assigned to the control group (CG). The protocol was reviewed and approved by the Human Research Ethics Committee of the Xi'an Jiaotong University College of Medicine, and all participants provided written informed consent. Interventions The education was tailored to each individual on the basis of an assessment of his or her knowledge about HIV and capacity to absorb that knowledge, and the professional intervention was implemented by the entire intervention group for each individual. The intervention was carried out at least once a month to ensure that the knowledge and methods were taught step by step. Each intervention was based on an examination of the contents of the previous intervention. Content was not repeated if it had been mastered, misunderstood content was corrected, and new content was taught. The intervention team included CDC staff, family planning workers, community health workers and assistants, each with specific tasks. The CDC staff were mainly there to train and guide the other members of the intervention team, to provide HIV/AIDS-related knowledge and skills, and to perform HIV testing. In China, family planning workers are in charge of the birth-planning system; their tasks were to perform physical examinations and to assist the community health workers in publicizing HIV/AIDS-related knowledge and teaching condom use methods. Community health workers were the implementers of the HIV/AIDS-related knowledge education, and provided psychological counseling, basic physical examinations, education on communication with sexual partners, and condom use teaching. The other team members were assistants, in charge of following up the migrant population, evaluating the intervention's effects, and assisting the community health workers in carrying out health education. Framework of Intervention The 12-month intervention program included four aspects (Figure 2). (1) The family planning network of rural migrants provided an information-exchange platform. Prevention education was conducted by coordinating services and education between the city (migration destination) and the rural area (migration source); (2) A collaborative intervention group was established between the cities and villages, including staff from the CDC, FPD and CHA. Community-based intervention was also implemented; (3) In order to ensure smooth communication and effective implementation, five professionals (co-instructors) were sent to the five communities in Xi'an. These co-instructors were responsible for objective tracking, health education and intervention evaluation. In addition, when the migrants returned home during the planting and harvesting seasons, three of the five professionals went to the three villages to help with the work; (4) To avoid duplication, the "HIV/AIDS Prevention Management Manual for Rural Migrants" (PMMRM) was drafted and distributed to each migrant to maintain a standardized and systematic intervention [21]. Members of the intervention team conducted dynamic and systematic management in accordance with the PMMRM. HIV/AIDS Prevention Management Manual for Rural Migrants The PMMRM is divided into three parts. (1) Basic information, including the name, age, home address and telephone numbers.
The contact information for village health clinics and urban community health centers is also included; (2) The intervention procedure, divided into assessment, survey, implementation, and evaluation. The assessment framework is guided by the KAB theory, covering KAB and its influencing factors, physical assessment (focusing on the reproductive system) and laboratory examination results. The assessment questions also cover knowledge, attitude, behavior and physical characteristics. Signatures from investigators and surveyed subjects were required to confirm the time, location, content, methods, and names of investigators and migrants, in order to supervise the intervention group; (3) The appendix, which contains the basic skills for preventing HIV/AIDS, presented in text and pictures. Intervention Methods The project meeting was held in the village during the Spring Festival 2009, when migrants returned home. The participants included co-instructors, rural migrants and their family members. During the meeting, the research purpose, significance and methods were introduced. The participants provided written informed consent, and the PMMRM was distributed. Intervening staff recorded the date at which migrants left for Xi'an, their employer's address, and their contact information. Before the migrants left, the staff of the migration source area reminded them to take the PMMRM, and informed the intervening staff of the migration destination area. Within 1 week of the migrants arriving in Xi'an, the city's intervening staff contacted them. Subjects in the EG met the intervening staff once a month. Participants in the CG received no specific HIV/AIDS education, except what they may have learned from general education provided to the general population on billboards, pamphlets or other promotional material. Measures The effects of the intervention were measured by the questionnaire of KAB on HIV/AIDS for migrants [25][26][27]. Based on the "HIV/AIDS knowledge, attitude scale" of the World Health Organization (WHO) and an extensive literature review, the questionnaire was revised to account for the migrants' low education level and the AIDS-related behavioral characteristic sheet. Participants completed all questionnaires by themselves; if they were unable to do so, an investigator explained all questions and filled in the questionnaires based on the participant's responses. Questionnaires The questionnaires included demographic data (gender, age, education, and marital status), and statements about HIV/AIDS in four categories (25 items: 2 about basic knowledge, 8 about transmission routes, 9 about transmission misconceptions, and 6 about prevention and treatments). All items had three possible responses: "True", "Don't know" and "False". Positive statements were scored as "True = 2, Don't know = 1 and False = 0", and negative statements were scored as "True = 0, Don't know = 1 and False = 2". The total score ranges from 0 to 50; the higher the score, the better the level of HIV/AIDS knowledge. Attitude: The HIV/AIDS attitude questionnaire consists of 3 dimensions (attitude to AIDS: 7 items; attitude to related behavior: 6 items; attitude to infected patients: 13 items) and 26 questions with a 4-point scale ranging from "agree" to "disagree". There are 11 positive items scored as "agree = 3, undecided = 2, indifferent = 1, disagree = 0" and 15 negative items that are reverse-scored. The total possible score ranges from 0 to 78; a higher score indicates a more positive attitude.
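To make the scoring scheme concrete, the following minimal Python sketch computes a knowledge score under the rules described above. It is illustrative only: the item identifiers and responses are hypothetical stand-ins, not the study's instrument; only the scoring logic follows the text.

# Minimal sketch of the knowledge-questionnaire scoring rules described above.
# Item identities and answers are invented; only the scoring logic follows the text.

POSITIVE = {"True": 2, "Don't know": 1, "False": 0}   # positively worded statements
NEGATIVE = {"True": 0, "Don't know": 1, "False": 2}   # negatively worded statements (reverse-scored)

def knowledge_score(responses, negative_items):
    """responses: dict item_id -> answer; negative_items: set of reverse-scored item ids."""
    total = 0
    for item, answer in responses.items():
        table = NEGATIVE if item in negative_items else POSITIVE
        total += table[answer]
    return total  # 0 to 2 * number of items (0-50 for the 25-item scale)

# Example with three hypothetical items, of which "q3" is negatively worded:
print(knowledge_score({"q1": "True", "q2": "Don't know", "q3": "False"}, {"q3"}))  # -> 5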
Sexual behavior: the behavior questionnaire consists of 16 items, of which 8 relate to sexual behavior, 4 to selling blood, and 4 to drug abuse. Reliability and Validity of the Questionnaire The English version of the questionnaire was back-translated and discussed repeatedly by linguists. Before use, the questionnaire was reviewed three times by three experts, who specialized in epidemiology, communicable diseases and nursing, respectively, to ensure its format and content validity. Furthermore, 50 subjects were pre-surveyed. The Cronbach's alpha coefficient was 0.79 for the knowledge questionnaire, 0.78 for the attitude questionnaire, and 0.81 for the whole questionnaire. Power and Statistical Analysis In the present study, sample size determination was based on the rate of change in high-risk sexual behaviors after intervention. It has been reported that the prevalence of high-risk sexual behavior among rural migrants changes from 0.3% to 18.2% after comprehensive intervention [1,28]. Thus, based on the initial results of our survey, the condom use rate in each high-risk sexual behavior of migrant workers was expected to increase from 4.5% to 14.5%. Therefore, using a power of 90% and an α of 0.05, the required sample size for each group of this study was 225 migrants (total of 450). Taking into account a 30% drop-out rate, the required sample size was 586 migrants (293 migrants/group). Statistical analysis was performed using SPSS 16.0 for Windows (SPSS Inc., Chicago, IL, USA). All variables were initially analyzed descriptively, with means and standard deviations for quantitative data, and frequencies and percentages for categorical variables. Differences in knowledge or attitude between the EG and CG were assessed by independent-samples t-test. The chi-square test was used to detect differences in HIV/AIDS-related behavior. P-values < 0.05 were considered statistically significant. Figure 1 shows the participants' flowchart: 293 EG migrants and 300 CG migrants completed the baseline survey. After 6 months of intervention, 8 migrants from the EG were lost to follow-up due to a change of employer. In the following 6 months, an additional 19 migrants were lost to follow-up. After 12 months, the EG consisted of 266 participants and the CG of 263 participants. Results There was no significant difference in age, gender, marital status, and education between the EG and CG (p > 0.05). Most rural migrants were male (53.2% and 56.0% in EG and CG), married (91.8% and 92.3% in EG and CG), and had completed junior middle school (68.3% and 66.3% in EG and CG) (Table 1). Table 2 shows the HIV/AIDS knowledge scores of participants. Before the intervention, there was no significant difference in the total score and each dimension score between the EG and CG (p > 0.05). Aside from the relatively high score on transmission route knowledge, the total score and the other dimension scores were low; the score for misconceptions about transmission routes was especially low. The EG average knowledge scores after 6 and 12 months of intervention differed significantly from the pre-intervention scores (p < 0.05). During the intervention, the total score and each dimension score gradually increased, and the scores after 12 months were close to the maximum value (Figure 3). The total score and dimension scores of the EG were significantly higher than those of the CG after 12 months of intervention (p < 0.05).
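The group comparisons reported here follow the tests named in the Methods. A minimal, self-contained sketch is given below; the numbers are invented for illustration and are not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic knowledge scores (means/SDs chosen arbitrarily, not taken from Table 2)
eg_scores = rng.normal(45, 4, size=266)   # EG after 12 months
cg_scores = rng.normal(34, 6, size=263)   # CG after 12 months
t_stat, p_val = stats.ttest_ind(eg_scores, cg_scores)   # independent-samples t-test
print(f"t = {t_stat:.2f}, p = {p_val:.3g}")

# Chi-square test for a behavior item, e.g. condom use (counts are invented):
#                       used condom  did not
contingency = np.array([[180,  86],          # EG
                        [ 95, 165]])         # CG
chi2, p_chi, dof, _ = stats.chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_chi:.3g}")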
In the CG, the total score, basic knowledge score and transmission route score after 12 months were significantly higher than those before the intervention (p < 0.05), but there was no significant increase in the scores for the categories "misconceptions about transmission routes" and "prevention-treatment" after the intervention period (p > 0.05). Table 3 shows the HIV/AIDS attitude scores. Before the intervention, there were no significant differences in the total attitude score and each dimension score between the EG and CG (p > 0.05). After 6 months, the EG total score and the sub-attitude scores relating to behavior changes were significantly increased (p < 0.05). However, the scores for attitude to AIDS and to infected patients did not change (p > 0.05). After 12 months, the total score and each sub-score of the EG were significantly higher than those before the intervention or those of the CG (p < 0.05); the scores also gradually increased with prolonged intervention time (Figure 4). In the CG, the total score and each sub-score after 12 months showed no significant difference compared with those before the intervention (p > 0.05). HIV/AIDS Behavior Before the intervention, only one participant had sold blood or accepted voluntary HIV counseling and testing (VCT). Therefore, the comparison of HIV/AIDS behavior focused on sex-related behaviors. Excluding the subjects who had no sexual activity during the last 3 months, sex-related behaviors were compared in 274 (pre-intervention), 280 (post-6 months intervention), and 266 (post-12 months intervention) subjects in the EG, and 286 (pre-test) and 260 (post-test) subjects in the CG. Table 4 shows the comparison of HIV/AIDS-related sexual behaviors. Before the intervention, there were no significant differences between the EG and CG (p > 0.05). After 6 months, there were significant changes in only 2 items in the EG ("communicate with sexual partner" and "test rate of STD") (p < 0.05). After 12 months, we found that most behaviors (commercial sex, use of condoms, reason for using condoms, etc.) were significantly improved (p < 0.05), and that only one item ("number of sexual partners") did not improve significantly (p > 0.05). For each item within the CG, there was no significant difference between pre- and post-intervention (p > 0.05). Discussion We explored the efficacy of an HIV/AIDS prevention and control program for rural migrants in the Shaanxi Province by analyzing the HIV/AIDS-related KAB changes resulting from interventions at both rural and urban sites. Comparison between the EG and CG highlighted that the intervention achieved a significant improvement in participants' HIV/AIDS-related KAB. There are many studies on HIV/AIDS prevention in rural migrants, but most interventions are cross-sectional and one-sided [29,30]. In the present study, intervention evaluation was carried out at different times at both the migration source and destination, in order to ensure the continuity of the education. Secondly, an individual AIDS-PMMRM was provided for each migrant. Based on systematic assessment, each intervention was implemented in accordance with the migrant's physical and psychological characteristics and lifestyle, to ensure that each participant received the appropriate educational content.
This was done by tailoring the intervention to each participant according to his or her sexual behaviors (low- vs. high-risk), literacy, access to technology (compact disc vs. brochure) and availability. In addition, studies have shown that HIV/AIDS prevention knowledge from different sources produces different results, and that people are most willing to accept prevention knowledge from medical professionals [31,32]. In this study, a professional team administered the intervention. The staff from the CDC were mainly responsible for guiding the interventions with professional knowledge, while the staff of the CHA and family planning services conducted the direct interventions owing to their strong social skills. The role of community organizations in HIV/AIDS prevention has been recognized [33]. With repeated intervention, the knowledge of the participants increases; meanwhile, the capability of the medical staff also increases. Compared with the EG, migrants in the CG showed improved HIV/AIDS "basic knowledge" and "transmission routes" scores after 12 months, while the other aspects ("misconceptions about transmission routes" and "prevention and treatment") were not improved. These results indicate that the HIV/AIDS publicity campaigns administered in urban and rural areas have a certain impact, but that the breadth and depth of their effects are insufficient as a consequence of complex factors such as time, place, method and educators. This also suggests that exploring a scientific, systematic and feasible intervention program is necessary. The level of HIV/AIDS knowledge directly affects attitudes to AIDS, and participants with a lower cognitive level hold more seriously discriminatory attitudes towards those suffering from HIV/AIDS [12,34]. Moreover, attitude is influenced by personal experience, values, and emotions, and changing attitudes takes a long time. In the present study, psychological and social influencing factors were accounted for during the cyclical assessment and intervention. For instance, in the assessment of the migrants at the migration source, we observed that the "subjunctive community" (composed of migrants' social networks) [35] and the "key person" play a very important role in shaping the ideas of the group. The researchers purposely arranged for the "key person" to communicate with the HIV-positive migrant, and encouraged them to exploit the "subjunctive community" in peer education. Simultaneously, at the migration destinations, education was promoted through social and family support. The information exchange and emotional communication among family members have an important effect on individual perceptions, beliefs, and behaviors [36]. Involving family members can not only provide migrants with a more powerful support system, but also acts to reduce the possibility of HIV transmission from migrants to their family members. After 6 months of intervention, the participants recognized that condoms are a simple and efficient means to prevent HIV/AIDS, and they learned some skills for communicating with their sexual partners. The apparent increase in the detection of STDs reflects the fact that more migrants were tested. After 12 months, the rates of condom use within the EG were significantly increased, and more migrants used condoms for the purpose of disease protection. The incidence of participation in commercial sex also decreased, but the number of sexual partners did not change.
We concluded that participants' understanding of sexual health and the extent to which they trust their sexual partners influenced their sexual behavior. This conclusion is in accord with previously published research [37]. Commercial sexual partners have been recognized as the most dangerous population in the spread of HIV/AIDS, while stable partners are safer as a consequence of mutual understanding and trust [15,38,39]. The results achieved may also depend upon the interveners' skills. A previous study showed that 18 months of practice were needed to improve health prevention staff's skills [19]. In our study, the interveners implemented the HIV/AIDS prevention model for only 1 year. Their intervention skills would therefore need to be improved, mainly because sexual behavior is a complex behavior influenced by social, emotional and situational factors. After 12 months, the frequency of VCT in the EG was significantly increased. Regular tracking and information collection showed that those who participated in VCT were mainly those who had sold blood or received blood transfusions, while drug users and people with dangerous sexual behavior still rarely participated in VCT. This suggests that participation in VCT may be related to the routes of infection. If the route of infection involves what is traditionally considered to be "immoral behavior", infected individuals are less willing to participate in VCT, fearing discrimination and apathy. In the present study, we selected rural migrants aged 18-49 because: (1) migrants aged 18-49 represent 69.9% of all migrants in China [6]; and (2) a report on the 2010 situation of the AIDS epidemic in the Shaanxi province showed that more than 80% of AIDS-infected people and patients are 18-49 years old [4]. Finally, a previous survey showed that among migrants aged 18-49, 62.3% had engaged in high-risk sexual behavior. However, more studies are needed to assess the complete population of migrants in China and in other countries. The present study suffers from some limitations. First, the home villages of the participants were relatively close to their workplace. Considering that most people in Lantian County worked in Xi'an city, and in order to facilitate the survey, Lantian County and Xi'an city were selected as survey locations in this study. Xi'an is the capital city of Shaanxi Province, and Lantian County lies within the territory of Shaanxi Province. Previous studies have demonstrated that rural migrants working in places farther from home have a higher incidence of high-risk behavior compared with those working closer to home [13][14][15]. Therefore, in the present study, the incidence of high-risk behavior in the migrants was clearly lower than that reported in migrants working in Shanghai, Beijing, or Guangzhou. Second, the dropout rate of participants was high. Since the information network platform for migrants is not fully developed, five co-instructors were sent to track the migrants. Even so, during the year of intervention, about 10% dropout still occurred in the EG and CG. Hence, an appropriate management network for migrants has to be established before the HIV/AIDS prevention program for migrants can be extended to the whole population. Finally, the long-term effects of the intervention need further evaluation. After one year of intervention, the attitudes and behaviors of migrants toward AIDS were improved.
However, whether this knowledge and these attitudes and behaviors can be retained or improved beyond one year needs to be verified in a follow-up study. Conclusions We observed that the cross-site HIV/AIDS prevention and management model could improve rural migrants' HIV/AIDS-related knowledge, attitude and behavior. Further studies would illuminate the extent of this improvement, examine how it may be maintained, and assess the impact of this intervention on the HIV acquisition rate of participants. In addition, this model could be applied elsewhere in the world.
2015-09-18T23:22:04.000Z
2014-04-01T00:00:00.000
{ "year": 2014, "sha1": "278beff0ee39ce28c392584e23f7a6d90053147a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/11/4/4528/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "278beff0ee39ce28c392584e23f7a6d90053147a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
222231778
pes2o/s2orc
v3-fos-license
Immune-Activated Regional Lymph Nodes Predict Favorable Survival in Early-Stage Triple-Negative Breast Cancer Immune response and immunotherapy play important roles in triple-negative breast cancer (TNBC). However, it is difficult to judge whether a cancer is "immune-inactivated" or "immune-activated" from the carcinoma itself. The immune reaction of the microenvironment or the host to the tumor might be more informative. We assumed that clinically enlarged but pathologically negative regional lymph nodes served as an indicator of an early immune response to tumors. First, we identified women with pN0 breast cancer from the current Surveillance, Epidemiology, and End Results database, and we compared the breast cancer-specific survival (BCSS) of cN1 patients with that of cN0 patients. Then, we extracted total RNA from 36 paired large (defined as minimum diameter more than 15 mm) and small lymph nodes (defined as maximum diameter less than 5 mm) from 12 TNBC, 12 HER2-enriched, and 12 luminal-like patients and performed RNA sequencing to explore the gene expression and cellular landscape of large nodes compared to small ones. Among 692 women with pathologically confirmed node-negative disease, cN1 patients unexpectedly had a better BCSS compared with cN0 patients in TNBC (adjusted hazard ratio 0.148, 95% CI 0.040–0.546, P = 0.004) but not in other subtypes. Further transcriptome sequencing of 12 paired enlarged and small negative nodes from TNBC patients revealed that increased immune activation signaling (e.g., interferon-gamma response pathways) and abundant immune cells (activated dendritic cells, CD4+ and CD8+ T-cells) were more frequently observed in enlarged nodes. Our data imply that early immune activation in regional lymph nodes in TNBC might affect survival. INTRODUCTION Among females, breast cancer is the most commonly diagnosed cancer and the leading cause of cancer death, followed by colorectal and lung cancer for incidence, and vice versa for mortality (1). Considering its heterogeneous biological nature, breast cancer can be clinically stratified into three main subtypes: luminal-like, human epidermal growth factor receptor 2 (HER2)-enriched, and triple-negative breast cancer (TNBC), according to the status of three critical receptors: estrogen receptor (ER), progesterone receptor (PR), and HER2 (2). This molecular information, in coordination with clinicopathological information, is used to predict patient outcomes and to guide therapeutic decisions (3). Immunotherapy in combination with chemotherapy has shown promising efficacy across many different tumor types (4). The treatment of several kinds of malignancies with immune checkpoint inhibitors (against programmed death receptor-1/ligand-1 [PD-1/PD-L1]) has changed the treatment panorama (5,6). In TNBC, a difficult-to-treat disease with a high unmet therapeutic need, the results of the IMpassion130 clinical trial recently led to an accelerated approval for atezolizumab, an antibody targeting PD-L1, for patients with PD-L1-positive advanced TNBC (7). Judging a breast carcinoma as immune-reactive or immune-unreactive is still difficult. For instance, in IMpassion130, PD-L1-positive status was defined as PD-L1 expression on tumor-infiltrating immune cells of 1% or more, indicating that it is important to take the cancer stroma or microenvironment into consideration (7).
In other words, it is difficult to judge a cancer as "immune-inactivated" or "immune-activated" from the carcinoma itself; the reactions of the microenvironment or host to the tumor might be more informative. Regional lymph nodes, which provide the clues for initial tumor metastasis, are among the most important prognostic determinants. There are two main types of evaluation for regional lymph node status: clinical and pathological. Clinical assessment provides a preoperative estimate of lymph node status based on physical examination and imaging, and is thus crucial for subsequent surgical decision-making. Pathological evaluation, based on the findings during or after surgery with detailed pathological information, gives the most precise assessment of lymph nodes to direct adjuvant treatment and predict survival outcomes. Sometimes there is inconsistency between these two types of estimates, usually in cases in which clinical assessment underestimates the extent of the disease (8). However, there is another segment of the population whose negative pathological lymph node results contradict positive clinical ones (9), and much uncertainty still exists about their clinicopathological features and prognosis. Considering that regional lymph nodes are part of the host's immune system, we hypothesized that clinically enlarged but pathologically negative regional lymph nodes might serve as an indicator of an early systemic immune response to the tumor, and that this immune activation probably results in an improved survival outcome for breast cancer patients. To test this hypothesis, we conducted the present study. Study Population The current Surveillance, Epidemiology, and End Results (SEER) database consists of 18 population-based tumor registries, covering approximately 34.6% of the United States population. The SEER program collects data on patients' demographics, tumor characteristics, the first course of treatment, and survival outcomes. SEER*Stat 8.3.5 with the Nov 2017 submission, with years of diagnosis from 2010 to 2015, was used to generate the patient list. Since the adjusted American Joint Committee on Cancer (AJCC) lymph node categories for breast cancer in the SEER database do not separate clinical from pathologic information, we mainly used the code "CS Regional Node Evaluation" (codes 0, 1, 5, 9), which indicates the staging basis (clinical or pathological) for the lymph node category, to extract patients with clinical lymph node (cN) information. Because of the absence of data on HER2 status for patients diagnosed before 2010, we identified eligible patients according to the following criteria: diagnosed between 2010 and 2015, female, aged between 18 and 70 years, breast cancer as the first cancer diagnosis, microscopically confirmed infiltrating ductal carcinoma, unilateral, pT1-T2, cN0-N1, surgery performed, and regional lymph nodes examined and pathologically negative (pN0). Patients with cN2 status were excluded, as they might have received neoadjuvant chemotherapy. Although fine-needle aspiration is the best method to establish clinical node stage, in routine practice not all patients undergo fine-needle aspiration, and the SEER database did not provide such information. To ensure the accuracy of pathological lymph node assessment, the number of lymph nodes dissected in therapeutic surgery had to be at least 10 for each patient.
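The inclusion logic listed above can be expressed as a simple filter. The sketch below is purely illustrative: the column names are invented, and SEER case listings are in practice exported through SEER*Stat rather than read as a flat file this way; the exclusions described in the next paragraph would be applied afterwards.

import pandas as pd

# Hypothetical SEER case-listing export; all column names are illustrative only.
cases = pd.read_csv("seer_breast_export.csv")

eligible = cases[
    cases["year_dx"].between(2010, 2015)
    & (cases["sex"] == "Female")
    & cases["age_dx"].between(18, 70)
    & cases["first_primary"]                           # breast cancer as first cancer diagnosis
    & (cases["histology"] == "infiltrating ductal")    # microscopically confirmed IDC
    & cases["laterality"].isin(["left", "right"])      # unilateral
    & cases["t_stage"].isin(["T1", "T2"])
    & cases["cn_stage"].isin(["cN0", "cN1"])           # cN2 excluded (possible neoadjuvant therapy)
    & cases["surgery_performed"]
    & (cases["pn_stage"] == "pN0")
    & (cases["nodes_examined"] >= 10)                  # ensure reliable pathological assessment
]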
Subsequently, patients with unknown data on race, tumor grade, ER and PR status, as well as HER2 status, were excluded. As a result, we identified 692 breast cancer patients from the SEER database who satisfied our research purpose. Eligible patients were classified into luminal-like (ER- and/or PR-positive, any HER2 status), HER2-enriched (ER- and PR-negative, HER2-positive), and TNBC (ER-, PR-, and HER2-negative) subgroups. Enlarged and Small Lymph Node Samples The preferred size cutoff for cN1 versus cN0 would be 10 mm, because nodes are generally considered normal if they are less than 10 mm in diameter; however, some investigators suggest that nodes larger than 15 mm should be considered abnormal (10). In the current study, we used extreme values of node size. Larger nodes (more than 15 mm) might present a higher likelihood of immune response, while smaller nodes (less than 5 mm) might represent non-activated ones. We selected 36 paired enlarged (defined as minimum diameter more than 15 mm, assessed by node biopsy) and small lymph nodes (defined as maximum diameter less than 5 mm) from 36 patients with operable invasive ductal carcinoma: 12 pairs from luminal-like tumors, 12 from HER2-enriched, and 12 from TNBC. Patients underwent surgery at the Department of Breast Surgery, Fudan University Shanghai Cancer Center. All patients were screened for axillary node size by ultrasound before surgery. During node biopsy, we dissected two nodes (one large and the other small) and incised parts of their medullas for frozen section examination and, if they proved pathologically negative, subsequent RNA extraction. The remaining parts of the two nodes, as well as the remaining nodes obtained by axillary lymph node dissection, were sent to the Department of Pathology. All lymph nodes were pathologically confirmed to be negative; if any node was diagnosed as positive for tumor metastasis, the case was excluded. RNA extraction was performed only after the full pathological examination of the nodes was finished and the immunohistochemistry results for ER/PR/HER2 were available. This study was approved by the Institutional Ethics Committee of Fudan University Shanghai Cancer Center. All patients signed informed consent forms. RNA Extraction, RNA Sequencing, and Transcriptome Data Analysis An RNeasy mini kit (Qiagen, Hilden, Germany) was used to purify total RNA from lymph node tissue. The total RNA samples (1 µg) extracted from lymph nodes were treated with VAHTS mRNA Capture Beads (Vazyme, Nanjing, China) to enrich polyA+ RNA before constructing the RNA-seq libraries. RNA-seq libraries were prepared using the VAHTS mRNA-seq v2 Library Prep Kit for Illumina Xten (Vazyme, Nanjing, China) following the manufacturer's instructions. Briefly, polyA+ RNA samples (approximately 100 ng) were fragmented and then used for first- and second-strand cDNA synthesis with random hexamer primers. The cDNA fragments were treated with a DNA End Repair Kit to repair the ends, then modified with Klenow to add an A at the 3' end of the DNA fragments, and finally ligated to adapters. Purified dsDNA was subjected to 12 cycles of PCR amplification, and the libraries were sequenced on an Illumina sequencing platform with a 150 bp paired-end run. Sequencing reads from the RNA-seq data were aligned using the spliced read aligner HISAT2, supplied with the Ensembl human genome assembly (Genome Reference Consortium GRCh38) as the reference genome.
Gene expression levels were calculated as FPKM (fragments per kilobase of transcript per million mapped reads). The abundance of cell types in lymph nodes was calculated using the xCell tool (http://xCell.ucsf.edu/), which can infer the abundance of 64 immune and stromal cell types from RNA-seq and microarray data. Gene Set Enrichment Analysis (GSEA) was performed using the GSEA software (v3.0) and the Molecular Signature Database (v7.0). Hierarchical clustering was performed using Euclidean distance as the clustering distance and the average linkage method. The heatmap depicted the differentially expressed coding genes, defined as |log2(fold change, FC)| > 1.0 and P-value < 0.05; genes with unknown function (for example, LOC100996401) were excluded. Statistical Analysis The endpoint for survival analysis was breast cancer-specific survival (BCSS), calculated from the date of diagnosis to the date of breast cancer-specific death. Patients who died of other causes were censored. Age at diagnosis and tumor size were converted into categorical variables. Grade I and grade II patients were merged because of the limited numbers of patients, and the races Asian/Pacific Islander and American Indian/Alaska Native were combined for a similar reason. Comparison of clinicopathological characteristics between patients with cN0 and cN1 disease was conducted by χ² test, or Fisher's exact test if necessary. The Kaplan-Meier method was applied to plot survival curves, with the log-rank test to compare univariate survival differences. A Cox proportional hazards model was used for multivariate analysis and to calculate hazard ratios (HR) with 95% confidence intervals (CI). All these statistical analyses were performed using SPSS 23.0 (IBM Corp, Armonk, NY, United States). Comparisons of xCell scores between the large and small lymph node groups were conducted by paired t-test using GraphPad Prism 8.0 (GraphPad Software, San Diego, CA, United States). The heatmaps were generated using the MORPHEUS tool (software.broadinstitute.org/morpheus/). Statistical significance was determined with two-sided P < 0.05. Baseline Characteristics of Patients First, we compared the survival of patients with clinically positive nodes (cN1) with that of patients with clinically negative nodes (cN0) in pathologically confirmed node-negative (pN0) TNBC. Theoretically, cN1 patients had larger, palpable nodes, while cN0 nodes tended to be small and undetectable by imaging tests (according to the AJCC staging system). We identified women with pN0 breast cancer from the current SEER database. Among them, we compared the BCSS of cN1 patients with that of cN0 patients. A total of 692 eligible patients were selected from the SEER database, including 359 (51.9%) patients with cN1 and 333 (48.1%) patients with cN0 disease. The median follow-up time was 55 months. The basic information on patients' clinicopathological variables by cN status in the whole cohort and in the different subgroups (luminal-like, HER2-enriched, and TNBC) is shown in Table 1. Effect of Clinical Node Status on BCSS in Different Subtypes in pN0 Cases Given that all the patients had the same pN0 stage, it was expected that there would be no significant difference in BCSS between cN0 and cN1. The results were consistent with this expectation in the whole pN0 group (Figure 1A, P = 0.081), as well as in the luminal-like subgroup (Figure 1B, P = 0.463) and the HER2-enriched subgroup (Figure 1C, P = 0.504).
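A minimal sketch of the quantification and filtering steps described in the Methods is given below. The FPKM formula is the standard one; the gene counts, lengths and P-values are invented purely for illustration.

import numpy as np

def fpkm(counts, gene_lengths_bp, total_mapped_fragments):
    """Standard FPKM: fragments per kilobase of transcript per million mapped fragments."""
    return counts * 1e9 / (gene_lengths_bp * total_mapped_fragments)

def de_mask(fpkm_large, fpkm_small, p_values, eps=1e-9):
    """Differential-expression filter used for the heatmap: |log2 FC| > 1.0 and P < 0.05."""
    log2_fc = np.log2((fpkm_large + eps) / (fpkm_small + eps))
    return (np.abs(log2_fc) > 1.0) & (p_values < 0.05)

# Invented example for three genes (counts, lengths in bp, total mapped fragments):
large = fpkm(np.array([500, 40, 300]), np.array([2000, 1500, 3000]), 3.0e7)
small = fpkm(np.array([120, 35, 310]), np.array([2000, 1500, 3000]), 2.8e7)
print(de_mask(large, small, np.array([0.01, 0.6, 0.8])))  # -> [ True False False]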
Contrary to our expectations, cN0 cases exhibited worse BCSS than cN1 cases in the TNBC subgroup (Figure 1D, P = 0.001). Multivariate analysis after adjusting for other confounding factors reconfirmed the findings in TNBC (adjusted HR of 0.15, 95% CI: 0.04-0.55, P = 0.004). It is quite anomalous that cN0 patients had unexpectedly worse survival than cN1 patients. A potential explanation is that the cN1 cases in the present study probably had immune-activated lymph nodes that were larger in size but actually pathologically negative. Based on physical examination or imaging tests, physicians might treat larger nodes as metastatic and classify them as cN1. Differentially Expressed Genes in Large and Small Lymph Nodes To investigate the potential molecular events behind enlarged but pathologically negative nodes, we extracted total RNA from 12 paired large (minimum diameter more than 15 mm) and small lymph nodes (maximum diameter less than 5 mm) from 12 TNBC patients and performed RNA sequencing. Each patient provided one pair consisting of one large node and one small node, when available. We chose paired samples from the same patient to reduce interindividual heterogeneity. The heatmap (Figure 2A) depicted the differentially expressed coding genes. The large node group displayed up-regulation of genes involved in innate and adaptive immune responses compared with the small node group. The heatmap shown in Figure 2A included two types of genes, one related to immune activation and the other related to the T-cell receptor and Ig repertoire; the former class was mainly enriched in enlarged lymph nodes, and the latter class appeared to be expressed more highly in small lymph nodes. The most differentially expressed genes are shown in Figure 2B, mainly including immune-related genes such as IL21, CCL17, AOC1, CCL22, and IFNA5. GSEA unveiled enriched inflammatory and interferon-gamma response pathways in enlarged nodes (Figure 2C). We also performed the same analyses in an additional 24 pairs of large and small nodes from 12 luminal-like patients and 12 HER2-enriched patients, respectively. In the HER2-enriched subtype, the results were similar to the findings in TNBC, but the expression intensities of immune-related genes were lower than those in TNBC (P < 0.05 for IL21, CCL17, and CCL22). The luminal-like subtype seemed to have poor immunogenicity; GSEA did not indicate an adequately enriched immune-reaction pathway, given the limited number of differentially expressed immune-related genes in this type (data not shown). Differential Immune Cell Abundance in Large and Small Lymph Nodes We further explored the cellular landscape of large nodes compared to small ones from patients with TNBC using the xCell tool to enumerate cell subsets from transcriptome data. Cell subsets that showed a significant difference between the two groups (P < 0.05) and FC > 2 in the heatmap (Figure 2D) were chosen. Large nodes were infiltrated with more immune cell subsets, while small ones were infiltrated with more stromal cell subsets. The FC values of cell subsets are illustrated by rank in Figure 2E, with significantly increased dendritic cells, especially activated dendritic cells, CD4+ T-cells, and CD8+ T-cells. Paired comparisons of immune and stromal scores, estimated from cell abundance, indicated an up-regulated immune response in large nodes (Figure 2F). DISCUSSION In the current study, we investigated the immunogenomic portrait of clinically enlarged but pathologically negative lymph nodes.
Immune activation is the probable mechanism underlying the improved survival of cN1 patients compared with cN0 patients in the TNBC subtype. Lymph node status is one of the most important predictive factors in breast cancer. Clinical assessment of lymph nodes provides preoperative information on the axilla and aids surgical decision-making. However, clinical examination is not reliable enough, owing to its limited sensitivity and specificity (11), resulting in the critical need for pathological assessment. The discrepancy between these two kinds of evaluation has been widely reported, and most studies have focused mainly on the false-negative results of clinical evaluation while ignoring the false-positive situation (9,12). Sacre et al. compared clinical assessments with pathological findings and found 29% false-positive cases in addition to 45% false-negative cases, implying that the false-positive population deserves equal attention given its notable size (9). However, much uncertainty still exists regarding the clinicopathological features and survival outcomes of patients with such "false-positive" nodes, which motivated the current study. Immunotherapy is an evolving therapeutic option with recent encouraging results across multiple tumors (4). In breast cancer, however, a limited response to this novel class of treatment has been seen, owing to the poor immunogenicity of breast cancer (13,14). Subsequently, several studies focusing on the anti-tumor immune response in the setting of molecular stratification uncovered certain immunogenic subtypes of breast cancer. For example, tumor-infiltrating lymphocytes (TILs), which provide insight into the host's anti-tumor immunity, were present at the highest levels in TNBC with favorable prognosis (15,16). Identifying highly immunogenic subgroups with clinicopathological biomarkers could help to select potential candidates for immunotherapy in breast cancer. We searched the SEER database for early-stage breast cancer cases with available clinical and pathological lymph node information and found 359 cN1&pN0 patients. Survival analysis of BCSS showed that cN1&pN0 patients had improved BCSS compared with cN0&pN0 patients only in the TNBC subtype, which has been considered the most immunogenic subtype of breast cancer due to its high genomic instability and mutation burden (17). Compared to other breast cancer subtypes, TNBC is the subtype most associated with TIL infiltration and PD-1/PD-L1 expression, and it comprises subcategories of "hot" and "cold" tumors, which carry different predictive and prognostic implications. Considering that the immune cells infiltrating the tumor microenvironment may have migrated from lymphoid organs, it is not difficult to understand that heterogeneous immunity might also be seen in regional lymph nodes in TNBC and carry greater clinical value than in other subtypes. We previously classified TNBC into three heterogeneous clusters, including one "immune inflamed" cluster characterized by the infiltration of adaptive and innate immune cells, suggesting the possibility of administering immune checkpoint inhibitors in this segment of patients (18). A potential explanation of our survival results is that the enlargement of nodes represented reactive hyperplasia, a regional manifestation of the host's systemic anti-tumor immunity. The immune response involves multiple organs and tissues across the body.
However, most studies have focused mainly on the local immune response in the tumor or peritumoral microenvironment, irrespective of systemic immune dynamics. Recently, several studies have turned their attention to systemic anti-tumor immunity. Spitzer et al. found that patients responded poorly to immunotherapy when the migration of immune cells from the secondary lymphoid organs to the tumor environment was suppressed, implying that the immune response is systemic (19). Regional lymph nodes, as the lymphoid tissue closest to a tumor, display vital and complex immune responses during tumor regression. The proliferation and activation of lymphocytes in regional nodes, which might appear clinically as "pseudo-positive" enlarged nodes, actually served as an indicator of an activated immune system. Our study investigated the immunological portrait of enlarged negative lymph nodes compared with small lymph nodes. The results showed that enlarged lymph nodes were infiltrated with more immune cells rather than stromal cells. Further investigation into immune cell types revealed that total dendritic cells and their subcategories (aDC, iDC, cDC, and pDC) were significantly enriched in enlarged lymph nodes. Additionally, CD4+ T-cells, including Th1, Treg, and CD4+ memory T-cells, were also highly abundant in enlarged lymph nodes. However, despite up-regulated infiltration of naïve CD8+ T-cells, activated CD8+ cells were absent in large lymph nodes, consistent with the lack of differentially expressed cytotoxic-response genes between large and small lymph nodes. Thus, the enlarged regional lymph nodes might represent a strong capacity for antigen presentation, facilitating the activity of other immune cells against tumor cells. Our study might have clinical implications. cN1&pN0 status, which reflects relatively high immunity and predicts better survival in TNBC, deserves more attention and could serve as a convenient method of evaluating immunity to preliminarily select patients suitable for immunotherapy. Our study had several limitations. First, the case numbers in the different subgroups were insufficient for a powerful statistical analysis. Second, though cN2&pN0 cases were technically excluded, we could not rule out the possibility that a few down-staged cN1&pN0 patients in the SEER database had received neoadjuvant chemotherapy; for those patients, the pathologically negative lymph node results may be due to chemotherapy. However, mingling in such neoadjuvantly treated patients would probably compromise the survival of the cN1&pN0 group, meaning that the survival of cN1&pN0 patients would be even better if those cases were eliminated. Taken together, we revealed that clinically enlarged but pathologically negative regional lymph nodes might serve as an indicator of an early systemic immune response to tumors. The elevated expression of immune-related genes and activated immune pathways in regional lymph nodes might confer a survival advantage in TNBC.
2020-10-10T13:07:20.720Z
2020-10-09T00:00:00.000
{ "year": 2020, "sha1": "bb37ce861b14ef704f31d57e87896a86f5543f55", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2020.570981/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bb37ce861b14ef704f31d57e87896a86f5543f55", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
1261105
pes2o/s2orc
v3-fos-license
Comments on"Capacity with explicit delay guarantees for generic sources over correlated Rayleigh channel" We address a major flaw in the abovementioned paper, which proposes to calculate effective capacity of random channels by the use of central limit theorem. We analytically show that the authors are incorrect in finding the effective capacity by first taking the limit of cumulative random process rather than taking the limit of moment generating function of the same process. We later quantify our results over a correlated ON-OFF process. I. INTRODUCTION In [2], the authors inspired by the effective bandwidth theory have developed a dual effective capacity theory to analyze random and time-varying wireless channel under a probabilistic delay constraint. The so-called effective capacity provides a way to figure out the maximal constant arrival rate that can be sustained by the stationary ergodic service process, at the target Quality of Service (QoS) exponent. The authors in [1] claim that the method given in [2] cannot be applied to general channel models such as time-correlated Rayleigh fading, and it is only parametrized with respect to the so-called QoS exponent. Hence, they provide an alternate method of calculating the effective capacity which only involves the mean and variance of the cumulative service process. We analytically show that the effective capacity calculated by the method in [1] II. BRIEF SUMMARY OF EFFECTIVE CAPACITY THEORY For stationary ergodic arrival and service process, the queue-overflow probability is shown to be asymptotically decaying with exponential rate where Q(∞) is the queue length at stationary state, and θ is called the QoS exponent. The smaller θ is, the looser QoS guarantee achieves, which is reflected in the slower decay rate. On the contrary, faster decay rate with larger θ guarantees stringent QoS performance. Let c(τ ) be the instantaneous service rate of the queue in terms of bits that can be served in a finite length slot τ , and C(t) = t τ =0 c(τ ) be the maximum aggregate number of bits that can be served during [0, t]. Wu and Negi developed the concept of effective capacity [2], which is defined as the function of Gartner-Ellis (GE) limit of service process and the QoS exponent θ > 0, i.e., where α C (θ) is the GE limit of the service process. GE limit of service process is defined in terms of the logarithm of moment generating function of cumulative service process C(t). III. COMMENTS ON THE ANALYSIS IN [1] In Section III of [1], the authors claim that regardless of the distribution of instantaneous service process, c(τ ), the effective capacity, E C (θ), is that of a Gaussian random variable with mean and variance depending on the service process. We analytically show that in general this is not true, since the authors made a mistake while taking the GE limit of the cumulative service process. For ease of exposition, we follow [1], and develop our arguments for the case of uncorrelated wireless channel, where c(τ ), ∀τ , are independent and identically distributed (iid) random variables. For this case, GE limit of service process can be found from (3) as follows: = lim where (4) is obtained from the assumption that c(·) is iid. Note that (5) shows that GE limit of cumulative service process is simply the limit of the sum of logarithm of moment generating function of the instantaneous service process c(·). On the contrary, Soret et. 
On the contrary, Soret et al. state in Section III of [1] that as t → ∞ the central limit theorem can be applied, so that C(t) can be considered a Gaussian random variable with mean t·m_c and variance t·σ_c². Note that the moment generating function of this Gaussian random variable is M_C(θ, t) = e^(θ t m_c + (θ²/2) t σ_c²). Even though this argument is correct on its own, the derivation of the effective capacity based on it is flawed. Soret et al. give the GE limit based on the moment generating function of the cumulative service process, taken to be Gaussian, as:

ᾱ_C(θ) = lim_{t→∞} (1/t) ln M_C(θ, t) = θ m_c + (θ²/2) σ_c².    (6)

Clearly, (6) and (5) are not equal to each other for all instantaneous service processes c(·), even when t → ∞. We demonstrate this with a simple example in the following section. The problem with the argument made by Soret et al. is that the authors first take the limit of C(t), which is not only inside both the logarithm and the expectation but also appears in the exponent of Euler's number. In addition, after moving the limit into the exponent, they still take the limit of the moment generating function of the Gaussian random variable divided by t. In summary, (6) is the result of the following mathematical statement:

ᾱ_C(θ) = lim_{t→∞} (1/t) ln E[e^(θ C̃(t))],    (7)

where C̃(t) is the Gaussian random variable to which C(t) converges by the central limit theorem. In general, α_C(θ) and ᾱ_C(θ) are not equal to each other for all θ and c(·). In fact, a straightforward observation shows that these two quantities are equal only when the instantaneous service process c(·) is an iid Gaussian random variable with mean m_c and variance σ_c². IV. A NUMERICAL EXAMPLE: ON-OFF PROCESS One may argue that even if α_C(θ) and ᾱ_C(θ) are not exactly equal, ᾱ_C(θ) represents a good approximation to α_C(θ), whose derivation is quite complex for a large variety of wireless channels. In the following, we demonstrate that this is in fact not true, and that the quality of the approximation depends on the channel parameters as well. For this purpose, we consider a time-correlated and slotted channel model, namely the ON-OFF channel. The main reason we consider the ON-OFF channel model is that it is analytically simple enough that an exact closed-form solution of α_C(θ) can be obtained. Meanwhile, the ON-OFF channel still displays time correlation among channel states, similar to more complicated channel models such as Rayleigh channels. Note that the authors in [1] argue that their approach is applicable not only to the Rayleigh channel model but also to all other possibly correlated channel models. By assuming the ON-OFF channel model, we can explicitly and inarguably show that their method provides a good approximation to the exact effective capacity only over a limited range of QoS exponents θ and channel rates. The ON-OFF channel is modeled as follows. In the ON state the user can send r bits/slot, and in the OFF state the user is not allowed to transmit any bits. The transition probability from the OFF (ON) state to the ON (OFF) state is 1−λ (respectively, 1−µ). It is easy to determine that the stationary probability of being in the ON state is π_ON = (1−λ)/(2−λ−µ). The GE limit of a Markov-modulated process is found in [3]. Let Q denote the transition probability matrix of an irreducible and aperiodic general N-state Markov-modulated process, and let r_i be the service rate of each state i, 1 ≤ i ≤ N. Then,

α_C(θ) = ln ρ(Q e^(θR)),    (8)

where R = diag(r_1, r_2, ..., r_N) is a diagonal matrix of service rates and ρ(A) is the spectral radius of matrix A. For the ON-OFF source, (8) simplifies to

α_C(θ) = ln[ (a(θ) + √(a(θ)² − 4(λ+µ−1)e^(rθ)))/2 ],    (9)

where a(θ) = λ + µe^(rθ). Note that the effective capacity of the ON-OFF channel can be determined by inserting (9) into (2).
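As a concreteness check, the following Python sketch (ours, not code from [1] or [3]) evaluates the exact GE limit of the ON-OFF channel via the spectral radius in (8) and compares the resulting effective capacity with the Gaussian approximation. The per-slot variance fed to the approximation is the iid surrogate π_ON(1−π_ON)r², which is an assumption on our part, since [1] derives block-based statistics for correlated channels.

import numpy as np

lam, mu, r = 0.2, 0.6, 1.0           # P(stay OFF), P(stay ON), rate in ON state
Q = np.array([[lam, 1 - lam],
              [1 - mu, mu]])         # state order: (OFF, ON)
R = np.array([0.0, r])               # per-slot service rates

def alpha_exact(theta):
    # GE limit via the spectral radius of Q * diag(exp(theta * R)), as in (8)
    M = Q @ np.diag(np.exp(theta * R))
    return np.log(max(abs(np.linalg.eigvals(M))))

pi_on = (1 - lam) / (2 - lam - mu)   # stationary ON probability
m_c = pi_on * r                      # mean per-slot rate
var_c = pi_on * (1 - pi_on) * r**2   # per-slot variance (iid surrogate; see note above)

for theta in [0.1, 0.4, 0.8, 1.2]:
    ec_exact = -alpha_exact(-theta) / theta      # insert (9)/(8) into (2)
    ec_gauss = m_c - theta * var_c / 2           # Gaussian approximation of [1]
    print(f"theta={theta:.1f}  exact={ec_exact:.4f}  gaussian={ec_gauss:.4f}")

For θ = 0.1 the two values nearly coincide (about 0.659 vs. 0.656 here), while for θ = 1.2 they have already drifted apart, consistent with the behavior discussed next.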
Meanwhile, in order to calculate ᾱ_C(θ), we first need the mean and the variance of the cumulative service process. Soret et al. state that over a block of length k_1, the mean and variance of the cumulative service process are k_1·m_c and k_1·σ_c², respectively. For the ON-OFF process described earlier, the mean per-slot rate is m_c = π_ON·r = r(1−λ)/(2−λ−µ), with σ_c² obtained from the second-order statistics of the ON-OFF process. According to the method given in [1], ᾱ_C(θ) is then determined as in (6), i.e., ᾱ_C(θ) = θ m_c + (θ²/2) σ_c². We compare the effective capacity calculated with α_C(θ) and with ᾱ_C(θ) with respect to θ and r. The state transition probabilities of the ON-OFF source are arbitrarily chosen as λ = 0.2 and µ = 0.6; we achieved similar results for other values of λ and µ. In Figure 1, we observe that the effective capacities calculated with α_C(θ) and ᾱ_C(θ) match well when θ lies in [0, 0.4]. However, one can easily notice that the two curves deviate from each other as θ increases. When θ is larger, i.e., when QoS guarantees are stricter, ᾱ_C(θ) gives incorrect negative effective capacity values.

(Figure 2: Effective capacity of the ON-OFF source with respect to transmission rate r, for θ = 0.6, λ = 0.2, and µ = 0.6.)

In Figure 2, we compare the effective capacities obtained with α_C(θ) and ᾱ_C(θ) for varying values of the transmission rate r. Again, we clearly observe that the approximation given in [1] is accurate only for small values of r. V. CONCLUSION In this work, we analytically show that the authors of [1] commit a major error in finding the effective capacity: they first apply the central limit theorem to the sum of iid random variables c(τ) with mean m_c and variance σ_c², and then take the GE limit of the resulting cumulative random process. Next, over a correlated ON-OFF channel, we numerically verify that ᾱ_C(θ) is not a good approximation to α_C(θ) either, since the quality of the approximation depends to a great degree on the channel parameters. Moreover, for given channel parameters, the proposed effective capacity expression of [1] tends to follow the exact solution only over a limited range of the QoS exponent θ, and negative capacity values can occur, as seen in Figure 1. The same observation holds for the transmission rate r, as shown in Figure 2. Based on these observations, we caution researchers on the applicability of the approach in [1], and recommend that they verify that the channel and QoS parameters are chosen such that the approximation is accurate.
2011-12-21T12:51:26.000Z
2011-12-21T00:00:00.000
{ "year": 2011, "sha1": "486fef602a91e34773fd4fedce39f61c72dab54b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "486fef602a91e34773fd4fedce39f61c72dab54b", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Computer Science" ] }
258949580
pes2o/s2orc
v3-fos-license
Effect of Fluorescent-Producing Rhizobacteria on Cereal Growth Through Siderophore Exertion

Despite soil having an abundance of iron (Fe), it is largely unavailable for proper plant growth and development. One of the mechanisms plants use to deal with iron deficiency is the uptake of iron chelated by phytosiderophores. Pseudomonas fluorescens can produce pyoverdine-type siderophores and has potential application in agriculture as an iron chelator. Therefore, bacterial isolates collected from different areas of district Faisalabad were screened for their fluorescence, siderophore production, and indole acetic acid equivalents. After selecting efficient strains from a screening test, they were evaluated for improving wheat and maize production under field conditions. The results showed that, out of 15 isolates, 7 were found to have significant plant-beneficial microbial traits. Efficient strains promoted grain yield by 24.2% and 20.2%, plant height by 30.9% and 23.7%, and total grain weight by 25.3% and 13.4% over control in wheat and maize, respectively. Similarly, significant improvements in the number of grains per cob/spike were observed. Analyses of grain iron contents showed a 67% increase compared to control for maize. Based on these results, it is concluded that bio-fortification of cereal crops with fluorescent, siderophore-producing microbes is an effective strategy favorable for plant growth and development through nutrient solubilization/mobilization.

Introduction

Micronutrients play a crucial role in the growth and development of living organisms. Intensive cereal production to feed the rising population, together with the increasing use of synthetic agrochemicals (chemical fertilizers and pesticides), is a present-day concern in the agriculture sector. Soil-inhabiting plant growth promoting rhizobacteria (PGPR) have a vital role in sustainable cereal production. Utilizing beneficial bacteria (PGPR) is an eco-friendly strategy for enhancing cereal production by modulating root system architecture, imposing systemic resistance, and producing certain allelochemicals. Moreover, the production of primary and secondary metabolites improves plant growth, nutrient uptake, quorum sensing, and defence against phytopathogens (Alori, Babalola, & Prigent-Combaret, 2019). The potential of PGPR to improve the growth of plants such as maize, rice, and wheat has been reported previously (Adjanohoun et al., 2011; Gopalakrishnan et al., 2013; Islam et al., 2014). Many strains of Pseudomonas fluorescens belonging to the PGPR are known to enhance plant growth and to secrete a soluble greenish pigment that fluoresces under UV light (Vacheron et al., 2016). Pseudomonas fluorescens is known to produce the siderophore pyoverdine, a biotechnologically significant iron chelator with great potential for application in the agricultural and medicinal sectors (Joshi et al., 2018). Low molecular weight (<1,500 Da) siderophores are Fe³⁺-chelating agents that deliver free iron to cells by interacting with membrane receptors (Johnstone and Nolan, 2015; Saha et al., 2016). Siderophore-producing bacteria are classified into three main classes, hydroxamates, catecholates, and carboxylates, depending on the Fe-chelating moiety. By binding iron tightly, siderophores reduce the bioavailability of iron to plant pathogens and thereby facilitate the suppression of phytopathogens (Beneduzi et al., 2012; Ahmed and Holmström, 2014; Herlihy et al., 2020).
Moreover, the formation of the Fe-siderophore complex is affected by the concentration of divalent or trivalent cations such as Cd²⁺, Ni²⁺, and Al³⁺ in soil, which compete with Fe for binding sites in siderophores, thereby reducing the chances of Fe binding (Herlihy et al., 2020; Gorshkov and Tsers, 2022). Iron, the second most abundant metal in the earth's crust, is an essential element for virtually all living organisms because it participates in enzymatic catalysis, electron transfer, oxygen metabolism, and DNA and RNA synthesis (Aguado-Santacruz, Moreno-Gómez, Jiménez-Francisco, García-Moya, & Preciado-Ortiz, 2012). Iron exists in aqueous solution in two interconvertible states, the divalent (Fe²⁺) and trivalent (Fe³⁺) ionic forms (Buziashvili & Yemets, 2022). The stability of these two states in the soil is regulated by pH, aeration, salinity, and biotic matter (material that originates from living things) content. In the presence of oxygen and at elevated pH, oxidation is rapid and the Fe²⁺ ion is oxidized to Fe³⁺ (Colombo, Palumbo, He, Pinton, & Cesco, 2014). Iron translocation into plants occurs in two ways. As H⁺ is released, Fe-chelate reductases are activated, reducing Fe³⁺ to Fe²⁺ ions, which are subsequently translocated into the intracellular cavity of the epidermis cells. Alternatively, Fe³⁺ ions are first bound to phytosiderophores that have been exuded into the rhizosphere (Kobayashi and Nishizawa, 2012). However, the siderophore's leading role is to ensure the bioavailability of iron through iron mobilization (Schwabe et al., 2020). Phytosiderophores function as high-affinity chelators, and the Fe³⁺-phytosiderophore complex is transferred into the cells by the Fe-phytosiderophore transporter Yellow stripe 1 (in maize) or Yellow stripe-like proteins (in other grasses). Arthrobacter is among the most abundant bacterial genera present in the rhizosphere of gramineous crops (Cavaglieri, Orlando, & Etcheverry, 2009). The concentration of Fe is low in cereals, and approximately 2 billion people globally have iron deficiency. Iron is deficient in the soils of Pakistan because of their calcareous nature, alkaline pH, and high carbonate contents (Zulfiqar et al., 2020). The Fe content of food grains should be sufficient not only to meet adults' dietary iron requirements but also to offset existing Fe deficiency. Iron (Fe) deficiency can be addressed through the diverse options available, such as genetically modified crops, food fortification, chemical fertilizers, nutritional diversification, and agronomic biofortification (García-Bañuelos, Sida-Arreola, & Sánchez, 2014). Agronomic biofortification is a new way to mitigate micronutrient malnutrition (Bouis et al., 2011; Benkeblia, 2020; Roriz et al., 2020). Likewise, the biofortification of plants with free-living microbes is a promising strategy for improving the production of food crops (Glick, 2012). Iron biofortification of cereals improves iron's bioavailability, reducing iron malnutrition. Wheat and maize are rich sources of proteins and micronutrients (Zhao et al., 2020). Therefore, this study was planned to isolate fluorescent Pseudomonas strains and to biofortify cereal crops with these microbes to identify their role in crop growth and grain enrichment with iron.

Materials And Methods

Collection of rhizobacteria: Fifty rhizobacterial isolates from wheat, maize, millet, and sorghum were collected from the Faisalabad district.
The isolates were preserved in a cold storage box to minimize microbial activity and taken to the laboratory for further characterization and analysis.

Isolation and purification of collected rhizobacteria: The dilution plate technique was used to isolate bacteria. Briefly, around 10 g of soil sample was dissolved in 99 mL of deionized water and shaken for 5-10 minutes to suspend the rhizosphere bacteria. Then 1 mL of this soil suspension was taken and poured into a 250 mL conical flask containing 99 mL of autoclaved deionized water to make a 10⁻² dilution. The procedure was repeated to obtain a 10⁻⁶ dilution. A 100 µL aliquot from each dilution of the soil bacterial suspension was taken using a sterilized nozzle and dropped onto LB medium (Bertani, 1951). Afterwards, the bacteria were dispersed evenly over the agar plate with a spreader. These plates were incubated at 28 ± 2 °C for 24-48 hours. After proper growth, further purification was carried out by repeated streaking of colonies on LB agar medium to obtain pure growth. The procedure was repeated twice or thrice to get purified strains. Finally, 15 pure strains were preserved in broth at −20 °C in Eppendorf tubes for further characterization (Kapoore et al., 2019).

Detection of rhizobacteria for the fluorescent pigment: King's B medium was used for the detection of fluorescent pigment production by the microbes (King et al., 1954). The efficacy of this medium was examined by supplementing it with extra iron. The composition of the medium per litre was 20 g of Bacto peptone (Difco), 1.5 g of dipotassium hydrogen phosphate, 1.5 g of magnesium sulphate heptahydrate, 15 mL of glycerol, and 1.5 g of agar, with 5 µmol and 50 µmol iron supplementation. After sterilization of the medium, the purified bacterial colonies were inoculated on petri plates containing the solidified media, and after 48 hours the production of fluorescent pigment was observed under UV light.

Estimation of siderophore production:

Qualitative estimation: Chrome Azurol S (CAS) plates containing medium with low iron content were used for screening the isolates. For spot inoculation of the various bacterial isolates, 5 places were marked at equal distances on the plates (CAS-agar medium). The plates were kept in the incubator at 28 ± 2 °C for 48 h for proper bacterial growth. The presence of halo zones around the colonies was used as an indicator of siderophore production. The whole estimation followed the method described by Milagres et al. (1999).

Quantitative estimation: The CAS-shuttle assay was employed to quantify siderophore (SP) production (Kotasthane et al., 2017). For the quantification assay, a 0.5 mL aliquot of culture filtrate was put into a test tube along with 0.5 mL of CAS reagent. Likewise, 0.5 mL of CAS reagent was added to an uninoculated blank. The resulting colour change was measured colourimetrically. The percentage of siderophore units was calculated using the following equation (Kotasthane et al., 2017):

% siderophore units (SU) = [(Ar − As)/Ar] × 100

where As is the sample absorbance at a wavelength of 600 nm and Ar is the reference (blank) absorbance at 600 nm.

Growth-promoting trait: Auxin biosynthesis was measured as IAA equivalents using L-tryptophan (Trp) and L-tryptamine (Trt) as hormone precursors.
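The CAS-shuttle calculation above is a one-liner; the sketch below implements it in Python, with absorbance values that are hypothetical numbers chosen only to reproduce the 72.0% reported later for SB10, not measurements from this study.

def siderophore_units(a_sample: float, a_reference: float) -> float:
    """Percent siderophore units from the CAS-shuttle assay:
    SU% = (Ar - As) / Ar * 100 (Kotasthane et al., 2017)."""
    return (a_reference - a_sample) / a_reference * 100.0

# Hypothetical OD600 readings (illustrative only):
print(round(siderophore_units(a_sample=0.28, a_reference=1.00), 1))  # -> 72.0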
Basic medium ingredients used for the general-purpose media included glucose (0.375 g), di-potassium hydrogen phosphate (0.125 g), magnesium sulphate heptahydrate (0.025 g), iron sulphate heptahydrate (traces), and ammonium sulphate (0.125 g) in 250 mL of distilled water. Around 10 mL of broth was taken in test tubes and sterilized. The isolates were inoculated along with 1% precursors and incubated at 28 °C for 24 hours. The concentration of IAA equivalents was measured on a spectrophotometer at 540 nm after centrifugation and colour development with the Salkowski reagent (Brick et al., 1991).

Assessing the efficiency of fluorescent-producing microbes under controlled conditions: For screening, a germination experiment was carried out in a growth chamber. Bacterial isolates were selected based on their growth-promoting performance under axenic conditions in the growth chamber. For the lab screening experiment, the 7 top-performing isolates were chosen on the basis of their microbial characteristics. An inoculum of each bacterial strain was prepared for the germination test in 250 mL volumetric flasks. The flasks were kept in an incubator for 3 days with continuous shaking. Surface sterilization of seeds was done using 3% hypochlorite solution (for 2-3 minutes). Before dipping in the respective bacterial inoculums, the seeds were washed three times with deionized water and then spread on moist filter paper sheets. The sheets were kept moist and covered with polythene in a growth chamber under proper light and temperature. Growth parameters were recorded 10 days after seed sowing. Agronomic traits were measured, including root length, shoot length, and shoot fresh weight.

Assessing the efficiency of fluorescent-producing microbes under field conditions: The field study was conducted during 2021-22 to analyze the effectiveness of the siderophore-producing bacteria in chelating insoluble iron and their ultimate impact on the growth and yield of cereals (wheat and maize). The wheat variety Galaxy 2013 and the maize variety Malka 2016 were used in the field experiment. In the control (un-inoculated) treatment, seed coating was carried out using a mixture containing 10% sugar solution + sterilized broth + sterilized peat. Seed inoculation of the experimental units was done with peat containing the siderophore (SP)-producing bacteria plus sterilized 10% sugar (sucrose) solution at a 10:1 ratio. The recommended doses of NPK fertilizers (110-46-25 kg/ha for maize and 120-90-60 kg/ha for wheat) were applied at sowing. Wheat was sown manually, with seeds dibbled at 6 inches depth in six rows with 9 cm plant-to-plant distance and 30 cm spacing between rows. Maize seeds were dibbled on 5 ridges (7 ft × 13 ft) at a plant-to-plant distance of 12 inches. The soil used for the experiment was free from salinity and sodicity hazards and deficient in organic matter, while phosphorus, potassium, and iron contents were sufficient. Eight treatments (T1 = control, T2 = SB1, T3 = SB2, T4 = SB3, T5 = SB4, T6 = SB5, T7 = SB9, and T8 = SB10) were applied using a Randomized Complete Block Design (RCBD) with three replicates. The parameters (grain weight, plant height, and grain yield) were recorded at harvest. Grain analysis for iron was done through wet digestion using a di-acid mixture, and contents were determined using an atomic absorption spectrophotometer. Statistical analysis was performed using Statistix v. 8.1 (Steel et al., 1997).
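For reference, the RCBD analysis described above can be run in a few lines; this is a generic sketch with simulated yields (the study's own data and Statistix output are not reproduced here), using Python's statsmodels in place of Statistix.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
treatments = ["T1", "T2", "T3", "T4", "T5", "T6", "T7", "T8"]
blocks = ["I", "II", "III"]

# Simulated wheat grain yields (t/ha); placeholder values, not the study's data
df = pd.DataFrame({
    "treatment": treatments * len(blocks),
    "block": [b for b in blocks for _ in treatments],
    "grain_yield": rng.normal(loc=3.0, scale=0.3, size=len(treatments) * len(blocks)),
})

# Two-way ANOVA for a randomized complete block design: treatment + block effects
model = smf.ols("grain_yield ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))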
Results And Discussion

Siderophore-producing, microbially mediated biofortification is an emerging approach to overcoming malnutrition. PGPR can fortify Fe in the soil rhizosphere through SP production and Fe solubilization. The present study was conducted to isolate and purify siderophore-producing bacteria that can produce fluorescent pigment and improve the growth and yield of wheat and maize.

Biochemical characterization: The results of fluorescent pigment production under UV light showed that the ability of rhizobacteria to excrete fluorescent pigment depends on the synthetic medium used for the growth of the microbes (Table 1). Determination of microbial characteristics showed that, of the 50 collected strains, 15 produced a higher amount of siderophore and fluorescent pigment when examined under UV light. All the strains showed a positive response for pigment production except the control, indicating a resemblance to the typical characteristics of Pseudomonas fluorescens. The maximum fluorescent pigment was observed in SB10, followed by SB9, when supplemented with 50 µmol of iron chloride (Table 1). Siderophores improve iron nutrition through iron chelation in the rhizosphere. These iron-chelating siderophores also reduce the availability of iron to pathogens. During the qualitative test, siderophore detection was marked by transparent halo zones around the colonies. Seven isolates out of 15 were found capable of producing siderophores. In the quantitative measurement by spectrophotometer, the maximum colour change was observed in SB10 (72.0%), followed by SB5 (64.5%). The siderophore unit percentages ranged from 33.8% to 72.0% (Table 1). Similar results were reported by Kumari et al. (2021), with a maximum siderophore production of 46.2 SU%. Moreover, the results for IAA equivalents indicated that the precursors induced auxin biosynthesis in all strains (Table 1). Maximum auxin biosynthesis with L-tryptophan was observed in SB3 (6.38 µg/mL), and the same strain produced the most auxin when augmented with L-tryptamine (7.35 µg/mL).

Efficiency character assay: Inoculation with siderophore- and fluorescent pigment-producing bacteria had a positive effect on maize seed germination when the assay was conducted under controlled axenic conditions (Fig. 1). The data show that all the strains were statistically on par with each other in increasing shoot length, but statistically significant compared to the control. The maximum shoot lengths of maize were observed with strains SB10 (29.5 cm) and SB9 (28.6 cm). The maize plant has a root system consisting of embryonic primary and seminal roots and post-embryonic roots. Similarly, inoculation with the selected bacterial isolates improved root length most with SB9 and SB10 compared to the other treatments. SB3, SB4, and SB5 were on par with each other for improvement in root length. The germination test assay showed increased shoot/root length and weight in inoculated seeds over uninoculated (control) seeds. Satish et al. (2020) also reported improvement in plant growth after contact of the soil microbiota with plant roots. Similarly, Kaur et al. (2020) reported that deficiencies of essential micronutrients (vitamin A, Zn, and Fe) can be alleviated by inoculating seeds with specific microbes. He et al. (2020) observed that some wheat-associated microbes, primarily the microbes of the rhizospheric soil, produce SP and other metabolites, which increase the solubility of Fe in the soil.
The current scenario may imply that the increase in physiological traits is due to increased phosphorus solubilization, solubilization/uptake/translocation of iron, and auxin and phytohormone production (Yavarian et al., 2021; Etesami, 2020; Mushtaq et al., 2021; Delaporte-Quintana et al., 2020). The addition of plant growth regulators (PGRs) to plant growth-promoting rhizosphere bacteria (PGPRs) showed improvements in chlorophyll content, leaf area, and sugar content, and reductions in oxidative stress and lipid peroxidation. The findings of the current study are also in line with the observations of Ekin (2019).

Findings of the field experiment: Biofortification of wheat (Triticum aestivum L.) through seed inoculation with siderophore-producing bacteria is an alternative approach to addressing micronutrient deficiency in the human diet in rural areas (Ehsan et al., 2022; Riaz et al., 2020). Radzki et al. (2013) reported that low-molecular-weight SP binds Fe and transports it into root cells via membrane proteins. The field experiment results revealed that inoculation with iron-chelating, siderophore-producing bacteria improved yield and yield attributes. The data in Figure 2 show the effect of siderophore-producing bacteria on the plant height of both crops. The maximum height in maize (255 cm) was observed where T6 was used as inoculum. In the case of wheat, the maximum height (110 cm) was found with the T7 strain. A significant increase in the grain yield of cereals by siderophore-producing microbes is shown in Figure 3. The maximum maize yield (5.93 t/ha) was found in T7 (SB9), while in wheat T8 boosted the yield to 3.48 t/ha. The application of microbial inoculants along with the recommended dose of mineral fertilizers improved grain weight in both crops. The increase in grain weight was in the range of 32.6-37.1 g for maize and 26.7-33.9 g for wheat, statistically significant over control (Figure 4). These outcomes are in harmony with the results of Yadav et al. (2020) and Singh et al. (2020). Like the current study, Khalid et al. (2015) found a 13-18% increase in wheat grain yield, 12-16% in plant shoot, 6-11% in root length, and 34-60% in chlorophyll contents with the inoculation of siderophore-producing Pseudomonas in wheat. Plant-microbe interaction is a basic factor contributing to improvement in productivity, plant health, and soil fertility, as already observed in the case of potato (Solanum tuberosum L.) by Mushtaq et al. (2020). Kabiraj et al. (2020) found that bacterial inoculants can significantly improve agronomic parameters, which helps reduce the cost of production and environmental pollution. Microbes have proven to be a cost-effective, efficient, promising, and sustainable approach that can contribute to plant development. Results regarding iron contents in grain are shown in Figure 5. Both crops showed positive responses to iron biofortification, with increases in iron contents in the range of 12.35-28% (wheat) and 40.3-67.7% (maize). Yield-contributing parameters, namely spike length (wheat) and cob length (maize), are given in Figure 6. The results indicated that SB8 promoted the development of the productive part of the cereal crops, with a 25% increase in the spike length of wheat and a 33% increase in the cob length of maize as compared to control. All the strains significantly increased the cob length of maize compared to the check treatment.
In the case of spike length, however, SB8, SB6, and SB2 showed a remarkable impact compared to the other strains in developing the productive part of the wheat crop. Similarly, the number of grains per cob/spike was recorded (Fig. 7). The data revealed a significant effect of fluorescent-producing PGPR with siderophore-producing ability in increasing the number of grains per spike/cob. The maximum grain count per spike was found in SB8 (75.6), followed by SB6 (74.6) and SB7 (74.5). Mushtaq et al. (2021) reported that microbes promote nutrient concentrations, plant physiological processes, plant development, yield, and growth through various direct or indirect mechanisms, such as the production of hormones including cytokinins, gibberellins, and auxin (IAA). Similar findings were reported by Ehsan et al. (2022). The field studies demonstrated around a 24% increase in wheat grain yield, a 30% increase in plant height, and 25% increases in grains/spike and iron contents after inoculation with siderophore-producing bacteria. Such increases in germination and yield attributes provide a baseline for testing these siderophore-producing and iron-solubilizing PGPR on other cereal crops. Furthermore, around a 60% increase in grain Fe contents, a 13% increase in thousand-grain weight (TGW), a 33% increase in cob length, and a 20% increase in grain yield of maize were observed in inoculated plants compared with the uninoculated control. Likewise, a combination of six strains, including Bacillus subtilis and Pseudomonas fluorescens, showed high siderophore production and increased antioxidant activity, which reduced fungal infection in maize and improved its yield (Lopez-Reyes et al., 2017; Ghazy and El-Nahrawy, 2021). Zarei et al. (2022) reported that a combination of four strains of Pseudomonas fluorescens significantly increased the yield traits of sweet corn and canned seed yield by reducing harmful effects and improving crop productivity.

Conclusions

The findings of this study show that biofortification of cereals through seed inoculation with siderophore (SP)-producing microbes with fluorescent-producing character can increase the solubilization of insoluble Fe and bring about improvement in the growth and development of cereals in alkaline calcareous soil.
2023-05-29T15:09:46.652Z
2023-05-26T00:00:00.000
{ "year": 2023, "sha1": "9acfc7d8d91253695424966fd63354f72df20002", "oa_license": "CCBY", "oa_url": "https://joarps.org/index.php/ojs/article/download/168/109", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "cb1637382e5455e2ba7e108bdd615e1ea95a6c4b", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
52075738
pes2o/s2orc
v3-fos-license
Main Risk Factors Association with Proto-Oncogene Mutations in Colorectal Cancer

Objective: Although several factors have been shown to have etiological roles in colorectal cancer, few investigations have addressed how and to what extent these factors affect the genetics and pathology of the disease. Precise relationships with specific genetic mutations that could alter signaling pathways involved in colorectal cancer remain unknown. We therefore aimed to investigate possible links between lifestyle, dietary habits, and socioeconomic factors and specific mutations that are common in colorectal cancers. Methods: Data were retrieved from a baseline survey of lifestyle factors, dietary behavior, and SES, as well as anthropometric evaluations during a physical examination, for 100 confirmed primary sporadic colorectal cancer patients from Northwest Iran. Results: High socioeconomic status was significantly associated with higher likelihood of a KRAS gene mutation (P < 0.05) (odds ratio: 3.01; 95% CI: 0.69–13.02). Consuming carbohydrates and alcohol, working less, and having a sedentary lifestyle also increased the odds of having a KRAS mutation. Conclusion: Although research has not yet described the exact relationships among genetic mutations with different known risk factors in colorectal cancer, examples of the latter may have an impact on KRAS gene mutations.

Roya Dolatkhah, Mohammad Hossein Somi, Reza Shabanloei, Faris Farassati, Ali Fakhari, Saeed Dastgiri

Few studies have examined the relationships among genetic mutations and nutritional factors, including processed meat, tea, coffee, tobacco, and alcohol consumption, and lifestyle in CRC (Kin et al., 2013; Cummings and Bingham, 1998; Kratz et al., 2007; Kamal et al., 2012). In addition, there is evidence that a sedentary lifestyle and prolonged immobility increase the risk of CRC (Samowitz et al., 2006; Slattery et al., 2007). However, the molecular mechanisms underlying these relationships with the development of CRC are not completely understood. Socioeconomic status (SES) can be strongly related to habits, behaviors, and decisions in life. The results of previous studies of SES and the risk of developing CRC have shown that higher socioeconomic classes have higher CRC risks (Wu et al., 2006; Aarts et al., 2010; Boyle et al., 2014). Activation of the Kirsten rat sarcoma viral oncogene (KRAS), resulting in activation of mitogen-activated protein kinase (MAPK) signaling, occurs early in the progression of several cancers (Borras et al., 2011; Dolatkhah et al., 2015). In our previous work, we showed that KRAS mutation was significantly associated with poor prognosis in CRC (Dolatkhah et al., 2016). The purpose of this study is to investigate the following research question: in a Persian population at 2 urban hospitals, what is the risk of genetic mutations associated with socioeconomic and lifestyle factors, as well as dietary habits, in CRC patients?

Patients and Design

We considered 147 patients with suspected CRC during colonoscopy from the 2 most-referred general hospitals (Imam Reza and Sina) of Tabriz, Northwest Iran, with Azeri ethnicity. The diagnosis of CRC was histologically confirmed by an expert pathologist. According to the inclusion and exclusion criteria of this study, we included any cases with confirmed primary and sporadic colorectal cancer, of any morphologic type.
We excluded patients with any dysplastic polyps, secondary CRC, or hereditary CRC disease, leaving 100 patients with confirmed colorectal cancer eligible for our investigation. All 100 patients were enrolled in this study over 2 years, between 2013 and 2015.

Ethical Approval

The study protocol was approved by the Ethics Committee of Tabriz University of Medical Sciences (Permit Number: 5.74.1235). Informed consent was obtained from all individual participants included in the study.

Molecular Tests

Sampling and testing were done as described in previous reports (Dolatkhah et al., 2016; Dolatkhah et al., 2017). Briefly, neoplastic and non-neoplastic tissue samples, measuring 2-4 mm, were obtained from the colon and rectum of patients during colonoscopy. These were sent to a molecular laboratory, coded, and stored at −80 °C until testing. For molecular testing, we extracted genomic DNA from the tissue samples according to the manufacturer's guidelines (CinnaGen Company) and used it for PCR reactions with primers encompassing KRAS exon 2 (codons 12, 13) and BRAF exon 15 (codon 600) (Borras et al., 2011). Sanger sequencing was performed after purification with a PCR Product Purification Kit (MBST, Tehran, Iran); sequencing used the forward primers of the PCR amplification on an Applied Biosystems Genetic Analyzer (ABI 4-capillary 3130 Genetic Analyzer).

Measures

Data were retrieved from a baseline survey of lifestyle factors, dietary behavior, and SES, as well as anthropometric evaluations during a physical examination of the CRC patients. The lifestyle assessment measured 8 standard variables: smoking, drinking alcohol, eating breakfast, sleep time, physical exercise, work time, sedentary behavior, and life experiences. The lifestyle data were obtained during an interview and by a standard questionnaire based on the references, which had been validated previously (Sarrafzadegan et al., 2009; Xu et al., 2012). Alcohol consumption was assessed as never, low, moderate, or high levels of drinking of any type of alcohol per week (Xu et al., 2012). Diet was assessed in the context of the 5 main food groups, using Data Into Nutrients for Epidemiological Research (DINER) to evaluate the subjects' 7-day, 24-hour dietary behaviors (Welch et al., 2001); this was validated during our previous study (Dolatkhah et al., 2016). The participants' intakes of vegetables and fruits; meat, including red and white meat, fish, and processed meat; bread, including white, whole grain, and oat bread; other carbohydrates such as rice, macaroni, and sweets; and caffeine sources such as black tea, green tea, and coffee were assessed as defined meals per day, described as either weight or volume for each food group. The SES information was obtained using a researcher-designed questionnaire, validated during our previous study (Dolatkhah et al., 2016), which contains questions about residency, occupation, education, and income satisfaction. The response options were designed as Likert scales, and each variable was scored from best to worst status. Subjects' educational levels were surveyed in categories ranging from illiterate to college diploma or higher. Income satisfaction, rather than income amount, was surveyed, because income amount is not a good determinant for this topic in the country. It was determined by the patients' response to the question, "What is your income satisfaction level based on your total monthly family income from all sources?" The responses ranged from satisfied to without satisfaction or without income.
Patients were surveyed for occupation in categorical levels: full time, part time, retired, and unemployed. The residency status of patients was recorded as private home, rental home, or unstable residence. The total score across all criteria for each individual indicated the SES. The cut-points used for SES were: very good for 1-5, good for 6-10, moderate for 11-15, and bad for >15 scores.

Analytical Model

All analyses were performed using IBM SPSS, Version 19.0 (IBM Corp., Armonk, NY, USA). Descriptive statistics (mean, standard error, and percentage) were calculated to describe the CRC patients' characteristics. Logistic regression models were fitted to assess the likelihood of the presence of a mutation given health-risk behaviors, including lifestyle, dietary, and socioeconomic factors. In the first step of the logistic regression, the association of each variable was analyzed separately, with the presence of mutations as the dependent variable. Adjusted odds ratios (ORs) with 95% confidence intervals (CIs) were estimated in multivariate analyses, with adjustment for age, sex, body mass index (BMI), marital status, and the other variables mentioned. To obtain a more stable regression analysis for some variables, such as work, sleep, smoking, alcohol consumption, and other lifestyle habits, we categorized these variables into groups of acceptable size. For example, the circadian sleep variable was categorized as a normal sleep pattern (6-8 h of sleep) or an abnormal sleep pattern (<6 or >8 h of sleep).

Study Participants' Descriptive Characteristics

Of the 100 patients included in the study, 65 were men and 35 were women.

Risk Factors Associated with Specific Mutations in CRC

Twenty-six (26%) cases had heterozygous mutant KRAS, of which 16 patients had mutations in codon 12, nine in codon 13, and one in codon 10, as reported previously (Dolatkhah et al., 2017). BRAF mutations were not detected in the amplified exon in any of the studied cases. Males smoked more (43.8% of males vs. 11.1% of females), and alcohol consumption, reported by 19% of patients, was more common among men (28% vs. 2.8%, respectively). Males had more sporting and working time and reported better life satisfaction on the SES questionnaire. The use of black tea is historically and culturally very common in our population; 95% of the patients in the study reported drinking black tea, with a frequency of at least 3 cups/day in 52% of them. In total, 78 of our patients were married, and most of the patients (88%) had a private home. Most of the patients were not satisfied with their income or had no income (52%), of whom the majority were female. SES was very good or good in 40% of patients, 59% had a moderate SES, and only 1 patient reported a bad SES (Table 1).

Analytical Statistics Related to KRAS Mutation

The fully adjusted regression model showed that a sedentary lifestyle had a direct impact on mutation: patients who were more often or always sedentary had a higher likelihood of KRAS gene mutation (OR: 1.46; 95% CI: 0.75-2.86). Patients who worked <6 h/day were more likely to carry a mutation, with an adjusted OR of 2.11 (95% CI: 0.81-3.66). Alcohol consumption was associated with a higher likelihood of KRAS gene mutation (OR: 1.85; 95% CI: 0.80-4.29) (Table 2).
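As an illustration of the modeling step described above, the sketch below fits a logistic regression of mutation status on a few of the covariates and converts the coefficients to adjusted ORs with 95% CIs. The data frame is simulated, since the patient-level data are not public, and Python's statsmodels stands in for SPSS.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 100
df = pd.DataFrame({
    "kras_mut": rng.integers(0, 2, n),   # 1 = KRAS mutation present (simulated)
    "age": rng.normal(60, 10, n),
    "sex": rng.integers(0, 2, n),        # 1 = male
    "bmi": rng.normal(26, 4, n),
    "sedentary": rng.integers(0, 2, n),  # 1 = often/always sedentary
    "alcohol": rng.integers(0, 2, n),    # 1 = any alcohol use
})

fit = smf.logit("kras_mut ~ age + sex + bmi + sedentary + alcohol", data=df).fit(disp=0)
# Exponentiate coefficients and confidence bounds to obtain adjusted ORs
ors = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
ors.columns = ["OR", "2.5%", "97.5%"]
print(ors.round(2))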
Patients consuming little red meat (but not processed or grilled meats) had a higher likelihood of mutation (OR: 2.29; 95% CI: 1.15-4.55), and carbohydrate intake, such as rice, macaroni, and sweets, at least once daily increased the likelihood of mutation about 4-fold (OR: 3.56; 95% CI: 1.13-11.21). The likelihood of KRAS gene mutation in patients who drank at least 3 cups of black tea daily was 17% higher, but this was not statistically significant (OR: 1.17; 95% CI: 0.56-2.43) (Table 3). Fully adjusted regression analysis showed that more highly educated patients (high school or university education) had decreased odds of a KRAS mutation (OR: 0.80; 95% CI: 0.31-2.05). None of the other above-mentioned socioeconomic variables had a statistically significant association with mutation status.

Discussion

To the best of our knowledge, this is the first study in the country seeking to identify the relationships of lifestyle and nutritional factors and SES with the occurrence of specific gene mutations involved in CRC. We tried to obtain realistic evidence on the SES of CRC patients in this survey, which is a valuable and interesting aspect of this study. The questionnaires contained questions derived from relevant reference studies and had already been validated (Dolatkhah et al., 2016). The small sample size of this study was among the main constraints, owing to the technical difficulties of collecting fresh tissue samples from newly diagnosed CRCs. Also, we performed double sequencing only for KRAS-positive samples, to ensure the accuracy of the tests, because of limitations of time and budget. In a case study conducted on 108 patients suffering from sporadic CRC, a relationship among high unsaturated fat levels, low calcium levels, and an increase in KRAS gene mutations was reported (Bautista et al., 1997; Slattery et al., 2000; Weijenberg et al., 2007; Naguib et al., 2010; Brandstedt et al., 2014). In another study, conducted by Martinez et al., there was no relationship between various mutations and nutritional factors (Martinez et al., 1999). In a study of a large number of nutritional factors in 1,428 CRC patients, a significant relationship between vegetable consumption and KRAS gene mutations was found: those who consumed fewer vegetables had more mutations. High carbohydrate and refined grain intakes were also related to a higher risk of KRAS gene mutations, particularly at codon 13. In addition, that study observed that the consumption of processed meats was related to certain mutations of the KRAS gene (Slattery et al., 2007). Although there are conflicting findings about the association of red meat intake with the risk of colorectal cancer (Potter, 2017), it is clear that cooking methods and the type of meat (processed meat) are major factors associated with the risk of mutation in CRC. Also, important measurable nutrients found in meat (especially iron and fat) have recently received considerable attention in association with specific mutations involved in the molecular pathways of colorectal carcinogenesis. Gilsing et al. (2013) specifically observed a dose-response relation between heme iron intake and activating G>A mutations in the KRAS gene, and overall G>A mutations in the APC gene. By adjusting the association for total meat intake, we found contradictory results in our survey: patients with very low intake of red meat had a higher likelihood of mutation than patients who had at least one meal of meat per day (OR: 2.29; 95% CI: 1.15-4.53).
This may be because of the different cooking methods common in Iran (not processed or grilled meats) and the different types of meat consumed in the country (mostly lamb and sheep meat, and less beef, veal, and pork). The relationship between anthropometric factors and KRAS mutations was investigated in a study conducted in Sweden in 2014: high weight-to-height ratio, BMI, and obesity were related to increased odds of KRAS gene mutations, and interaction analysis showed a significant difference between men and women in the BMI-KRAS mutation association, with men showing a stronger relationship (Brandstedt et al., 2014). Because KRAS gene mutations occur at the primary stages of CRC carcinogenesis, it seems that obesity biologically influences the risk of mutations related to tumor status. In our study, no meaningful relationship was found between BMI level and KRAS gene mutation occurrence, but each unit of BMI increase was associated with a 3% higher likelihood of KRAS gene mutation. The potential relationship among BMI, overweight, and genetic mutations demands further research. According to epidemiological population-based studies, a clear relationship has been found between the levels of tobacco and alcohol consumption and the risk of CRC. However, few studies have been conducted on the molecular relationships among smoking, alcohol use, mutations, and CRC pathogenesis. Smoking has been linked to an increased occurrence of CpG island methylator phenotype-high (CIMP-high) tumors and, along with alcohol consumption and DNA mutation, to epigenetic changes in CRC. In addition to the fact that alcohol consumption (particularly beer) has been related to increased CRC risk, it has been recognized that high levels of alcohol consumption and a diet poor in folate, B12, and B6 promote DNA methylation abnormalities and increase the likelihood of CpG island methylator phenotype cancers (Samowitz et al., 2006; Slattery et al., 2007). Based on the logistic regression results of the present study, alcohol consumption (even at medium or low levels) was associated with about double the likelihood of KRAS gene mutations. In addition, smokers with CRC had 28% higher odds of KRAS gene mutation compared to non-smokers. A sedentary lifestyle with low activity will also increase BMI. A close relationship has been observed between the stage and grade of CRC and increases in the hours spent in sedentary leisure activities over 6 h/day (Boyle et al., 2014; Yu et al., 2014). However, it appears that there has not been any comprehensive study of the relationship between sedentary lifestyle and genetic mutations. Only one study, in China, has investigated the relationship between lifestyle and the levels of certain mRNA-related molecular biomarkers; it showed that a sedentary lifestyle has a direct relationship with the levels of these biomarkers, with more hours without mobility associated with higher levels (Yu et al., 2014). The most interesting finding of our analysis was that the extent of sedentary lifestyle had a direct effect on the odds of KRAS gene mutations: a low-activity or completely sedentary lifestyle increased the likelihood of KRAS gene mutation by about 50%, in line with previous observations (Boyle et al., 2011). According to our results, patients who worked more than 6 hours per day had half the likelihood of KRAS mutations.
Therefore, the data suggest that programs to control and prevent CRC should emphasize increasing physical activity and keeping body weight within a healthy range (Boyle et al., 2011; Yu et al., 2014). The present research found different relationships between SES and CRC molecular pathways compared to previous studies. We found that patients with very good or good SES had a 3 times higher likelihood of KRAS gene mutation. This study and other research suggest that there are greater CRC risks among people in higher socioeconomic classes, due to increased exposure to risk factors such as a sedentary lifestyle, alcohol and tobacco consumption, high-calorie diets, and more red meat consumption (Faggiano et al., 1997; Rohani-Rasaf et al., 2013). Further studies are warranted regarding whether this exposure could explain the higher chances of genetic mutations among these individuals. In conclusion, it is critical to plan CRC screening and prevention programs based on the most prominent risk factors for the disease and on these factors' interactions with genetic mutations. For CRC, these factors include a person's lifestyle, dietary habits, environment, and cultural factors. Most deaths resulting from CRC can be prevented. The prevention strategies for this form of cancer include an appropriate diet, increasing physical activity, and attaining and maintaining a healthy weight. Of course, research has not yet described the exact relationships among genetic mutations, such as those affecting KRAS and BRAF, environmental and nutritional risk factors, and the stage and prognosis of CRC. Lastly, based on this study's support for a relationship between the frequency of KRAS gene mutations and factors such as race, ethnicity, geographic location, lifestyle, and diet, future studies with larger sample sizes are necessary to further explore these concepts.

Author contributions

RD: substantially contributed to the conception and design of the study, the acquisition of data, and the analysis and interpretation of data; drafted and wrote the article; gave final approval of the manuscript; and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. MHS: substantially contributed to the conception and design of the study; drafted and revised the article; gave final approval of the manuscript; and agreed to be accountable for all aspects of the work. IAK, RS, and FF: substantially contributed to the conception and design of the study; revised the article; gave final approval of the manuscript; and agreed to be accountable for all aspects of the work. AF and SD: substantially contributed to the conception and design of the study and the analysis and interpretation of data; revised the article; gave final approval of the manuscript; and agreed to be accountable for all aspects of the work.

Disclosure

The abstract of this paper was presented at IARC @ 50, Global Cancer Occurrence, Causes, and Avenues to Prevention, 2016, as a poster presentation with interim findings.
The poster's abstract was published in "Poster Abstracts" in the IARC Abstract Book, available from: www.iarc-conference2016.com.

Conflicts of interest

The authors report no conflicts of interest in this work.
2018-08-25T21:43:14.175Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "90d0d481d5ab0187ad7d0dc7c5fbdd1d635e05ba", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "90d0d481d5ab0187ad7d0dc7c5fbdd1d635e05ba", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247628716
pes2o/s2orc
v3-fos-license
Synthesis of a biomimetic zwitterionic pentapolymer to fabricate high-performance PVDF membranes for efficient separation of oil-in-water nano-emulsions

Oily wastewater from industries has an adverse impact on the environment and on human and aquatic life. Poly(vinylidene fluoride) (PVDF) membranes modified with a zwitterionic/hydrophobic pentapolymer (PP) and with controlled pore size have been utilized to separate oil from water in their nano-emulsions. The PP was synthesized in 91% yield via pentapolymerization of four different diallylamine salts [(CH2=CHCH2)2NH+(CH2)x A−], bearing CO2−, PO3H−, SO3−, and (CH2)12NH2 pendants, and SO2 in a respective mol ratio of 25:36:25:14:100. Incorporating PP into PVDF substantially reduced the membrane hydrophobicity; the contact angle decreased from 92.5° to 47.4°. The PP-PVDF membranes demonstrated an excellent capability to handle high concentrations of nano-emulsions, with a separation efficiency greater than 97.5%. The flux recovery ratio (FRR) of the PP-5-incorporated PVDF membrane was about 82%, substantially higher than that of the pristine PVDF.

Oily wastewater is one of the major contributors to environmental pollution, and it has become a significant concern owing to its adverse impact on the ecosystem. The main industrial contributors to oil pollution include, but are not limited to, petrochemicals, electroplating, mining, and gas/oil production units 1. The high demand for oil requires its rapid offshore movement, and nature has witnessed several deadly oil spills; for instance, the Deepwater Horizon oil spill in the Gulf of Mexico highlighted the difficulty of oil-water separation 2. Owing to its high significance, the development of advanced technologies and methods for the reclamation of water from oil-contaminated water has become an area of deep concern. Conventional methods such as adsorption, air floatation, centrifugation, and gravity settling may require high energy and are inefficient in separating oil from water 3. For oily emulsions, membranes are considered effective in separating the oil from the water based on size sieving. Greater control of the membrane pore size is required for the separation of nano-emulsions, as they consist of nano-sized oil droplets that easily pass through coarse membranes. Several super-selective materials and surfaces have been reported for oil/water separation. These materials have been used in the forms of meshes 4, cotton 5, foams 6, sponges, and woven/non-woven fabrics 7. Most of these designed materials effectively separate floating oils but may face challenges in separating emulsions. The nature of an oil/water mixture can be classified by the dispersed-phase diameter: if the diameter is more than 150 μm, the mixture is termed free oil and water; the term dispersion is used when it is in the range 20-150 μm; and emulsions are generally defined when the diameter is < 20 μm 2. Due to the complex nature of emulsions, conventional techniques such as skimming and gravity separators are ineffective for separating them. Chemical emulsion breaking may be effective, but its high operation costs and significant drawbacks limit its application.

Materials. 2,2′-Azobisisobutyronitrile (AIBN) (≥ 98%) was purchased from Fluka Chemie AG and crystallized from chloroform-ethanol.
Dimethylsulfoxide (DMSO) (≥ 99.5%), N,N-dimethylacetamide (DMA), bovine serum albumin (BSA), and the surfactant TWEEN® 80 were purchased from Sigma-Aldrich. All water used was of Milli-Q quality. A Spectra/Por (Spectrum Lab., Inc.) membrane (MWCO 6000-8000) was used for dialyses. Monomers 1 33, 2 34,35, 3 36, and 4 37 were synthesized as reported. The Alfa Aesar-44080 poly(vinylidene fluoride) with a molecular weight of 350 kDa was used 38.

Synthesis of PP 5. Into a solution of 1 (1.83 g, 10 mmol), 2 (3.28 g, 14.4 mmol), 3 (2.19 g, 10 mmol), and 4 (1.98 g, 5.6 mmol) in DMSO (11 g) in an RB flask was absorbed SO2 (2.56 g, 40 mmol). The initiator AIBN (180 mg) was added to the solution, which was stirred at 65 °C for 24 h. The thickened mixture (turbid in water) was dialyzed against distilled water for 36 h, during which the polymer solution became turbid and separated as a thick gel at the bottom of the dialysis tube. The whole mixture was freeze-dried to obtain 5 as a white solid (10.3 g, 91%). Note that an HCl unit is depleted from monomer 2 during dialysis to give the zwitterionic motifs in the repeating units of PP 5. It is calculated that 283.0 mg of PP 5 contains 0.25 mmol of repeating unit 1, 0.36 mmol of 2 (−HCl), 0.25 mmol of 3, 0.14 mmol of 4, and 1.0 mmol of SO2.

Preparation of oil-in-water emulsion. Oil-in-water emulsions were prepared by adding diesel (1 g) into water (1 L) in the presence of the surfactant Tween-80. The mixture was kept under vigorous stirring at 600 rpm at room temperature for 12 h and was then sonicated for 2 h. The average size of the oil-in-water emulsion droplets was about 92.71 nm, as measured with a Malvern Zetasizer (Fig. S1). The oil/water separation, water flux, and antifouling performance of the pristine and PP 5-PVDF membranes were evaluated by fitting the membranes into a dead-end filtration unit. A nitrogen cylinder was used to supply and adjust the trans-membrane pressure. The membranes were pre-compacted with pure water at a pressure of 8 bar for 1 h. The oil residue in the permeate was determined by UV-Vis spectrophotometry.

Results and discussion

Synthesis and characterization of PP 5. We set out to synthesize a new zwitterionic pentapolymer (PP) 5 (Fig. 1) containing a variety of chelating motifs, CO2−, PO3H−, SO3−, and NH2, using a cyclopolymerization protocol, in pursuit of our planned modification of the PVDF membrane for separation of nano-emulsions. The work is inspired by the zwitterionic phosphatidylcholine headgroups, which contain cationic and anionic groups and are present in the cell membrane phospholipid bilayer. This zwitterionic character gives the cell membrane its fouling-free behavior. Naturally occurring zwitterions, very common in cell membranes, proteins, etc., motivated us to synthesize PP 5, which has pH-responsive zwitterionic motifs conferring charge neutrality, high hydrophilicity, strong dipole pairs, etc. A tightly and stably bound water layer near zwitterionic polymers, formed via strong electrostatically induced hydration, imparts the antifouling property of zwitterionic polymers by increasing the energy barrier for the adsorption of foulants 39. AIBN-initiated cyclopolymerization of monomers 1, 2, 3, 4, and SO2 afforded PP 5 in an excellent yield of 91% (Fig. 1). Since the reactivity ratios of SO2 and the diallyl monomers are almost zero, the PP will have the monomers in random distribution alternated by SO2. At such a high conversion, the feed ratio is expected to match the monomer incorporation ratio.
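The mass balance behind the "283.0 mg" bookkeeping can be checked numerically. The sketch below derives monomer molar masses from the quoted mass/mole pairs, subtracts one HCl from monomer 2, and confirms that one average repeat unit (0.25:0.36:0.25:0.14 of monomers 1-4 plus 1.0 SO2) weighs about 283 mg per mmol and is about 22.6 wt% SO2, the value quoted later from TGA. The molar masses are inferred here from the synthesis quantities, not stated in the paper.

# Molar masses inferred from the quoted mass/mmol pairs in the synthesis
M1 = 1.83e3 / 10            # 183.0 g/mol
M2 = 3.28e3 / 14.4 - 36.46  # minus one HCl lost on dialysis -> ~191.3 g/mol
M3 = 2.19e3 / 10            # 219.0 g/mol
M4 = 1.98e3 / 5.6           # ~353.6 g/mol
M_SO2 = 64.07

# Incorporation ratio 25:36:25:14:100, expressed per mmol of SO2
mmol = {"1": 0.25, "2": 0.36, "3": 0.25, "4": 0.14, "SO2": 1.00}
mass = (mmol["1"] * M1 + mmol["2"] * M2 + mmol["3"] * M3
        + mmol["4"] * M4 + mmol["SO2"] * M_SO2)

print(f"avg repeat mass: {mass:.1f} mg per mmol SO2")               # ~283 mg
print(f"SO2 weight fraction: {100 * M_SO2 / mass:.1f} %")           # ~22.6 %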
As such, the ratio of monomers 1, 2, 3, and 4 incorporated in the polymer was taken as 25:36:25:14, respectively. Note that an HCl molecule is depleted from the monomer 2 repeating unit during dialysis to give the zwitterionic phosphonate motifs in the polymer. Treatment of PP 5 with NaOH is expected to generate the anionic polyelectrolyte 6, which has numerous ligands involving N and O, whose log(basicity constants), i.e., the pKa's of their conjugate bases, range from −2.1 to 10.5 36,40,41 (Fig. 1). The amino-carboxylate, -phosphonate, and -sulfonate motifs provide several chelation centers. The pH-responsiveness of the polymer thus provides greater latitude to trap various metal ions. The 1H and 13C NMR spectra of PP 5 and monomer 1 are shown in Figs. S2 and 2, respectively. The absence of alkene protons in the range 5-6 ppm and of alkene carbons at 127-128 ppm confirms the absence of any residual double bond in the polymer, which suggests degradative chain transfer, involving the allylic hydrogens of the monomers, as the termination process 42 (Fig. 2). The carboxylate signal at δ 182 ppm and the 31P signal at δ 5.72 ppm confirm the incorporation of monomers 1 and 2 in 5. The elemental analysis supports the incorporation of SO2 and SO3−. IR bands at 1296 and 1125 cm−1 were attributed to the SO2 groups in PP 5 43. The absorptions at 1413 and 1648 cm−1 were assigned to the stretching vibrations of COO− 44. A weak band at 1717 cm−1 could be attributed to CO2H, owing to the equilibrium −NH+(CH2)3CO2− ⇋ −N(CH2)3CO2H. The band at 747 cm−1 is assigned to the methylene chain (CH2)12. The IR spectrum indicates the presence of the sulfonate group by its characteristic bands at 1180 and 1033 cm−1 43. The band at 1205 cm−1 can be assigned to the stretching frequency of P=O 34. It was difficult to obtain the molar mass of PP 5 using GPC, since the presence of the CO2−, PO3H−, SO3−, and NH+ motifs forces the polymer to stick to the column materials. A similar observation was reported earlier 45.

TGA analysis of PP 5. The TGA curve of PP 5 shows a weight loss of 6% up to 200 °C owing to the removal of moisture (Fig. S3). A sharp loss of 23% in the range 220-275 °C is attributed to decomposition releasing SO2; the polymer is calculated to contain 22.6 wt% SO2. Another major loss of 27% in the range 275-470 °C is accounted for by the loss of some of the pendants. The residual mass of 36% remaining at 800 °C could be attributed to some nitrogenous and phosphorous derivatives. The polymer thus remains very stable up to 220 °C.

Solubility behavior. The water-insolubility of PP 5 is attributed to the interactions of the zwitterionic motifs, which force the polymer backbone to adopt a globular conformation 30,46. The polymer was found to be insoluble in 1 M HCl and in 1-5 M NaCl, but soluble in the presence of KBr; the CSC value for KBr was determined to be 3.44 M. The stronger binding ability of Br− (as compared to Cl−) to the NH+ centers can disrupt the zwitterionic interactions, thereby leading to an extended conformation and promoting water solubility. A sample of PP 5 (120 mg) was found to be soluble in water (12 mL) in the presence of NaOH (15 mg, 0.375 mmol). The sample (120 mg) is calculated to contain 0.106 mmol of repeating unit 1, 0.153 mmol of 2 (−HCl), 0.106 mmol of 3, and 0.0594 mmol of 4 (vide supra).
Therefore, upon the NaOH treatment, the pH-responsive equilibrium 5 ⇋ 6 would reside on the right side, whereby the anionic motifs lead to an extended conformation and water solubility.

Viscosity data. The viscosity plot of PP 5 in the presence of NaOH in 1 M NaCl is shown in Fig. S4. The intrinsic viscosity was found to be 0.0549 dL g−1. The low viscosity may be attributed to intramolecular hydrophobic association involving the (CH2)12NH2 pendants, which helps to coil up the polymer backbone, thereby reducing the hydrodynamic volume.

Characterization of PVDF membranes. FTIR and thermal analysis. The pristine PVDF and the PP 5 zwitterionic polymer-doped PVDF were qualitatively investigated with the help of FTIR spectroscopy. The band that appeared at 825 cm−1 was assigned to the C-F stretching vibration of PVDF. The PVDF backbone C-C-C asymmetric stretching vibrations were observed at 877 cm−1. A prominent absorption band at 1175 cm−1 was assigned to the -CF2 symmetric stretching of PVDF 47. Fluoro compounds generally exhibit strong absorption in the range of 1000 cm−1 to 1400 cm−1. The characteristic absorption bands observed at 604 cm−1 and 1249 cm−1 are the fingerprint of the β-phase of PVDF 48. The absorption bands that appeared at 501 cm−1, 1077 cm−1, 1160 cm−1, and 1388 cm−1 are characteristic of the α crystalline phase of PVDF 49. PVDF is a fluoro compound and shows strong absorbance in this region, as is evident from the FTIR spectrum of the pristine PVDF (Fig. 3A). Due to the symmetric C-H stretching, an absorption band was observed at 2887 cm−1, whereas the asymmetric stretching was assigned to the absorption band at 2977 cm−1 49. In the case of the PP 5-containing membranes, M-1 to M-3, all the characteristic peaks of the PVDF membranes were observed (Fig. 3). Most of the polymer's absorption bands appear very close to the absorption bands of PVDF and merge with them. However, the region in which the amino group absorbs remains the critical indicator of the successful incorporation of PP 5 into PVDF. For instance, in the case of M-1, two partially separated absorption bands appeared at 3246 cm−1 and 3340 cm−1, owing to the primary amines present in PP 5. These bands were entirely absent in the pristine PVDF (Fig. 3A). The thermal stability of the various PVDF membranes was evaluated by thermogravimetric analysis (TGA) over the temperature range 20-800 °C at a heating rate of 10 °C/min. The pristine (control) PVDF membranes showed a sharp weight loss at 430 °C, while the PP 5-PVDF membranes showed decomposition temperatures in the range 450-456 °C (Fig. S5). TGA demonstrated that incorporating PP 5 into the PVDF membranes does not compromise their thermal stability. This is in accord with the literature, as fluoropolymers are more thermally stable than hydrocarbon-based polymers; the high strength of PVDF is attributed to the high dissociation energy of the C-F bond 50.

Morphological and elemental analysis of PVDF membranes. The surface morphologies of the membranes produced from 20% PVDF in DMA were scanned with the help of a field emission scanning electron microscope (Figure 4).
Fabricated pristine and mixed matrix PVDF membranes (Fig. 4) were further evaluated to understand the morphology of the skin layer and the base of the PVDF membranes. Best efforts were made to preserve the morphology while tearing the membranes for the cross-sectional view: the cross-sectional morphology was preserved by dipping the membranes into liquid nitrogen, thereby making them brittle and easily broken into two pieces by applying a small force. In the cross-sectional view, irrespective of the membrane type, it was observed that the membranes exhibited an asymmetric structure. All membranes consisted of a top skin layer, with finger-like projections starting from the immediate base of the skin layer (Fig. 5). A typical spongy base was observed in the membranes; however, the length of the dense spongy base varied from membrane to membrane. It is interesting to discuss how the difference in spongy and finger-like structure was produced in the various membranes after introducing the zwitterionic polymer PP 5. The high mutual diffusivity of the polymer-containing DMA and the water resulted in the formation of the asymmetric structure, and this fact is well known 51. In the case of the pristine PVDF, short finger-like projections were observed, and the rest of the cross-section consisted of the spongy structure. In the case of the mixed matrix membranes, a substantial change in the cross-section of the membranes was observed. The finger-like projections grew in breadth and continued in length towards the bottom of the membrane as the amount of PP 5 was increased in the membranes from M-0 to M-3. The dense skin layer formed immediately as the polymer dope cast solution was immersed into the coagulation bath, owing to the rapid out-diffusion of the solvent resulting in the instant solidification of the external membrane surface. After that, the inward diffusion of the nonsolvent was enhanced, which caused the membrane coagulation and the generation of the finger-like projections, which initiated immediately after the skin layer and projected downwards to the base. Interestingly, this effect was more pronounced in the mixed matrix PVDF membranes than in the pristine PVDF membrane. This behavior in the mixed matrix PVDF membranes can be explained by the presence of the hydrophilic groups 52. The water was more attracted inward during the phase inversion process due to the presence of hydrophilic PP 5, resulting in an enhanced rate of mass transfer between water (nonsolvent) and solvent. The fast mass transfer was due to the presence of sulfonate, phosphonate, carboxylate, and quaternary ammonium motifs, resulting in longer finger-like projections. From the cross-section, it is clear that a spongy structure is present in all of the PVDF membranes. The spongy structure was formed as the polymer solidified, which slowed down the mass transfer between the water and solvent. The spongy structure was more prominent and covered more than half of the cross-section in the case of the pristine PVDF membrane. The hydrophobic nature of the PVDF slowed down the process, which caused the denser coverage with the spongy structure. The spongy structure decreased as the hydrophilic PP 5 content in the PVDF increased.
In the case of M-2 and M-3, the spongy structure substantially decreased because the presence of PP 5 facilitated and maintained sufficient movement of the water and solvent during the solidification of the polymer. This caused the reduction in the spongy component in the cross-section of the PP 5-PVDF membranes. It has thus been shown that incorporating various concentrations of PP 5 has a significant impact on the subsurface geometry of the membranes. As discussed, PVDF is a well-known hydrophobic material. This can be a disadvantage when using it for wastewater treatment, as it may resist the permeation of water. Usually, hydrophilic moieties are used to improve the hydrophilicity of PVDF membranes 53. However, the interaction of hydrophilic moieties with the PVDF membranes is poor due to the substantial difference in their surface energies. The PP 5 zwitterionic polymer is more compatible with the PVDF membrane due to the presence of the (CH2)12NH2 pendants in PP 5, which are hydrophobic and provide a better opportunity for interaction with the PVDF membrane. The elemental mapping provided critical information about the presence and spread of PP 5 in the PVDF membranes. S, N, O, and P were present throughout the membrane, indicating that PP 5 spread throughout the PVDF (Fig. 6). Hydrophilicity of PVDF membranes. Pristine PVDF membranes are hydrophobic, which affects their performance during wastewater treatment. Owing to their low affinity for water, attaining optimum membrane performance has remained a critical challenge during the separation of oil-emulsified water. The separation performance of the membranes critically depends upon the surface wettability and an appropriate pore size. Surface wettability is one of the significant factors contributing to the efficient separation of the emulsified oil from water. The water contact angle on the surface of the pristine PVDF membrane was 92.5°, showing that the surface is hydrophobic and not water friendly. This finding is in accordance with the literature 54. A gradual decrease in contact angle was observed as the concentration of PP 5 in the PVDF membranes increased. The contact angle dropped from 92.5° to 47.4° when the PP 5 concentration reached 0.5% in PVDF (Fig. 7). From the PP 5 structure, it is clear that it contains a range of anionic and cationic groups such as carboxylate (CO2−), phosphonate (PO3H−), sulfonate (SO3−), and quaternary ammonium motifs. This specifically designed water-loving polymer caused the drastic decrease in water contact angle to 47.4°. Overall, the PP 5 backbone is zwitterionic, and both the positively charged and the negatively charged groups participated in improving the hydrophilicity of the PVDF membranes, which made them highly effective for the separation of the nano-emulsions (vide infra). Separation of oil-in-water nano-emulsions by PP 5-PVDF membranes. Oil and water separation has become critical owing to rising exploration and industrial applications. Separation of floating oil is easy, and it can be accomplished by fabricating super-selective wettable surfaces, which generally have a bigger pore size. The separation phenomena mainly depend upon the selectivity of the surface, where surfaces are sharp enough to recognize the non-polar component from water and water from the non-polar oil.
However, these sorts of materials may remain effective for separating the emulsion but, under pressure, might fail owing to a lack of potential to screen the emulsified oil from water. The separation becomes more complicated when the stabilized oil droplets in the emulsions are in the nano range. In the case of nano-emulsions, more control over surface chemistry and pore size is required to separate the nano-dispersed emulsified oil drops. The transient fluxes and rejections were recorded at different transmembrane pressures of 4, 5, and 6 bar. The separation efficiencies of the various PVDF membranes were calculated by Eq. (1) 55: %Eff = (1 − C_p/C_o) × 100, where %Eff is the percentage separation efficiency, C_o is the oil content in the feed, and C_p is the oil content in the permeate. The pristine PVDF showed the lowest rejection at all the evaluated pressures of 4-6 bar; its rejection was in the range of 90-92%. The introduction of PP 5 into the PVDF membrane had a noticeably positive impact, and the membrane performance improved substantially. All the PP 5-incorporated PVDF membranes exhibited high performance compared to the pristine PVDF membranes. For instance, the rejections of the M-1, M-2, and M-3 membranes were in the ranges 97-97.8%, 96.4-97.7%, and 95.7-96.4%, respectively. The highest rejections were observed with the PP 5-PVDF membranes M-1 and M-2, which contained 0.1% and 0.25% PP 5, respectively (Fig. 8). A significant impact on the permeation flux of the PVDF membranes was observed after incorporating PP 5. The permeate flux was calculated by Eq. (2) 56: J = V/(A × t), where J is the permeate flux, V is the permeate volume in L, A is the effective area of the membrane (m2) and t is the time of the permeation (h). At 4 bar pressure, the incorporation of 0.1% PP 5 gave a 73% increase in flux compared to the pristine PVDF. The flux increased by more than 300% when the doping of PP 5 was raised from 0 to 0.5% in PVDF. The permeate flux of the PVDF membranes was in the order PP(0.5%)-5-PVDF > PP(0.25%)-5-PVDF > PP(0.1%)-5-PVDF > pristine PVDF (i.e., M-3 > M-2 > M-1 > M-0) (Fig. 9). The oil rejection of the membranes was in the order M-1 > M-2 > M-3 > M-0. The permeation flux and rejection trade off; as the flux increased, the rejection was slightly compromised in the PP 5-PVDF membranes. As the M-3 membrane (PP(0.5%)-5-PVDF) showed a substantially higher flux, its rejection was somewhat compromised compared to M-1 and M-2. The antifouling behavior of the M-2 membrane was further evaluated owing to its appropriate flux and better oil rejection. The membranes were compacted at a pressure of 8 bar for 1 h, and the antifouling study was performed at a transmembrane pressure of 4 bar. The pristine PVDF (M-0) and PP 5-incorporated PVDF (M-2) were exposed to the 1000 ppm oil-in-water emulsion, and the flux was recorded every 15 min. After this, the pristine PVDF (M-0) and PP 5-incorporated PVDF (M-2) were exposed to a Bovine Serum Albumin (BSA) solution. The membranes were kept with BSA for 30 min, after which the permeate flux was recorded every 15 min. The pristine PVDF and PP 5-incorporated PVDF membranes were then washed with deionized water to wash out the adsorbed BSA. The flux recovery of the washed membranes was analyzed by exposing them to the emulsions at the same pressure of 4 bar.
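To make Eqs. (1) and (2), and the flux-recovery comparison that follows, concrete, here is a minimal sketch; the 1000 ppm feed and 23 ppm permeate are values reported in this work, while the volume, area, and time passed to the flux function are hypothetical placeholders (the effective membrane area and collection times are not stated here).

```python
def separation_efficiency(c_o_ppm: float, c_p_ppm: float) -> float:
    """Eq. (1): %Eff = (1 - C_p / C_o) * 100."""
    return (1.0 - c_p_ppm / c_o_ppm) * 100.0

def permeate_flux(v_liters: float, area_m2: float, time_h: float) -> float:
    """Eq. (2): J = V / (A * t), in L m^-2 h^-1."""
    return v_liters / (area_m2 * time_h)

def flux_recovery_ratio(j_recovered: float, j_initial: float) -> float:
    """FRR (%) under the conventional definition (our assumption; the
    text quotes FRR values but not the formula)."""
    return 100.0 * j_recovered / j_initial

print(separation_efficiency(1000, 23))   # 97.7 -- within the 97-97.8% range for M-1
print(permeate_flux(0.5, 0.0015, 1.0))   # hypothetical: 0.5 L through 15 cm^2 in 1 h
print(flux_recovery_ratio(82.0, 100.0))  # 82.0 -- the FRR reported below for M-2
```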
The Flux Recovery Ratio (FRR) of the pristine PVDF membrane was found to be about 60%, whereas the FRR of the PP 5-incorporated PVDF membrane was about 82% (Fig. 10). The membrane was thus better equipped to resist fouling when it was combined with PP 5. The results showed that the flux of the pristine PVDF membrane was difficult to recover. Mechanism of oil-in-water emulsion separation and separation efficiencies. PP 5-PVDF membranes showed significant efficiencies for the separation of nano-emulsions. The modified PVDF demonstrated a considerable capacity to deal with water contaminated with a high concentration of oil emulsions, such as 1000 ppm. The separation of the oil emulsions depends upon several factors, such as membrane pore size, emulsion size, and the surface chemistry of the membrane. The pore size of the membranes is in the nanometer range, so these membranes are well suited for the separation of small-sized nano-emulsions. It is evident from their SEM images that the pore sizes are small enough to deal with the emulsions. However, some of the fine oil emulsions passed through the pristine PVDF due to its oil-loving nature, which cannot significantly resist the tiny oil droplets; it showed rejection in the range of 90-92%. Furthermore, the oil formed a cake layer that blocked the pores of the pristine PVDF membrane, resulting in a substantial reduction of the flux during operation. The blockage of the pores by the cake layer was so severe in the pristine PVDF membrane that it could not be washed away simply with deionized water. The PP 5-PVDF membrane was found to be effective and more efficient for the separation of the oil-in-water emulsions. The pore size of the membrane was refined, which screened the emulsified oil droplets with great efficiency. Furthermore, it contained a range of hydrophilic functionalities imparted by the doping of PP 5. As discussed, the newly synthesized PP 5 contains a zwitterionic backbone bearing various polar groups such as CO2−, PO3H−, SO3− and NH2. These polar groups imparted hydrophilicity to the PVDF surface and enhanced its potential to attract the water molecules from the emulsions. The presence of PP 5 assisted in forming a hydration layer, which provided the PVDF membrane with resistance to fouling by the oil. Zwitterionic polymers are considered promising next-generation antifouling materials. Zwitterionic polymers use electrostatic interactions to form the hydration shell; this is a strong interaction between the adsorbed water and the zwitterionic polymer, resulting in better antifouling characteristics than materials where the interaction consists of hydrogen bonding 57. For the above reasons, the water passed through easily, the oil emulsions broke down, and the oil was released and prevented from passing through the PP 5-PVDF membranes (Fig. 11). The observed permeate appeared as clear water without any emulsion. The separation efficiencies of some recent membranes used for separation of oil/water emulsions are compared with that of the current membrane in Table 2, which reveals its notable efficacy. Conclusions In conclusion, the de-emulsification of oily wastewater has become critically important to make the water reusable and prevent the collapse of our sustainable ecosystem. Membranes are effective for separating emulsions, but the separation of nano-emulsions is a challenging job.
In this work, we have synthesized a new zwitterionic polymer, PP 5, via cyclopolymerization of diallylammonium salts. The resulting PP 5 contains alternately placed SO2 units and three different, randomly placed zwitterionic motifs bearing carboxylate, phosphonate, and sulfonate groups. PP 5 was thoroughly characterized by 1H, 13C and 31P NMR spectroscopy. PP 5 was incorporated into PVDF membranes to improve their performance in the separation of nano-emulsions. In PP 5-PVDF, finger-like projections were observed, and the skin layer showed well-controlled pores in the nanometer range that can efficiently screen the water and, with the help of PP 5, successfully reject the nano-sized oil droplets. The TGA analysis showed that the incorporation of PP 5 did not affect the PVDF membranes' thermal stability, and all the membranes exhibited stability at 400 °C. The presence of the polyzwitterionic polymer PP 5 helps in reducing the hydrophobic nature of the membranes and might be responsible for the formation of the hydration layer that assisted in improving the membrane flux and played a role in breaking the oil emulsions. The PP 5-PVDF membrane was exposed to a feed of 1000 ppm oil-in-water nano-emulsion, and the collected permeate contained just 23 ppm of oil. The PP 5-PVDF membrane was also exposed to BSA, and the FRR was found to be 82%. This study has shown that zwitterionic polymers have great potential as membrane modifiers. Their presence can substantially improve the performance of PVDF membranes for the separation of oil-in-water emulsions.
2022-03-25T06:18:23.144Z
2022-03-23T00:00:00.000
{ "year": 2022, "sha1": "37500c462355c131f823030bb06b272606d99970", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-022-09046-7.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c0dc99cfa1c0c5f5f1d360a91cba2fc10d936656", "s2fieldsofstudy": [ "Engineering", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
256105185
pes2o/s2orc
v3-fos-license
Approximate Controllability of Delayed Fractional Stochastic Differential Systems with Mixed Noise and Impulsive Effects We herein report a new class of impulsive fractional stochastic differential systems driven by mixed fractional Brownian motions with infinite delay and Hurst parameter $\hat{\cal H} \in (1/2, 1)$. Using fixed point techniques, a $q$-resolvent family, and fractional calculus, we discuss the existence of a piecewise continuous mild solution for the proposed system. Moreover, under appropriate conditions, we investigate the approximate controllability of the considered system. Finally, the main results are demonstrated with an illustrative example. Introduction For a long time, the subject of fractional calculus and its applications has gained a lot of importance, mainly because fractional calculus has become a powerful tool, with more accurate and successful results, for modeling several complex phenomena in numerous, seemingly diverse and widespread fields of science and engineering. It has been found that various, especially interdisciplinary, applications can be elegantly modeled with the help of fractional derivatives [1-4]. See also the recent works [5-8]. Fractional Brownian motion (fBm for short) is a family of Gaussian random processes indexed by the Hurst parameter Ĥ ∈ (0, 1). It is a self-similar stochastic process with long-range dependence and stationary increment properties when Ĥ > 1/2. For more recent works on fractional Brownian motion, see [9-14] and the references therein. Impulsive fractional differential equations, which describe various real-world problems in the physical and engineering sciences subject to abrupt changes at certain instants during the evolution process, have become important in recent years as mathematical models of many phenomena in both the physical and the social sciences. Impulsive effects that begin at an arbitrary fixed point and continue over a finite time interval are known as non-instantaneous impulses. For more details, we refer the reader to [15-23]. The concept of controllability plays a major role in finite dimensional control theory. However, its generalization to infinite dimensions is too strong and has limited applicability, while approximate controllability is a weaker concept completely adequate in applications [24]. Recently, many authors have established approximate controllability results for (fractional) impulsive systems [25-31]. For example, Kumar et al. [32] investigated the approximate controllability of impulsive semilinear control systems with delay; Anukiruthika et al. [33] analyzed the approximate controllability of semilinear stochastic systems with impulses. Although several works exist in this area, the study of the approximate controllability of impulsive fractional stochastic differential systems driven by mixed noise with infinite delay and Hurst parameter Ĥ ∈ (1/2, 1) is still an understudied topic in the literature. This fact provides the motivation for our current work.
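For reference, and as an addition of ours rather than a restatement from this paper: the Caputo fractional derivative of order $q \in (0, 1)$, used throughout what follows, is standardly defined by

$$ {}^{c}D^{q}_{t} f(t) = \frac{1}{\Gamma(1-q)} \int_{0}^{t} (t-s)^{-q} f'(s)\, ds, $$

which reduces to the classical first derivative as $q \to 1^{-}$.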
We consider an impulsive fractional stochastic delay differential equation with mixed fractional Brownian motion, defined by

c D^q_t z(t) = P z(t) + F(t, z_t) + G(t, z_t) dŴ(t)/dt + σ(t) dB^Ĥ(t)/dt, t ∈ (s_i, t_{i+1}], i = 0, 1, . . . , m,
z(t) = K_i(t, z(t_i^−)), t ∈ (t_i, s_i], i = 1, 2, . . . , m,
z(t) = φ(t), t ∈ (−∞, 0],   (1)

where P : D(P) ⊂ Z → Z is the generator of a q-resolvent family {S_q(t) : t ≥ 0} on the separable Hilbert space Z, c D^q_t is the Caputo fractional derivative of order 1/2 < q < 1, the state z(·) takes values in the space Z, Ŵ is a Q-Wiener process defined on a separable Hilbert space Y_1, and B^Ĥ = {B^Ĥ(t) : t ≥ 0} is a Q-fBm with Hurst parameter Ĥ ∈ (1/2, 1), defined on a separable Hilbert space Y_2. The history-valued function z_t : (−∞, 0] → Z is defined as z_t(θ) = z(t + θ), ∀ θ ≤ 0, and belongs to an abstract phase space D_h. The initial data φ = {φ(t), t ∈ (−∞, 0]} are an F_0-measurable, D_h-valued random variable independent of Ŵ and B^Ĥ. The functions F, G, σ, and K_i satisfy several suitable hypotheses, which will be specified later. The work is arranged as follows. In Section 2, relevant preliminaries are given that will be used later. In Section 3, we prove the existence of a piecewise continuous mild solution for the proposed system (1). Then, in Section 4, we study the approximate controllability of problem (1). In Section 5, an example is given to show the application of the obtained results. We end with Section 6, in which we present the conclusion of our results and also suggest directions for possible future research. Preliminaries Let L(Y_j, Z) denote the space of all linear and bounded operators from Y_j to Z, j = 1, 2. The notation ‖·‖ represents the norms of Z, Y_j, and L(Y_j, Z). Let (Ω, F, {F_t}_{t≥0}, P) be a filtered complete probability space, where F_t is the σ-algebra generated by {B^Ĥ(e), Ŵ(e) : e ∈ [0, t]} and the P-null sets. Let Q_j ∈ L(Y_j, Y_j) be the operators defined by Q_j e_i^j = λ_i^j e_i^j, where the λ_i^j are non-negative real numbers and {e_i^j}_{i≥1} is a complete orthonormal basis in Y_j. Then, there exists a sequence B_i(t) of real, independent standard Wiener processes such that Ŵ(t) = Σ_{i≥1} √λ_i^1 B_i(t) e_i^1. The infinite dimensional Y_2-valued fBm B^Ĥ(t) is defined as B^Ĥ(t) = Σ_{i≥1} √λ_i^2 B_i^Ĥ(t) e_i^2, where the B_i^Ĥ(t) are real, independent fBms. Lemma 2 (See [35]). For any α ≥ 1 and for an arbitrary L_2^1-valued predictable process Υ(·), For α = 1, we obtain Assume that h : (−∞, 0] → (0, ∞) is a continuous function with ∫_{−∞}^0 h(t) dt < ∞, and that for any a > 0, (E|φ(θ)|^2)^{1/2} is a measurable and bounded function on [−a, 0]. If D_h is endowed with the norm ‖φ‖_{D_h} = ∫_{−∞}^0 h(t) sup_{t≤θ≤0} (E|φ(θ)|^2)^{1/2} dt, then (D_h, ‖·‖_{D_h}) is a Banach space. Definition 1 (see [38]). Let M > 0, θ ∈ [π/2, π], and ω ∈ R. A closed and linear operator P is called a sectorial operator if the sector S_{θ,ω} = {λ ∈ C : λ ≠ ω, |arg(λ − ω)| < θ} is contained in the resolvent set ρ(P) and ‖(λI − P)^{−1}‖ ≤ M/|λ − ω| for all λ ∈ S_{θ,ω}. Lemma 4 (see [39]). Let P be a sectorial operator. Then, the unique solution of the linear fractional Cauchy problem c D^q_t z(t) = P z(t) + f(t), t > 0, z(0) = z_0, is given by z(t) = S_q(t) z_0 + ∫_0^t S_q(t − s) f(s) ds, where S_q(t) = (1/(2πi)) ∫_{Br} e^{λt} λ^{q−1} (λ^q I − P)^{−1} dλ. Here, Br denotes the Bromwich path. Solvability Results We assume the following hypotheses. Hypothesis 4 (H4). There exists a constant N . . . , m, and for all t ∈ [0, t_1], and Proof. We define the operator Ξ from D_T to D_T as follows: (3), then z(t) = g(t) + ȳ(t) for t ∈ J, which implies that z_t = g_t + ȳ_t for t ∈ J, and the function y(·) satisfies Define the operator Ψ from D^0_T to D^0_T as follows: In order to prove the existence result, we need to show that Ψ has a unique fixed point. Let Hence, For t ∈ (t_i, s_i], i = 1, 2, . . . , m, we have Hence, Similarly, for t ∈ (s_i, t_{i+1}], i = 1, 2, . . . , m, we have Hence, From Equations (4)-(6), we obtain that which implies that Ψ is a contraction. Hence, Ψ has a unique fixed point y ∈ D^0_T, which is a mild solution of problem (1) on (−∞, T]. Next, using Krasnoselskii's fixed point theorem, we establish the second existence result. At this stage we make the following assumptions. Hypothesis 8 (H8).
The inequality The set D_r = {y ∈ D^0_T : ‖y‖²_{D^0_T} ≤ r, r > 0} is clearly a convex, closed, bounded set in D^0_T for each y ∈ D_r. By Lemma 3, we obtain Proof. Let E_1 : D_r → D_r and E_2 : D_r → D_r be defined as For convenience, we divide the proof into various steps. Step 2. We show that the operator E_1 is continuous on D_r. Let {y_n}_{n=1}^∞ be a sequence such that y_n → y in D_r. For all t ∈ (t_i, s_i], i = 1, 2, . . . , m, we have Since the maps K_i, i = 1, 2, . . . , m, are continuous functions, one has For all t ∈ (s_i, t_{i+1}], i = 1, 2, . . . , m, we have Therefore, Equations (10) and (11) imply that the operator E_1 is continuous on D_r. Step 3. The operator E_1 maps bounded sets into bounded sets in D_r. Let us show that for r > 0 there exists r̂ > 0 such that, for each y ∈ D_r, we obtain E‖E_1(y)(t)‖² ≤ r̂ for all t ∈ (s_i, t_{i+1}], i = 1, 2, . . . , m. For all t ∈ (s_i, t_{i+1}], i = 1, 2, . . . , m, we have For all t ∈ (t_i, s_i], i = 1, 2, . . . , m, we have From the above equations, we obtain where r̂ = max{M_1² υ_i λ_3, υ_i λ_3}. Hence, the operator E_1 maps bounded sets into bounded sets in D_r. Step 5. The operator E_2 is a contraction map. For y, y* ∈ D_r and for t ∈ (t_i, s_i], i = 1, 2, . . . , m, we have Similarly, for y, y* ∈ D_r and for t ∈ (s_i, t_{i+1}], i = 0, 1, . . . , m, we have Hence, From the above, we obtain Thus, E_2 is a contraction map. By Krasnoselskii's fixed point theorem, we obtain that problem (1) has at least one solution on (−∞, T]. Approximate Controllability We consider the following control system: The control û(·) ∈ L²(J, U), where L²(J, U) is the Hilbert space of all admissible control functions. The operator A is linear and bounded from the separable Hilbert space U into Z. Assume that the linear system (17) is approximately controllable. Define the operator Γ_{s_i}^{t_{i+1}} associated with system (17) as Here, A* and S*_q(t) are the adjoints of A and S_q(t), respectively. The operator Γ_{s_i}^{t_{i+1}} is a bounded and linear operator. The following assumption is needed. Clearly, P is the generator of an analytic semigroup {S(t) : t ≥ 0}. The spectral representation of S(t) is given by S(t)w = Σ_{n∈N} e^{−n²t} ⟨w, w_n⟩ w_n, where w_n(y) = √(2/π) sin(ny), n ∈ N, is the orthonormal set of eigenvectors corresponding to the eigenvalues λ_n = −n² of P. The semigroup {S(t) : t ≥ 0} is compact and uniformly bounded, so that R(λ, P) = (λI − P)^{−1} is a compact operator for all λ ∈ ρ(P), i.e., P ∈ P_q(θ_0, ω_0). Let h(e) = e^{2e}, e < 0. Then ∫_{−∞}^0 h(e) de = 1/2, and we define The system (23) can be written as an abstract formulation of (1), and thus the previous theorems can be applied to guarantee both the existence and the approximate controllability results. Conclusions We have investigated impulsive fractional stochastic control systems defined on separable Hilbert spaces. The proposed problem is driven by mixed noise, i.e., it involves both a Q-Wiener process and a Q-fractional Brownian motion with Hurst parameter Ĥ ∈ (1/2, 1). For our results, we have mainly applied fixed point techniques, a q-resolvent family, and fractional calculus. The obtained results are supported by an illustrative example. As further directions of investigation and continuation of this work, it would be interesting to investigate the sensitivity to the noise range and to develop numerical and computational methods to approximate the solution. We also intend to extend our results via discrete fractional calculus.
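The conclusion above points to numerical approximation as future work; as a first illustrative step (ours, not from the paper), the mixed driving noise (a standard Wiener process plus an fBm with Ĥ ∈ (1/2, 1)) can be sampled exactly on a grid via Cholesky factorization of the fBm covariance E[B^Ĥ(t)B^Ĥ(s)] = ½(t^{2Ĥ} + s^{2Ĥ} − |t − s|^{2Ĥ}).

```python
import numpy as np

def fbm_path(n: int, hurst: float, horizon: float = 1.0, seed: int = 0):
    """Exact sampling of fractional Brownian motion at n grid points on
    (0, T] via Cholesky factorization of the covariance matrix
    E[B_H(t) B_H(s)] = 0.5 * (t^{2H} + s^{2H} - |t - s|^{2H}).
    O(n^3) -- fine for illustration; use Davies-Harte for large n."""
    rng = np.random.default_rng(seed)
    t = np.linspace(horizon / n, horizon, n)
    h2 = 2.0 * hurst
    cov = 0.5 * (t[:, None] ** h2 + t[None, :] ** h2
                 - np.abs(t[:, None] - t[None, :]) ** h2)
    return t, np.linalg.cholesky(cov) @ rng.standard_normal(n)

# Mixed noise on one grid: an independent standard Brownian motion
# plus an fBm with Hurst parameter in (1/2, 1).
n, T = 500, 1.0
t, b_hurst = fbm_path(n, hurst=0.75, horizon=T, seed=0)
w = np.cumsum(np.random.default_rng(1).standard_normal(n)) * np.sqrt(T / n)
```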
2023-01-24T06:42:19.309Z
2023-01-18T00:00:00.000
{ "year": 2023, "sha1": "464e4a1cb9736609d16d868cd21470a0299833f4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "464e4a1cb9736609d16d868cd21470a0299833f4", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
2478818
pes2o/s2orc
v3-fos-license
Oxidative Stress and Inflammation in Renal Patients and Healthy Subjects The first goal of this study was to measure the oxidative stress (OS) and relate it to lipoprotein variables in 35 renal patients before dialysis (CKD), 37 on hemodialysis (HD) and 63 healthy subjects. The method for OS was based on the ratio of cholesteryl esters (CE) containing C18/C16 fatty acids (R2), measured by gas chromatography (GC), which is a simple, direct, rapid and reliable procedure. The second goal was to investigate and identify a triacylglycerol peak on GC, referred to as TG48 (48 represents the sum of the three fatty acids' carbon chain lengths), which was markedly increased in renal patients compared to healthy controls. We measured TG48 in patients and controls. Mass spectrometry (MS) and tandem MS (MS/MS) were used to analyze the fatty acid composition of TG48. MS showed that TG48 was abundant in saturated fatty acids (SFAs), which are known for their pro-inflammatory property. TG48 was significantly and inversely correlated with OS. Renal patients were characterized by higher OS and inflammation than healthy subjects. Inflammation correlated strongly with TG, VLDL-cholesterol, apolipoprotein (apo) C-III and apoC-III bound to apoB-containing lipoproteins, but not with either total cholesterol or LDL-cholesterol. In conclusion, we have discovered a new inflammatory factor, TG48. It is characterized by TG rich in saturated fatty acids. Renal patients have higher TG48 than healthy controls. Introduction A progressive loss of renal function is associated with a markedly increased risk for cardiovascular disease (CVD) [1]. Among nontraditional risk factors, increased oxidative stress (OS) and inflammation have emerged as potentially significant contributors to the progression of kidney dysfunction and its cardiovascular consequences. Although a number of studies have shown increased OS and inflammation in renal patients [2-12], no publication has demonstrated the relationship between OS/inflammation and plasma apolipoproteins (apo) and lipoproteins (Lp), including the well established risk factors for CVD, such as triacylglycerols (TG) and apoC-III. Since the relationship between OS/inflammation and CVD in renal patients is not well understood, the first aim of this study was to investigate the possible relationship between OS/inflammation and lipoprotein variables already known to be related to CVD, with special emphasis on apoC-III, which has recently been shown to be a link between inflammation and atherogenesis [13]. The present study was performed on patients undergoing haemodialysis (HD) and on patients before dialysis (CKD), using a simple cholesteryl ester (CE) ratio method on gas chromatography (GC) for OS [14]. Interestingly, during the measurement of OS with GC, we found marked differences between renal patients and healthy subjects regarding a TG species, named TG48 (i.e. the sum of the 3 carbon chain lengths of fatty acids in TG equals 48), which was absent or undetectable in healthy young normolipidemic subjects but markedly increased in renal patients. We wondered what the significance of TG48 in renal disease is, and what types of fatty acids are present in TG48. Thus, the second aim of this study was to investigate TG48, i.e. to measure TG48 with GC in renal patients and healthy subjects, and to identify the TG48 components using mass spectrometry (MS) and MS/MS.
Patients and Controls Seventy-two non-diabetic adult patients (46 men and 26 women) participated in the study. The patients' mean age was 61 ± 13 (range 27-84) years. Patients were recruited from the Department of Nephrology, Sahlgrenska University Hospital, Göteborg, Sweden, and collaborating hospitals in Western Sweden. There were 35 patients (23 men and 12 women) with varying degrees of renal functional impairment before dialysis (CKD stages 3-5). In these patients, the mean glomerular filtration rate (GFR) was 26.3 mL/min/1.73 m² BSA (body surface area), with a range from 9 to 53. Patients were treated with antihypertensive agents, angiotensin converting enzyme (ACE) inhibitors, diuretics and phosphate binders, as appropriate. A number of patients with severely reduced renal function (GFR < 20 mL/min) were advised to consume a protein-reduced diet (0.6-0.8 g protein/kg BW/24 h) for the treatment of symptoms of renal insufficiency. Thirty-seven patients (23 men and 14 women) were treated with chronic haemodialysis (HD). Haemodialysis was performed either with conventional haemodialysis or with haemodiafiltration using standard procedures and equipment. All patients were dialyzed with bicarbonate-containing dialysis fluid. In the majority of cases low molecular weight heparin was added as an anticoagulant. Patients were dialyzed three times weekly for four to five hours per session and were treated with antihypertensive drugs, diuretics, calcium supplements, phosphate binders, intravenous iron and erythropoietin, as appropriate. The target hemoglobin levels were 120-130 g/L. Patients on steroid treatment, lipid-lowering drugs or other drugs that may have influenced lipoprotein metabolism were excluded from the study, as were patients with diabetes mellitus. Patients were not advised or monitored with respect to dietary lipids. None of the patients received intravenous iron, vitamin D or intravascular radiocontrast media within a week before blood sampling. Patients were not specifically treated with larger doses of vitamin E preparations. Control subjects were recruited among healthy, normolipidemic employees (30 men and 33 women) of the Sahlgrenska University Hospital, Göteborg, Sweden, and the Oklahoma Medical Research Foundation, Oklahoma City, Oklahoma, USA. All studies were conducted in accordance with the Helsinki declaration and approved by the Ethical Committee of the Sahlgrenska University Hospital, Göteborg, Sweden. All subjects gave their written informed consent. Blood samples were drawn after an overnight fast and collected into EDTA-containing tubes. In HD patients, blood was drawn prior to heparinization for dialysis through the dialysis access. Blood samples were then centrifuged. Preservatives containing thimerosal and a protease inhibitor, ε-aminocaproic acid, were added to final concentrations of 0.1% and 10 mM, respectively, and the samples were immediately shipped by air to Oklahoma City, Oklahoma, USA, where analyses were performed. Glomerular filtration rate was measured by renal or plasma clearance of either iohexol or 51Cr-EDTA, as previously described [15]. Lipid and apolipoprotein analyses Plasma TC (Alfa Wassermann; Caldwell, NJ) and TG (Alfa Wassermann; Caldwell, NJ) concentrations were determined enzymatically on an Alfa Wassermann Nexct Analyzer. High-density lipoprotein-cholesterol (HDL-C) was measured as previously described [15].
Very low density lipoprotein-cholesterol (VLDL-C) levels were assumed to equal one fifth of the plasma TG concentration, and low density lipoprotein-cholesterol (LDL-C) levels were determined as described previously [15,16]. Measurement of oxidative stress by the cholesteryl ester ratio method Indices of OS were measured according to the method developed by Lee [14]. Briefly, the neutral lipids were extracted from fresh plasma with isopropanol:heptane (7:3) containing the internal standard (I.S.) CE-butyrate (CE-C4) under argon. After acidification with dilute H2SO4, the top heptane phase was concentrated under N2, and the neutral lipid molecular species were analyzed by GC as described by Kuksis et al. [17] and modified by Lee [14], using a Varian Chrompack CP-3000 gas chromatograph with ultra-pure helium as carrier gas for the vaporized samples. The column was a CP 7592 WCOT Ulti-metal column with HT Simdist coating, film thickness 0.52 µm, 10 m in length and 0.53 mm in diameter. The initial temperature of the column, 160 °C, was increased to 380 °C at a rate of 10 °C per minute. The area on the GC chromatogram for CE containing C16 fatty acyl groups was calibrated with recrystallized cholesteryl palmitate; CE containing C18 fatty acyl groups was calibrated with cholesteryl linoleate and CE containing C20 fatty acyl groups with cholesteryl arachidonate. All primary standards were from Nu-Chek-Prep, Inc., Elysian, MN, USA. Cholesteryl esters with C16 fatty acyl groups are mostly saturated fatty acids (C16:0) with a small amount of monounsaturated fatty acid (C16:1), which is considered not susceptible to oxidative damage. Cholesteryl esters with C18 fatty acyl groups are mostly C18:2 with small amounts of C18:1 and C18:0. The C18:1 and C18:0 are resistant to oxidation, while CE-C18:2 is susceptible to oxidative damage. Cholesteryl esters with C20 fatty acyl groups are mostly C20:3 with small amounts of C20:4 and C20:5, which are the most sensitive to oxidative damage. Since it is possible to separate the molecular species of CE by the number of carbon atoms of the esterified fatty acids, it is possible to calculate the ratios CE-C20/CE-C16 (R1) and CE-C18/CE-C16 (R2) as estimates of peroxidative damage [14]. Due to a greater loss of polyunsaturated fatty acid esters to oxidative damage, lower CE ratios indicate a higher OS. The method was verified against thiobarbituric acid-reactive substances (TBARS), since both R1 and R2 are negatively correlated with TBARS (r = −0.68, p < 0.0001, n = 24, for R1, and r = −0.62, p < 0.001, n = 25, for R2) [14]. Since the sizes of the CE-C18 peaks are greater than those of CE-C20, the measurements of the former are more accurate than those of the latter. Therefore, we selected R2 as the marker for OS in this study. Measurement of TG species with gas chromatography From the same neutral lipid extract described above, TG species were measured following the CE separations. TG48 was calibrated with tripalmitate. TG54 was calibrated with tristearate. (All primary standards were from Nu-Chek-Prep, Inc., Elysian, MN.) Two reference sera (from Boehringer-Mannheim Corp., Indianapolis, IN) with known TC and TG values were also used to verify the TC and TG values derived from the sum of the molecular species of neutral lipids by GC. TG species include TG48, TG50, TG52, TG54 and TG56 (the numbers following TG represent the sum of the 3 carbon chain lengths of the fatty acids). Occasionally, CE-C20 and TG48 did not separate completely.
A vertical line drawn through the valley of the two peaks was used for separation and measurement. These measurements were validated by re-application of a smaller amount of sample, which could be separated completely. Measurement of TG48 with GC and identification of the components of TG48 by mass spectrometry For example, as demonstrated in Fig. 1A, a GC chromatogram of a young fasting healthy subject with low plasma TG showed the absence of the TG48 peak. However, in renal patients the TG48 peak height increased (see the example in Fig. 1B). Fig. 1A. A typical GC chromatogram of the separation of neutral lipid molecular species from a normal subject. The separated FC, CE molecular species with C16, C18 and C20 fatty acyl groups and TG molecular species termed TG50, TG52, TG54 and TG56 (the numbers represent the sum of the carbon chain lengths of the three fatty acyl groups of TG) are identified at the bottom of each peak. Note the absence (undetectable level) of the TG48 peak. Fig. 1B. An example of the GC chromatogram of the separation of neutral lipid molecular species from an HD patient. The separated molecular species are identified at the bottom of each peak. Note the presence of the TG48 peak at retention time 17.699 min in this renal patient. In cases when TG48 was not completely separated from CE-C20, a re-run with a reduced sample application was carried out and complete separation could be achieved; alternatively, with a vertical line drawn through the valley, as here, the results were close to those of the former method. doi:10.1371/journal.pone.0022360.g001 For identification of the compounds occurring in the TG48 peak, MS was used, following the method of Liebisch et al. [19]. The same hexane extract (10 µl) of plasma neutral lipids containing the I.S. CE-C4 for GC analyses was evaporated under a stream of nitrogen. The sample was then redissolved in 50 µl of a solvent mixture of methanol and chloroform (3:1 v:v) containing 7.5 mM ammonium acetate. This solution was then analyzed by direct flow injection using a syringe pump at a flow rate of 5 µl/min. A Bruker Daltonics Inc. HCT ultra ion-trap mass spectrometer equipped with an electrospray ion source was used and operated in the positive ion mode. Samples ionized in the presence of ammonium acetate carried an NH4+ ion, so the mass/charge (m/z) increased by 18 over that of the uncharged molecule. CE-C4 with +1 charge was confirmed to have m/z 474.3 on MS. The expected m/z for saturated TG48:0, monounsaturated TG48:1, di-unsaturated TG48:2 and tri-unsaturated TG48:3 are shown in Table 1. Standard tripalmitate (TG48:0 from Nu-Chek-Prep) with +1 charge was confirmed to have m/z = 824.7. Standard tripalmitolein (TG48:3 from Nu-Chek-Prep) with +1 charge was confirmed to have m/z = 818.7. The areas of all m/z peaks observed within the measured mass range were calculated automatically by the instrument software. After normalizing the areas of the I.S. from all runs, the intensities from different runs could be compared. Identification of the fatty acids composing TG48 by MS/MS The fatty acid composition of TG48 was determined by MS/MS of ammoniated triacylglycerol (TG) ions isolated in the first-stage MS. The MS/MS data were obtained by collision-induced dissociation at 3.5 volts of selected TG ions, essentially according to the method of McAnoy et al. [20]. Ammoniated TG (parent) ions at 818.6, 820.7, 822.7, and 824.7 m/z, corresponding to the m/z values of TG48:3, TG48:2, TG48:1 and TG48:0, respectively, were individually fragmented, generating TG+1 as a result of the loss of ammonia, along with diacylglycerol (DG) fragments at positions 1,2; 2,3; and 1,3 for each TG fragmented. The fatty acids for each TG were identified by comparison of our products to the published m/z values for TG+1, DG1,2, DG2,3, and DG1,3 [21]. When all DG fragments observed by MS/MS were accounted for, the composition of fatty acids for each TG48 was considered complete. Statistical methods A two-tailed t-test with Satterthwaite adjustment was used to identify whether there was a difference in means. A Pearson product moment correlation was used to quantitate the relationship between variables. Multiple linear regression was performed to create a multivariate model to predict TG48 concentrations. Data were analyzed in SAS v.9.1.3 (Cary, NC) and R v.2.5.1 (Vienna, Austria). Univariate p-values were adjusted for multiplicity through the Benjamini & Hochberg False Discovery Rate (FDR) procedure [22]. A 5% FDR was considered statistically significant.
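Before turning to the results, a consistency check of ours on the ammoniated TG48 m/z values described above (Table 1): the expected monoisotopic [TG + NH4]+ masses can be computed from the molecular formula C(n+3)H(2n+2−2d)O6 of a TG whose three acyl chains total n carbons and d double bonds.

```python
# Monoisotopic atomic masses (u); electron mass neglected.
C, H, O, N = 12.0, 1.007825, 15.994915, 14.003074

def tg_ammonium_mz(acyl_carbons: int, double_bonds: int) -> float:
    """m/z of singly charged [TG + NH4]+ for a triacylglycerol whose
    acyl chains total `acyl_carbons` carbons and `double_bonds` C=C."""
    n, d = acyl_carbons, double_bonds
    neutral = (n + 3) * C + (2 * n + 2 - 2 * d) * H + 6 * O
    return neutral + (N + 4 * H)  # add the ammonium adduct

for d in range(4):
    print(f"TG48:{d}  [M+NH4]+ = {tg_ammonium_mz(48, d):.2f}")
# Prints 824.77, 822.75, 820.74, 818.72, matching the quoted
# 824.7/822.7/820.7/818.7 within the unit-mass tolerance of the ion trap.
```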
Results Due to its high correlation with an established OS marker, TBARS [14], R2 was used as a valuable index of OS. The R2 and TG48 levels for patients with CKD before dialysis and patients on chronic maintenance haemodialysis (HD), compared to those of normal controls, are shown in Table 2. These data showed that R2 in CKD and HD patients was significantly lower than that of healthy controls, indicating higher OS. The data also showed that TG48 in CKD and HD patients was significantly higher than that of healthy controls. Pearson correlations were calculated between TG48 and R2 for each of the three groups: CKD, HD and controls. CKD (r = −0.39, p-value = 0.02) and HD (r = −0.41, p-value = 0.01) both had significant p-values associated with their correlations. In contrast, controls had a non-significant p-value (r = −0.17, p-value = 0.18), which is understandable, since a large number of controls had zero or near-zero TG48 values. These results indicated that TG48 levels were closely related to OS in CKD and HD patients. Table 2 also shows the lipid and apolipoprotein profiles of renal patients and healthy controls. The levels of TC, TG, VLDL-C, LDL-C, apoB, apoE, apoC-III, and apoC-III-HP were significantly higher, and HDL-C was significantly lower, in CKD and HD compared to controls. The concentrations of apoC-III-HS were significantly higher in HD compared to controls, but not in CKD. The apoC-III-R was significantly lower in CKD, but not in HD. The control group is younger than the patient population. However, analysis of the influence of age and gender shows that the contribution of age was very small, as seen in TG48 model equation #1; therefore, this age difference is inconsequential to our conclusions. We may calculate the age effect by placing real numbers into this equation: assuming VLDL-C = 33, an age difference of 11 years will affect only 5-6% of the TG48 value, while the value of TG48 in the HD group is 317% higher, and TG48 in the CKD group is 550% higher, than in controls. BMI data were not listed in Table 2 because there were missing data in the HD and control groups. BMI for controls was 24.1 ± 3.5 (n = 51) and for HD was 24.0 ± 4.6 (n = 23) (p = not significant (N.S.) when compared to controls). Therefore, the significant differences in R2 and TG48 between HD and controls cannot be due to a BMI effect. BMI for CKD was 25.7 ± 3.6 (n = 35) (p < 0.025 compared to controls). There was no gender difference in the HD group for BMI: 24.5 ± 4.51 (n = 7) for females and 23.76 ± 4.71 (n = 16) for males, p = N.S. The BMI for females and males in the CKD group was 24.85 ± 4.10 (n = 12) and 26.15 ± 3.35 (n = 23), respectively, p = N.S. The BMI for females and males in controls was 22.63 ± 2.77 (n = 23) and 25.39 ± 3.49 (n = 30), respectively, with p < 0.005 between females and males. There was, as expected, a correlation between BMI and TG, VLDL-C, apoC-III-HS, and HDL-C, but no correlation between BMI and R2, TG48, or GFR in the CKD group. Our data showed that there were no gender differences in R2 for CKD or for HD. Nor was there a gender difference in apoC-III-HP for CKD or HD. There was no correlation between GFR and R2 or TG48. GFR was negatively correlated with apoC-III, apoE, and apoC-III-HP and positively correlated with apoC-III-R. The lack of correlation between GFR and R2 or TG48 was probably due to the narrow ranges of these values. In HD patients, TG48 was moderately correlated with TG and VLDL-C (r = 0.47, p < 0.025) (Table 3). Correlations between lipoprotein variables and R2 were not significant.
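For readers who want to reproduce this style of analysis, a minimal sketch of the Pearson correlation computation reported above; the arrays below are synthetic placeholders, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tg48 = rng.gamma(2.0, 1.0, size=35)              # hypothetical TG48 values (CKD-sized group)
r2 = 2.0 - 0.2 * tg48 + rng.normal(0, 0.3, 35)   # hypothetical R2, inversely related to TG48

r, p = stats.pearsonr(tg48, r2)
print(f"r = {r:.2f}, p-value = {p:.3f}")  # an inverse correlation, as reported for CKD and HD
```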
Multivariate linear regression modeling revealed that VLDL-C and LDL-C could be combined to predict TG48 in renal patients when adjusted for age. The data were modeled according to the following equation: The model explained 61.5% of the variability in TG48. Examining the partial sums of squares revealed that VLDL-C explained 51% of the variability while LDL-C explained only 3.3% of the variability. Alternatively, a second model showed that apoC-III-HP and apoB with LDL-C could be combined to predict TG48 according to the following equation: This model explained 44.1% of the variability in TG48. Examining the partial sums of squares revealed that apoC-III-HP explained 29.2% of the variability in the model. Thus, apoC-III bound to apoB-containing lipoproteins is, among all lipoprotein particles, the variable most associated with TG48. Since apoC-III-HP is a known pro-inflammatory factor [13], and there is a direct relationship between TG48 and apoC-III-HP, we can therefore conclude that there is a relationship between TG48 and inflammation. Note that equation #2 contains no age effect and no BMI effect. Identification of the Saturated and Unsaturated Forms of TG48 by MS Mass spectrometry was employed for analysis of the forms of TG48, and it showed that TG48:3, TG48:2, TG48:1 and TG48:0 species were indeed all present in TG48, as shown in Fig. 2 and as predicted in Table 1. MS showed that the intensities of these four TG48 molecular species were significantly higher in CKD and HD patients than in healthy controls (Table 4). Table 4. Normalized intensities and percent composition of TG48 components in CKD, HD and controls. In fact, CKD patients had 7-13 times higher quantities of these 4 peaks, and HD patients had 3-6 times higher quantities of the same peaks, than those in healthy controls. Likewise, the relative quantities of the TG48:3 and TG48:1 species, as judged by percentage of total TG48, were significantly different between CKD patients and controls (Table 4). The percentage of TG48:3 was significantly lower and that of TG48:1 significantly higher in CKD patients than in controls. The percent composition of HD patients, though
Identification of the Fatty Acids in Various TG48 Forms by MS/MS To further our research we determined what fatty acids were present in each form of these TG48 molecules. To characterize the fatty acids comprising TG48:3, TG48:2, TG48:1 and TG48:0, we performed MS/MS analysis of each TG48 form, corresponding to ammonium adducted ions, 818.7, 820.7, 822.7 and 824.7 m/z., respectively. The MS/MS fragmentation of TG results in the neutral losses of ammonia and a single fatty acid moiety and the formation of a positively charged diacylglycerol (DG) ions. Results are summarized in Table S1 (online table). To demonstrate an example of the analysis, the four ammonium adducted TG48 ions observed in the MS for patient A were each subjected to MS/MS fragmentation to determine their DG +1 fragments and constituent fatty acids. MS ion at m/z = 818.6, representing the TG48:3 molecules, was fragmented by MS/MS. Ammonia was lost to yield TG +1 at 801.7 m/z along with DG fragments and its product ions were determined. Seven observed DG ions (Fig. not shown) indicated that four different TG48:3 forms were present. LaOL (laurate/oleate/linoleate, C12:0/C18:1/C18:2) was confirmed based on the presence of ions 521.5 m/z, (DG 1,2 ), 601.5 m/z (DG 2,3 ) and 519.4 m/z (DG 1,3 ). The DG ion profile of MPLn (myristate/palmitate/linoneneate, C14:0/C16:0/C18:3) was also observed to consist of DG 1,2 at 523.5 m/z, DG 2,3 at 573.5 m/z and DG 1,3 at 545.5 m/z. Two other TG48:3 molecules were found to be present. They were MPoL (M/palmitoleate/L, C14:0/C16:1/C18:2), represented by DG 1,2 at 521.5 m/z, DG 2,3 at 573.5 m/z, DG 1,3 at 547.5 m/z and PoPoPo (C16:1/ C16:1/C16:1), in which all three DG forms produced an ion at 547.5 m/z. By the same approach, sample ion at 824.7 m/z (TG48:0) was fragmented by MS/MS. Ammonia was lost to yield TG +1 at 807.7 m/z along with three DG fragments, DG +1 1,2 at 523.5 m/z, DG +1 2,3 at 579.5 m/z and DG +1 1,3 at 551.5 m/z. The ion at 551.5 was by far the most significant. This data indicated the presence of two different TG48:0 molecules, PPP (tripalmitate, By the same approach, the fatty acids in TG48:2 (parent ion 820.7 m/z) and TG48:1 (parent ion 822.7 m/z) were also identified. The results are shown in Table S1 (online table). The fatty acids within the TG48 molecules were also determined for patients B, C and D and two control subjects E and F. The fatty acid composition of patient B was nearly identical to that of patient A. The fatty acid composition of patients C, D and control E was Table 4. Normalized intensities and percent composition of TG48 components in CKD, HD and controls. identical to that of patient A, and the fatty acid composition of control F was identical to that of patient B. All results are listed in Table S1 (online table). It is interesting to note that the compositions of the fatty acids in TG48 in these six subjects are identical or nearly identical. Further exploration with additional patients or controls was thought unnecessary. We have now demonstrated that TG48 is a SFA-rich TG species in both patients and normal subjects. The difference between patients and controls is the quantity. Patients have many fold higher levels of SFAs than controls. Also CKD patients have lower percent polyunsaturated TG48:3 and higher percent monounsaturated TG48:1 than controls. When TGs' are partially hydrolysed in vivo, large amount of saturated DGs' are accumulated. Saturated DGs' and SFAs are inflammatory [18]. 
Discussion In this study, we have clearly established that TG48 is an SFA-rich TG species. Since SFAs are inflammatory [18], their increased levels reflect increased inflammation in renal patients. The association between SFAs and inflammation, and its mechanisms, has been well studied. SFAs inhibit the activation of insulin receptor substrate 1, which causes insulin resistance and contributes to the metabolic syndrome [18]. This inhibition could occur in insulin-sensitive tissues like white adipose tissue, muscle, heart and liver [23]. SFAs cause the activation of the mitogen-activated protein kinases and the subsequent induction of inflammatory genes in white adipose tissue, immune cells, and myotubes. SFAs decrease adiponectin production, which decreases the oxidation of glucose and fatty acids [18]. SFAs cause recruitment of immune cells, such as macrophages, neutrophils, and bone marrow-derived dendritic cells, to white adipose tissue and muscle. Palmitate, but not linoleate, activates macrophages [24], which leads to the production of the cytokine interleukin-6 in coronary artery endothelial cells and coronary artery smooth muscle cells [25]. Excess palmitate causes white adipose tissue dysregulation [26]. It also increases inflammation and apoptosis through oxidative or endoplasmic reticulum stress and the generation of reactive oxygen species. Oleate co-supplementation blocks the palmitate-mediated suppression of β-oxidation and insulin sensitivity, and blocks DG accumulation. These experiments suggest that SFAs are inflammatory, while the unsaturated fatty acids are not [18]. Furthermore, higher polyunsaturated fatty acids, such as eicosapentaenoic and docosahexaenoic acids, have anti-inflammatory effects by inhibiting cyclooxygenase 2, and demonstrate the ability to reduce plasma concentrations of C-reactive protein (CRP), tumor necrosis factor-α and interleukin-6 [27]. Interestingly, the carbon flux from a carbohydrate diet is converted to palmitate in the liver during endogenous lipogenesis [28]. Thus, a diet rich in carbohydrates can also be inflammatory. As shown in Table 2, TG increased in patients with HD and CKD relative to levels in control subjects. TG48 also increased from controls to HD and to CKD. However, the TG48/TG ratios (or TG48%/TG) are not the same for the three groups. TG48%/TG significantly (p < 0.0005) increased from controls (1.7% ± 2.9%) to HD (4.2% ± 2.6%) and to CKD (5.2% ± 3.2%). This means that when TG is increased, TG48 is disproportionately increased; as a result, the longer-chain TG species (TG50+TG52+TG54+TG56) increased less than proportionally, because the sum TG48+TG50+TG52+TG54+TG56 equals TG. As shown by Byrdwell [21], the longer the carbon chain length of the fatty acids, the higher the unsaturation of the TG. Thus, TG48 possesses a relatively higher number of SFAs than TG50, TG52, TG54 and TG56. Therefore, increasing TG results in a greater increase of TG48, and thus more SFAs and concomitantly more inflammation. We observed that subjects with increased TG without renal disease also have increased TG48 (unpublished data). So the presence of measurable TG48 by GC is not a unique marker for renal disease. It is actually a reflection of increased SFAs, and therefore a reflection of increased inflammation. Indeed, hypertriglyceridemia is also known to be associated with increased inflammation.
The presently established link between OS/inflammation and apoC-III bound to apoB-containing lipoproteins is further strengthened by the recently reported findings [13] that apoC-III, or its corresponding apoB-containing lipoproteins, is also proinflammatory, as demonstrated by increased endothelial cell expression of vascular and intercellular cell adhesion molecules and recruitment of monocytic cells. It is not surprising to see that TG48 is well correlated with apoC-III. Is there a cause-effect relationship between apoC-III and TG48? We are not certain. We may rationalize that when apoC-III is increased, TG would increase, since overexpression of the apoC-III gene could cause hypertriglyceridemia [29]. Our data indicate that when TG is increased, TG48 is increased more than the other TG molecular species. This implies that increased apoC-III could cause the increase of TG48 and thus increased levels of SFA. On the other hand, a diet rich in SFAs might increase TG. Would increased TG cause the increase of apoC-III? There might be such a possibility. However, more research is needed to answer this question unequivocally. Inflammation causes the generation of reactive oxygen species [18]. This may be why inflammation is closely associated with OS and vice versa. Several studies support a link between OS and inflammation in atherogenesis [30,31]. Our data showing that the OS index R2 correlated with TG48 may support this notion. A number of studies have suggested that increased OS in association with inflammation contributes significantly to accelerated kidney dysfunction and its consequential cardiovascular morbidity and mortality [1-12]. Most of the previously described procedures to assess OS are rather complicated and time consuming [2-7,9,12,32-35]; however, our presently described method does not require treatment with chemical reagents, monoclonal antibodies, incubation, isolation of lipid classes or transesterification, and therefore provides a direct and very useful simplification. The main advantage of the GC method, besides simplicity and speed, is its high reproducibility. R2 values have shown that a combination of OS and the levels of apoC-III in a binary system predicts the risk for atherosclerosis in normolipidemic and hypertriglyceridemic subjects [36]. The cholesteryl ester ratio method has also been successfully used to monitor the delay of OS recovery in patients with diabetic ketoacidosis after correction of the ketoacidosis with insulin [37]. Inflammation levels have most frequently been measured by CRP [38] and interleukin-6 [25,27]. Both of these proteins are results of inflammation. In contrast, the SFAs measured by TG48 are causes of inflammation. Our data show that the inflammation measured by TG48 is correlated with TG, not CE. Since TG48 is part of the total TG, in general, the higher the TG, the higher the TG48, and thus the higher the SFA and the higher the inflammation. Indeed, higher TG is associated with higher OS, as also shown by the CE-ratio method, but TG alone does not represent TG48, because the TG48/TG ratio is not constant among different subjects. The abnormalities of the lipid and apolipoprotein profiles of CKD and HD patients confirmed and extended the results of our previous studies [15,16].
Moreover, based on the correlation coefficients between R2 and, especially, TG48 and the lipid, apolipoprotein, and lipoprotein-cholesterol levels, it was established that inflammation is highly correlated with the concentrations of TG and VLDL-C, moderately correlated with apoC-III bound to apoB-containing lipoproteins, weakly correlated with apoC-III, and not correlated with TC, LDL-C, or HDL-C. These results suggest that the high inflammation of renal patients stems more from TG-rich lipoproteins than from CE-rich apoB lipoproteins. The highly selective correlation of inflammation with TG or apoC-III-containing apoB lipoproteins, but not with cholesterol-rich apoB lipoproteins, was confirmed by multivariate regression analysis, which showed a stronger relationship between TG48 and VLDL-C than between TG48 and LDL-C (15 times stronger for VLDL-C than for LDL-C, as shown by equation #1). Results from our study and those of other laboratories have already shown the significant role played by apoC-III bound to apoB-containing lipoproteins in the formation and progression of atherosclerotic lesions [39]. Specifically, renal insufficiency resulting in elevated apoC-III, or LpB:C-III, presumably via insulin resistance, triggers the chain of inflammatory events [40]. Taken together, the findings of the present study suggest that TG-rich, apoC-III-containing apoB lipoproteins are linked to inflammation, while TC-rich apoB lipoproteins are not. It remains to be determined whether there are differences in the degree of inflammation between apolipoprotein-defined apoC-III-containing lipoproteins, including LpB:C, LpB:C:E, and LpA-II:B:C:D:E particles. The use of the TG48 marker revealed that HD patients had significantly lower inflammation and partially reduced levels of lipoprotein variables compared with CKD patients, suggesting a beneficial effect of haemodialysis that alleviates the uremic intoxication but cannot eliminate the underlying metabolic disturbance, which persists during dialysis. It has been observed that increased OS is already prevalent in CKD patients before dialysis [5,41]. Our findings of increased OS/inflammation in renal patients are in agreement with the literature reports [2][3][4][5][6][7][8][9][10][11][12]33]. This is the first report using TG48 for measuring SFAs and inflammation. It is also the first report relating SFAs and inflammation to lipids and lipoproteins, and particularly to apoC-III and apoC-III-containing apoB lipoproteins. Conclusions The major finding of this study is the recognition of TG48 as a new inflammatory factor characterized by the occurrence of inflammatory saturated fatty acids. The levels of TG48 were found to be elevated in patients with chronic renal disease in comparison with those of normolipidemic subjects. The TG48 levels were correlated with the levels of TG, VLDL-C, apoC-III, and apoC-III bound to apoB-containing lipoprotein subclasses, but not with TC, LDL-C, or HDL-C in kidney patients. Furthermore, TG48 is also correlated with oxidative stress. The well-established dual role of apoC-III in inflammation and atherogenesis, in combination with oxidative stress and inflammatory TG48, should be considered a severe risk factor for atherogenesis in chronic kidney disease. Further studies are needed to establish the occurrence of similar risk factors for atherogenesis in other dyslipoproteinemias.
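As a closing illustration for this paper, the multivariate regression comparison invoked above (equation #1) has the generic form sketched below; the data and coefficients are synthetic placeholders, so only the shape of the analysis is shown, not the study's actual equation.

```python
# Multivariate least-squares regression of TG48 on VLDL-C and LDL-C,
# with synthetic data standing in for the measured concentrations.
import numpy as np

rng = np.random.default_rng(1)
n = 60
vldl_c = rng.normal(30.0, 8.0, n)     # hypothetical VLDL-C, mg/dL
ldl_c = rng.normal(110.0, 25.0, n)    # hypothetical LDL-C, mg/dL
tg48 = 0.30 * vldl_c + 0.02 * ldl_c + rng.normal(0.0, 1.5, n)

X = np.column_stack([np.ones(n), vldl_c, ldl_c])  # intercept + predictors
beta, *_ = np.linalg.lstsq(X, tg48, rcond=None)
print(f"TG48 = {beta[0]:.2f} + {beta[1]:.3f}*VLDL-C + {beta[2]:.3f}*LDL-C")
print(f"coefficients: VLDL-C {beta[1]:.3f}, LDL-C {beta[2]:.3f} (compare magnitudes)")
```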
2014-10-01T00:00:00.000Z
2011-07-28T00:00:00.000
{ "year": 2011, "sha1": "8349fa19a37e7ec093250b96e4a901f5bdc040d4", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0022360&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8349fa19a37e7ec093250b96e4a901f5bdc040d4", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
235821380
pes2o/s2orc
v3-fos-license
Efficacy of S-1 after pemetrexed in patients with non-small cell lung cancer: A retrospective multi-institutional analysis Abstract Background S-1 and pemetrexed (PEM) are key treatments for non-small cell lung cancer (NSCLC). However, the mechanisms of anticancer activity of S-1 and PEM are similar, so cross-resistance between S-1 and PEM is a concern. This exploratory study was designed to evaluate the treatment effect of S-1 following PEM-containing treatment. Methods This retrospective study included patients with advanced (c-stage III or IV, UICC seventh edition) or recurrent NSCLC who received S-1 monotherapy following the failure of previous PEM-containing chemotherapy at six hospitals in Japan. The primary endpoint of the study was the overall response rate (ORR). Secondary endpoints were the disease control rate (DCR), time to treatment failure (TTF), progression-free survival (PFS), and overall survival (OS). Results A total of 53 NSCLC patients met the criteria for inclusion in the study. Forty-six patients had adenocarcinoma (88.7%) and no patients had squamous cell carcinoma. Thirty-one patients (58.5%) received the standard S-1 regimen and 18 patients (34.0%) received the modified S-1 regimen. The ORR was 1.9% (95% confidence interval [CI]: 0.00%-10.1%). Median TTF, PFS, and OS were 65, 84, and 385 days, respectively. Conclusions Although this study has several limitations, the ORR of S-1 after PEM in patients with nonsquamous (non-SQ) NSCLC was low compared with the historical control. One option in the future might be to avoid S-1 treatment in PEM-treated patients who need tumor shrinkage. PEM and cisplatin (CDDP) have previously shown superior overall survival (OS) compared with gemcitabine and CDDP in treating nonsquamous (non-SQ) NSCLC patients, and PEM plus platinum treatment is usually used for non-SQ NSCLC patients. 1 S-1 monotherapy showed noninferiority for OS compared with docetaxel (DTX) in NSCLC patients previously treated with platinum-containing chemotherapy, and it is used as a standard treatment for NSCLC. 2 However, the mechanism of anticancer activity of S-1 and PEM appears to be similar; for example, both S-1 and PEM target thymidylate synthase (TS). 3 Because of this similarity, cross-resistance between S-1 and PEM is a concern. Several preclinical studies have indicated that elevation of TS expression after PEM treatment might be one of the causes of cross-resistance between S-1 and PEM. 4,5 The TS expression level has been reported to be associated with the response to both S-1 and PEM in NSCLC in a clinical setting. 6,7 These preclinical data raise the concern that resistance to PEM might imply resistance to S-1 in the clinical setting. Moreover, although there have been several studies of S-1 for NSCLC, 2,8 studies of the treatment effect of S-1 after PEM in the clinical setting are limited. If cross-resistance between S-1 and PEM exists, then S-1 should be avoided as a treatment after PEM for NSCLC. The aim of our study was therefore to evaluate the treatment effect of S-1 after PEM-containing treatment. Patient selection This retrospective study included patients with advanced (c-stage III or IV, Union for International Cancer Control [UICC] seventh edition) or recurrent NSCLC who received S-1 monotherapy following the failure or discontinuation of previous PEM-containing chemotherapy at six institutions in Japan between April 2012 and March 2017.
The full analysis set (FAS) included patients who (i) were pathologically diagnosed with NSCLC, (ii) received S-1 monotherapy for more than 15 days, (iii) previously received three or fewer treatments prior to S-1, (iv) received PEM-containing treatment prior to S-1, and (v) had at least one target lesion. The electronic medical records were reviewed retrospectively. Data collection The following factors were collected from the electronic medical records: age, sex, pathology, smoking status, main medical histories, main comorbidities, epidermal growth factor receptor (EGFR) mutation status, anaplastic lymphoma kinase (ALK) fusion gene status, clinical stage (UICC seventh edition), Eastern Cooperative Oncology Group (ECOG) performance status (PS) at the date of S-1 administration, dates of S-1 and PEM administration, and the medication method of S-1. Statistical analysis This exploratory study was a multi-institutional retrospective observational study including six institutes in Japan. The primary endpoint was the overall response rate (ORR), which included partial response (PR) and complete response (CR). Secondary endpoints were the disease control rate (DCR), which included CR, PR, and stable disease (SD), time to treatment failure (TTF), progression-free survival (PFS), and OS. TTF was the number of days between the first day of S-1 monotherapy and the date of discontinuation of S-1, and PFS was the number of days between the start of S-1 monotherapy and the date of disease progression or the date of death. OS was the number of days between the start of S-1 monotherapy and the date of death. Tomita et al. previously reported that the ORR of S-1 was 9%. 9 That study was selected as the historical control because its cohort was similar to ours (the efficacy of S-1 was likewise evaluated retrospectively). On the other hand, PEM was not administered before S-1 in most cases in that cohort, because PEM received pharmaceutical approval in Japan in 2009 (the S-1 therapy period in the historical control was between March 2004 and October 2010). In this study, the expected ORR was set to 9%, assuming no cross-resistance between PEM and S-1, and the unacceptable ORR due to cross-resistance was set to 4%. Using the OneArm Binomial program (Cancer Research and Biostatistics, Seattle, WA, USA), 78 patients were needed to produce a statistical power of 80% with a one-sided type I error of 10%. If the observed ORR was less than 4%, a confirmatory study would be taken into consideration. DCR and median TTF, PFS, and OS were also compared with the historical control. The Kaplan-Meier method was used to calculate TTF, PFS, and OS. Tumor responses were assessed according to the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1. 10 All statistical analyses were performed using EZR, 11 which is a graphical user interface for R; more precisely, it is a modified version of R Commander designed to add statistical functions frequently used in biostatistics.
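As a side note on the sample-size calculation above, the kind of exact single-arm binomial computation performed by such programs can be sketched as follows. This sketch assumes a standard exact test of H0: ORR = 4% against H1: ORR = 9% with one-sided alpha = 0.10; the OneArm Binomial program's exact conventions may differ, so the resulting sample size need not match the reported 78.

```python
# Exact single-arm binomial design search: smallest n for which an exact
# one-sided test of H0: p = p0 vs H1: p = p1 reaches the target power.
# Illustrative only; conventions of the original program may differ.
from scipy.stats import binom

p0, p1 = 0.04, 0.09               # unacceptable vs. expected ORR
alpha, target_power = 0.10, 0.80

for n in range(20, 200):
    # smallest cutoff r with P(X >= r | p0) <= alpha
    r = next(k for k in range(n + 2) if binom.sf(k - 1, n, p0) <= alpha)
    power = binom.sf(r - 1, n, p1)    # P(X >= r | p1)
    if power >= target_power:
        print(f"n = {n}, reject H0 if responders >= {r} "
              f"(type I error = {binom.sf(r - 1, n, p0):.3f}, power = {power:.3f})")
        break
```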
RESULTS The method of patient selection is shown in Figure 1. Fifty-three NSCLC patients met the criteria, and these patients were defined as the FAS. The patient characteristics are shown in Table 1. Of the 53 patients, 26 (49.0%) were <70 years of age. Age, PS, smoking history, staging, and EGFR gene mutation status were similar to those of the historical control. 9 There were no patients with the ALK fusion gene. A total of 46 patients had adenocarcinoma (88.7%) and none of the patients had squamous cell carcinoma. Thirty-one patients (58.5%) received the standard S-1 regimen (administered for four weeks, followed by a two-week rest) and 18 patients (34.0%) received the modified S-1 regimen (administered for two weeks, followed by a one-week rest). Twenty-four patients received five or more PEM administrations. The median number of days between the last PEM administration and the first S-1 administration was 118. No immune checkpoint inhibitors (ICIs) were administered between PEM and S-1. The observed ORR of 1.9% did not reach the preplanned primary endpoint, suggesting that the treatment efficacy of S-1 for NSCLC after PEM-containing treatment might be lower than without prior PEM-containing treatment. Furthermore, in the historical control, and especially in its non-SQ subset, the ORR was 15.8% (95% CI: 3.3%-39.8%) and the DCR was 57.8% (95% CI: 33.5%-79.7%). 9 The difference in ORR was thus even larger for non-SQ disease. Median TTF, PFS, and OS in this study were 65, 84, and 385 days, respectively (Figure 2). In the non-SQ subset of the historical control, median PFS and OS were 4.2 months and 15.7 months, respectively (TTF not shown). Compared with the historical control, PFS and OS in this study tended to be worse. To search for predictive factors of the efficacy of S-1 after PEM-containing treatment, two factors were analyzed: the number of PEM administrations, and the interval between the last PEM administration and the first S-1 administration. Because the ORR was too low to analyze, TTF and PFS were used as surrogates of efficacy. The differences in TTF and PFS between the two groups were assessed using a stratified log-rank test. As a result, neither factor predicted the efficacy of S-1 after PEM treatment (Figures 3 and 4). However, the group with the longer interval between the last PEM and the first S-1 administration tended to have longer PFS and TTF.
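For illustration, the kind of Kaplan-Meier and log-rank comparison used in these exploratory analyses can be sketched as below. The paper's analyses were run in EZR/R and used a stratified log-rank test; this Python version uses the lifelines package, a plain (unstratified) log-rank test, and synthetic placeholder data rather than the study's patient records.

```python
# Kaplan-Meier curves and a log-rank test comparing PFS between two
# groups (e.g., short vs. long interval since last PEM). Synthetic data.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
pfs_short = rng.exponential(80, 25)   # days; hypothetical short-interval group
pfs_long = rng.exponential(110, 25)   # days; hypothetical long-interval group
ev_short = np.ones_like(pfs_short)    # 1 = progression observed (no censoring here)
ev_long = np.ones_like(pfs_long)

km = KaplanMeierFitter()
km.fit(pfs_short, event_observed=ev_short, label="short interval")
print("median PFS (short interval):", km.median_survival_time_)

result = logrank_test(pfs_short, pfs_long,
                      event_observed_A=ev_short, event_observed_B=ev_long)
print(f"log-rank p = {result.p_value:.3f}")
```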
DISCUSSION S-1 is a standard treatment for patients with previously treated NSCLC in the clinical setting. 2 However, the ORR of S-1 after PEM in the present study suggested a weaker antitumor effect than in the historical control. DCR, PFS, and OS showed similar tendencies, although the differences were modest. Banqu et al. previously reported a relationship between TS expression levels and the ability to develop resistance to antifolates using PEM-resistant cell lines. 5 In addition, Takeda et al. reported on the immunohistochemical expression levels of TS and the response to treatment with S-1 in NSCLC. In a study comparing S-1 plus carboplatin (SC group) with paclitaxel plus carboplatin (PC group), PFS of the low-TS group tended to be longer than PFS of the high-TS group in the SC group, whereas there was no difference in the PC group. 6 Unfortunately, there have been no reports demonstrating elevated TS expression after PEM pretreatment in vitro or in clinical specimens. However, taking these reports into consideration, it can be postulated that one of the mechanisms of cross-resistance between PEM and S-1 is induction of TS expression by prior PEM treatment. Previous reports on S-1 monotherapy were compared with the findings of this study (Table 3). 2,8,9,[12][13][14][15][16][17][18][19] Interestingly, in two studies on the efficacy of S-1 with registration periods prior to 2009, S-1 showed higher ORR in adenocarcinoma or non-SQ than in squamous cell carcinoma. 13,14 PEM was probably not administered to the analyzed populations, because the efficacy of PEM had not yet been demonstrated in clinical trials at the time. After 2009, PEM-containing treatment was mainly used for patients with non-SQ NSCLC. In 2016, a randomized phase III trial comparing S-1 with docetaxel (DTX) in patients with non-SQ NSCLC previously treated with platinum-based chemotherapy was reported. Subset analysis of this trial suggested that the PFS of S-1 was inferior to that of DTX in adenocarcinoma. 2 In this population, many non-SQ NSCLC (mainly adenocarcinoma) patients had received PEM treatment, and the registration period of the trial was between July 2010 and June 2014. These studies reinforce the view that previous PEM treatment weakens the antitumor effect of S-1 and support the presence of cross-resistance between PEM and S-1. No ICIs were administered between PEM and S-1 in this study. Grigg et al. reported that some chemotherapies might act via immune-mediated mechanisms and that chemotherapy response rates might be higher when administered after ICIs. 20 On the other hand, Kato et al. and Tamura et al. reported in their retrospective analyses that S-1 given after ICIs did not have a better overall response rate than S-1 without ICIs. It is still unknown whether ICIs improve the treatment efficacy of S-1. 18,19 The exploratory analysis of predictive factors of S-1 efficacy after PEM suggested a possible benefit of a longer interval between the last PEM administration and the first S-1 administration. It might therefore be an option to avoid S-1 treatment immediately after PEM. There are several limitations to this study. First, it was a single-arm study with no control arm for comparison. This might have affected the results. For example, our study included patients with worse PS and more smokers than the historical control, and the patient characteristics in this study differed slightly from those of the historical control (e.g., 11.3% non-adeno, non-squamous NSCLC cases). This might have affected the efficacy of S-1. Second, the sample size in this study was small, and the findings might have arisen by chance. Third, there was no central diagnostic radiological review in this study. Fourth, the modified regimen was used more often in this study than in the historical control (41.5% vs. 9.3%). 9 The difference in treatment schedule may also have affected the efficacy. In conclusion, S-1 after PEM in patients with NSCLC showed a low ORR in our study compared with the historical control. An option in the future might be to avoid S-1 treatment in PEM-treated patients who need tumor shrinkage. Further large-scale studies to confirm the findings in this population are essential.
2021-07-15T06:16:38.554Z
2021-07-13T00:00:00.000
{ "year": 2021, "sha1": "e2dc4dc868e692b35e2db27bcec02bebdd61ac3f", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1759-7714.14055", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8edc78fda3ab9b5da85b96bdfe6c17f88d1dc042", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
34511
pes2o/s2orc
v3-fos-license
Modeling of Noise and Resistance of Semimetal Hg1-xCdxTe Quantum Well used as a Channel for THz Hot-Electron Bolometer Noise characteristics and resistance of semimetal-type mercury-cadmium-telluride quantum wells (QWs) at liquid nitrogen temperature are studied numerically, and their dependence on the QW parameters and on the electron concentration is established. The QW band structure calculations are based on the full 8-band k.p Hamiltonian. The electron mobility is simulated by direct iterative solution of the Boltzmann transport equation, which allows us to include correctly all the principal scattering mechanisms, elastic as well as inelastic. We find that the generation-recombination noise is strongly suppressed due to the very fast recombination processes in semimetal QWs. Hence, the thermal noise should be considered the main THz sensitivity-limiting mechanism in these structures. Optimization of a semimetal Hg1-xCdxTe QW as an efficient THz bolometer channel should include increasing the electron concentration in the well and tuning the molar composition x close to the gapless regime. Background The creation of fast and sensitive detectors for the THz spectral range is important for various areas such as medicine, security, and aerospace. Among other uncooled or moderately cooled detector types, semiconductor bolometric detectors combine high sensitivity, operation speed, device compactness, and durability. Mercury-cadmium-telluride (MCT) quantum wells (QWs) are promising materials for implementing the channel of thermal direct detectors, because Hg1-xCdxTe QWs can be characterized by high electron mobility and concentration even at liquid nitrogen temperatures [1]. Detectors based on a quantum well (QW) are a good choice, because the momentum quantization in the QW growth direction allows for a significant reduction of the 2D electron heat capacity. The requirements of high sensitivity and low noise can be met by using a low-resistance channel. High operation speed can be realized with high-mobility channels, or with channels exhibiting fast energy relaxation of the 2D electron gas (2DEG). Hg1-xCdxTe QWs can realize semimetallic or semiconducting states, depending on their molar composition x and QW width L [2]. The semimetallic state is characterized by a much higher conduction electron concentration at liquid nitrogen temperature [3]. Therefore, semimetallic QWs can have much lower resistivity and lower noise compared with undoped semiconducting Hg1-xCdxTe QWs of the same thickness, so we restrict our simulations to the case of semimetallic Hg1-xCdxTe quantum wells. To create efficient and sensitive THz bolometers, one needs to be able to predict the expected characteristics of the Hg1-xCdxTe quantum well channel. To our knowledge, a systematic study of the transport properties of semimetallic Hg1-xCdxTe QWs at liquid nitrogen temperature, whether theoretical or experimental, is still lacking. We have previously modeled numerically the electron mobility, energy spectra, and intrinsic carrier concentrations in the n-type Hg0.32Cd0.68Te/Hg1-xCdxTe/Hg0.32Cd0.68Te (QW) in the semimetallic state [1]. Our modeling has shown that high electron mobility can be obtained at high electron concentration in the well, which enhances 2D electron gas screening and decreases the hole concentration.
Based on these results, we aim at finding the best parameters of a Hg1-xCdxTe QW as the channel of a hot-electron bolometer for THz detection. For this purpose, in the present paper we estimate the transport and noise properties of such channels as functions of the QW properties. We model the resistance and noise in the Hg0.32Cd0.68Te/Hg1-xCdxTe/Hg0.32Cd0.68Te QW as functions of the well chemical composition x, thickness, and electron concentration at liquid nitrogen temperature (T = 77 K). We also estimate the noise equivalent power (NEP) of bolometric detectors utilizing the considered structures as a channel, for an incident radiation frequency of 140 GHz. These estimates are useful not only for a thermal-type detector such as a bolometer but also for a rectifying one, such as a high-electron-mobility field-effect transistor (HEMT), where an increase of the electron concentration can be achieved by applying a gate bias voltage. In this work, we also compare the estimated characteristics of semimetal-type HgCdTe THz hot-electron bolometers with semiconductor-type MCT and with graphene HEBs. Methods Simulations of the energy spectra and wave functions were performed in the framework of the 8-band k.p Hamiltonian [4] to incorporate the strong band mixing and nonparabolicity of the carrier dispersion law. Such modeling allows one to describe the presence of semimetallic or semiconducting states in the well. In our modeling [1,3], we consider Hg1-xCdxTe quantum wells with different x, grown in the (001) plane, where lattice mismatch strains are compensated. The barrier layers have composition x = 0.68 (Hg0.32Cd0.68Te). The level of background charged impurities in the QW is taken to be stable and equal to 10^15 cm^-3. We have modeled numerically the energy spectra and intrinsic carrier concentrations in the n-type Hg0.32Cd0.68Te/Hg1-xCdxTe/Hg0.32Cd0.68Te quantum well (QW) in the semimetallic state. The three most important electron scattering mechanisms in bulk Hg1-xCdxTe at nitrogen temperature are longitudinal optical (LO) phonon scattering (inelastic), residual charged impurity scattering, and electron-hole scattering (the latter two elastic) [5]. To calculate the impact of these scattering mechanisms on the electron mobility in the QW, the linearized Boltzmann transport equation (lBTE) was solved iteratively [1]. Direct solution of the lBTE allows one to account accurately for the inelasticity of electron scattering and recovers how the carrier distribution function is perturbed by the electric field applied in the channel. The perturbed distribution function then yields the electron mobility. We have also estimated the contributions from other scattering mechanisms, involving acoustic phonons, interface roughness, alloy disorder, and fluctuations of composition and effective mass, which were found to be negligible for QW widths larger than 12 nm. Comparing the separate impacts of each scattering mechanism, one can see that the longitudinal optical phonon scattering is strongly suppressed because of the strong dynamical screening. For an intrinsic 20-nm-wide quantum well with composition x = 0, the electron mobility limited by LO phonon scattering is about 3.8 * 10^6 cm^2/(Vs), while for an n-doped quantum well of the same geometry with composition x = 0.06 (electron concentration 1.5 * 10^17 cm^-3), the electron mobility limited by LO phonon scattering is about 6.8 * 10^6 cm^2/(Vs).
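As a rough illustration of why the LO-phonon channel contributes little here, one can combine mobility contributions with Matthiessen's rule. Note that this additive-rate rule is only an approximation (the paper itself solves the lBTE directly, which handles inelastic scattering more accurately), and the numbers below combine the quoted LO-limited value with hypothetical values for the other channels.

```python
# Matthiessen's rule: the total scattering rate is approximately the sum
# of the individual rates, i.e., 1/mu_total = sum(1/mu_i). Approximate;
# mu_eh and mu_imp below are hypothetical illustrative values.
mu_lo = 3.8e6    # LO-phonon-limited mobility, cm^2/(Vs), from the text
mu_eh = 1.2e5    # hypothetical electron-hole-scattering-limited mobility
mu_imp = 3.0e5   # hypothetical charged-impurity-limited mobility

mu_total = 1.0 / sum(1.0 / mu for mu in (mu_lo, mu_eh, mu_imp))
print(f"total mobility ~ {mu_total:.2e} cm^2/(Vs)")
# The total is dominated by the smallest contribution (electron-hole
# scattering here), while the large LO-limited mobility barely matters.
```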
As these mobilities are much higher than the corresponding total mobilities, we can conclude that the main contributions to the total mobility come from charged impurity scattering and electron-hole scattering. The relative importance of these two scattering mechanisms can be established by comparing the hole and charged impurity concentrations. Our modeling [1] has shown that at liquid nitrogen temperature, high electron mobility can be obtained at high electron concentration in the well, which enhances 2DEG screening and decreases the hole concentration. Such an increase of the electron concentration can be achieved by delta-doping of the barriers or by applying a top-gate bias voltage. The growth of the mobility with increasing quantum well composition x can be explained by a lower concentration of heavy holes at the same value of the electron concentration. Since the concentration of holes in the QW is often higher than 10^15 cm^-3, the fabrication of high-purity samples with a low concentration of residual charged impurities (of the order of 10^14 cm^-3) will not improve the electron mobility substantially. Our modeling shows that, because of the high hole concentration, the purity of samples in many configurations is less important for obtaining high electron mobility than the electron concentration in the well. This conclusion could be important for reducing the fabrication costs of high-mobility HgCdTe heterostructures. Our results show that the increase of the electron concentration in the well enhances the screening of the 2D electron gas, decreases the hole concentration, and can ultimately lead to a high electron mobility at liquid nitrogen temperatures. The highest mobility values (up to 10^6 cm^2/(Vs)) can be achieved in Hg1-xCdxTe at x = 0.09, notably near the inversion point, at high electron concentration in the well. The increase of the electron concentration in the QW can be achieved in situ by delta-doping of the barriers or by applying a top-gate potential. Our modeling has shown that for low molar composition x, the concentration of holes in the well is high over a wide range of electron concentrations; in this case, the purity of samples does not significantly influence the electron mobility. If an HgCdTe QW is considered as the channel of a hot-electron bolometer, three main noise generation mechanisms are present: thermal (Johnson) noise, generation-recombination noise, and photon noise [6]. The total noise U_N is obtained by adding these three contributions in quadrature. In further calculations, we use a bandwidth Δf = 1 Hz around the central frequency. The value of the thermal noise can be found as [6] U_T = (4 k_B T R Δf)^(1/2), where R is the detector resistance, k_B is the Boltzmann constant, and T is the detector temperature. The generation-recombination noise can be found from [6]. For the case of the (001)-oriented unstrained QWs considered in this work, the effective mass of holes is more than an order of magnitude greater than the effective mass of electrons at the Fermi level; this allows the equation for this noise to be simplified. In the simplified expression, τ is the dominant lifetime, I_c is the current, n_0 and p_0 refer to the total numbers of electrons and holes in the channel at thermal equilibrium, and ω is the circular frequency, related to the frequency of the incident radiation f as ω = 2πf. Photon noise was calculated within the usual framework (see [6]), in which N is the photon flux from the T_b = 300 K background hemisphere, the detector radiation coupling η was taken to be 0.5, the spectral range of radiation wavelengths (λ_1 ≤ λ ≤ λ_2) falling on the bolometer was taken as ±15% around the central frequency of the incident radiation from the source, A is the antenna area, e is the electron charge, and c is the speed of light. The NEP is determined as the ratio of the total noise to the volt-watt sensitivity [7], where Δf = 1 Hz is the pass bandwidth.
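A minimal numerical sketch of this noise budget is given below; the thermal-noise line follows the standard Johnson formula quoted above, while the GR and photon noise values are placeholders standing in for the paper's (not fully reproduced here) expressions.

```python
# Noise budget of a bolometer channel: Johnson noise from the standard
# formula, plus placeholder GR and photon noise values; the
# contributions add in quadrature. Illustrative values only.
import math

k_B = 1.380649e-23   # J/K
T = 77.0             # K
R = 100.0            # Ohm, an illustrative channel resistance
df = 1.0             # Hz bandwidth

U_T = math.sqrt(4 * k_B * T * R * df)   # Johnson (thermal) noise, V
U_GR = 0.2 * U_T                        # placeholder: GR noise, suppressed
U_ph = 0.1 * U_T                        # placeholder: photon noise

U_N = math.sqrt(U_T**2 + U_GR**2 + U_ph**2)  # quadrature sum
print(f"U_T = {U_T:.3e} V, total U_N = {U_N:.3e} V")
```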
Results and Discussion We have applied our recent calculations of the electron mobility in the semimetal Hg1-xCdxTe QW [1] to model the resistivity and noise in such a QW when it is used as the channel of a THz-range hot-electron bolometer. In this case, the channel thickness corresponds to the QW width L. Using the lateral dimensions of the bolometer channel (width D_w and length D_l), one can calculate the channel resistivity ρ in the usual way as ρ = 1/(eμn), where n is the electron concentration, e is the electron charge, and μ is the electron mobility. The resistance is then given by R = ρD_l/(D_w L). Figures 1 and 2 present the results of such calculations for a channel with D_w = D_l = 50 μm. From Figs. 1 and 2, one can see that the main impact on the resistance of the sample comes from the electron concentration (Fig. 2); variation of the well thickness changes the channel resistance much less significantly. In Fig. 2, the leftmost points of each curve correspond to the intrinsic case and to a very high sample resistance. Increasing the electron concentration in the channel leads to a substantial decrease of its resistance. It is important to outline the physical reasons for this resistance behavior in order to provide technologists with the most efficient recipes for growing low-resistance MCT heterostructures. The high resistance (and low mobility) on the left-hand side of Fig. 2 is explained by the high hole concentration in the intrinsic case [1]. As electron-hole scattering is one of the most important scattering mechanisms, a high hole concentration substantially deteriorates the electron mobility and increases the sample resistance. Growth of the electron concentration (at a stable concentration of background charged impurities in the well) leads to a strong increase of the mobility and a corresponding reduction of the resistance due to two simultaneous processes: a decrease of the hole concentration and growth of the screening [1]. The channel resistance varies by more than two orders of magnitude depending on the electron concentration. Such a strong dependence could provide high sensitivity of the hot-electron bolometer, as small variations in the gate voltage should result in strong changes of the bolometer resistance. High dynamical tunability is an additional merit of the considered system as a THz detector. When choosing the QW thickness to obtain the optimal operating characteristics of a semimetal MCT QW channel, it could be important to avoid too-thin wells (about 10 nm and less), because in that case the channel resistance and noise could be increased by interface scattering [8]. One should also be careful with QWs at the band inversion point, as in that case an additional scattering mechanism (scattering on effective mass fluctuations) could suppress the mobility and increase the resistance and noise. Low-resistance Hg1-xCdxTe quantum wells can be obtained by n-type doping of the barriers or by applying a top-gate bias to the channel.
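The resistance estimate itself is a one-liner; a minimal sketch with illustrative parameter values (not the paper's computed mobilities) follows.

```python
# Channel resistance from rho = 1/(e*mu*n) and R = rho*D_l/(D_w*L).
# Parameter values are illustrative stand-ins, not the modeled results.
e = 1.602176634e-19   # C
mu = 2.0e5            # cm^2/(Vs), hypothetical electron mobility
n = 1.0e17            # cm^-3, electron concentration
D_l = D_w = 50e-4     # cm (50 um lateral dimensions)
L = 20e-7             # cm (20 nm well width)

rho = 1.0 / (e * mu * n)      # Ohm*cm
R = rho * D_l / (D_w * L)     # Ohm
print(f"rho = {rho:.3e} Ohm*cm, R = {R:.1f} Ohm")
```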
As noise is the sensitivity-limiting mechanism, modeling the dependence of the noise on the QW parameters is important. There are very few experimental data on the electron lifetime in semimetal HgCdTe structures; however, this time can be roughly estimated from [9][10][11] as 10^-10 s. For the noise and NEP estimates at an incident radiation frequency of 140 GHz, the current I_c was taken to be 0.4 mA, by analogy with the experimental work of the authors of [12]. The antenna area A for this frequency was estimated as A = λ^2/(4π) = 3.7 * 10^-3 cm^2. Using the numerical values of the total noise, we can roughly estimate the noise equivalent power (NEP) of the hot-electron bolometer with a semimetal QW, using the experimental data for the sensitivity S_V of a semiconducting channel from [12]. For an incident radiation frequency of 140 GHz, the sensitivity was about 20 V/W [12].
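For completeness, a minimal NEP estimate along these lines is shown below; it assumes the NEP is taken as the ratio of the total noise voltage to the volt-watt sensitivity, with the S_V = 20 V/W value quoted from [12] and a thermal-noise-dominated placeholder for the total noise.

```python
# Rough NEP estimate: total noise voltage divided by the volt-watt
# sensitivity. U_N below is a placeholder; S_V is the quoted 20 V/W.
import math

k_B, T, R, df = 1.380649e-23, 77.0, 100.0, 1.0
U_N = math.sqrt(4 * k_B * T * R * df)   # thermal-noise-dominated estimate, V
S_V = 20.0                              # V/W, sensitivity at 140 GHz from [12]

NEP = U_N / S_V                         # W for df = 1 Hz
print(f"NEP ~ {NEP:.2e} W/Hz^0.5")
```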
It is important to compare semimetal-type MCT QWs, used as a THz bolometer channel, with their direct competitors. Comparing these semimetal-type heterostructures with semiconductor MCT layers [6], one can note that the NEP is an order of magnitude lower for the semimetal structures. Also, semiconductor MCT structures are characterized by much longer recombination times, about 10^-7 s [12,13]. Thus, compared with semiconductor MCT structures, a semimetal QW could provide higher bolometer operation speed (which is estimated from the lifetime) and higher sensitivity. Comparing semimetal MCT QWs with graphene, which is also often considered a candidate for the channel of a THz bolometer, we can outline several important benefits of the structures considered in this article. First, graphene sheets are usually characterized by resistances of about 10 kOhm. This results in very strong noise, at least one order of magnitude higher. Second, the high resistance of graphene layers results in quite inefficient coupling of the bolometer channel with a planar metal antenna, and so leads to low sensitivity and high NEP. Third, the carrier energy relaxation process in graphene is very slow, as the only inelastic scattering mechanism present there is optical phonon scattering, and, in contrast to MCT, the energy of optical phonons in graphene (200 meV) is more than an order of magnitude greater than the mean thermal energy of electrons at T = 77 K. This reduces the efficiency of the 2DEG energy relaxation, which could deteriorate the detector operation speed and degrade the overall performance of the detector. In contrast to graphene, 2DEG energy relaxation in a semimetallic Hg1-xCdxTe QW is much faster due to the low energy of the LO phonon (17 meV in HgTe), since this energy and the mean electron energy are of the same order. Fast 2DEG energy relaxation could be important for increasing the detector operation speed. Fourth, graphene sheets are more sensitive to the presence of a substrate or top gate. While high mobility values (10^6 cm^2/(Vs)) are measured at room temperature in exfoliated graphene sheets [14], the presence of a substrate decreases the room-temperature mobility to measured values of about (1-2.3) * 10^4 cm^2/(Vs). This also makes carrier density control in graphene difficult. In this connection, MCT-based devices can provide an order of magnitude higher mobility than graphene and much easier carrier density control. Thus, semimetal HgCdTe QWs used as the channel of a THz hot-electron bolometer at T = 77 K could provide high operation speed combined with high sensitivity and low noise. Conclusions We have studied the dependence of the resistance and noise of a hot-electron bolometer channel, based on an n-type semimetallic Hg0.32Cd0.68Te/Hg1-xCdxTe/Hg0.32Cd0.68Te quantum well, on the electron concentration and QW parameters. The channel resistance was obtained from our modeling of the well mobility. It was shown that the HEB channel resistance varies by more than two orders of magnitude (from several tens of Ohm to about 10 kOhm) depending on the QW electron concentration, and by about one order of magnitude depending on the QW molar composition (in the inverted band structure range of x). Such a strong dependence could provide high volt-watt sensitivity of the hot-electron bolometer, as small variations in the gate voltage should result in strong changes of the bolometer resistance. High dynamical tunability brings another benefit to the considered system for THz detection. We show that the generation-recombination noise in semimetal MCT quantum wells is strongly suppressed compared with semiconducting HgCdTe samples [6]. This is caused by the extremely short carrier lifetime in the inverted band structure realized in the semimetallic case. Thermal noise is the main source of noise in these structures; photon noise and generation-recombination noise are usually significantly smaller. All three noises exhibit a strong dependence on the electron concentration, and their level goes down with increasing chemical composition parameter x of the QW. To obtain the optimal operating characteristics of a semimetal MCT QW channel for THz detectors, one should provide a high electron concentration in the QW and adjust the channel chemical composition x to be close to the band structure inversion point (just below the inversion point, to avoid activating the additional mechanism of scattering on effective mass fluctuations). Our estimates of the characteristics of semimetal-type HgCdTe THz hot-electron bolometers show their advantages over semiconductor-type MCT and graphene HEBs. We conclude that HgCdTe semimetallic QWs can demonstrate lower resistance, lower noise values, and higher operation speed, and can provide much more efficient coupling to planar antennas and much higher tunability in THz-range detector applications.
2016-05-15T23:10:41.583Z
2016-04-11T00:00:00.000
{ "year": 2016, "sha1": "4d484c2a985698fb897080b82a63d27d4ea0b9c2", "oa_license": "CCBY", "oa_url": "https://nanoscalereslett.springeropen.com/track/pdf/10.1186/s11671-016-1405-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4d484c2a985698fb897080b82a63d27d4ea0b9c2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
17207206
pes2o/s2orc
v3-fos-license
Vowel Perception in Listeners With Normal Hearing and in Listeners With Hearing Loss: A Preliminary Study Objectives To determine the influence of hearing loss on the perception of vowel slices. Methods Fourteen listeners aged 20-27 participated; ten (6 males) had hearing within normal limits and four (3 males) had moderate-severe sensorineural hearing loss (SNHL). Stimuli were six naturally produced words consisting of the vowels /i a u æ ɛ ʌ/ in a /b V b/ context. Each word was presented as a whole and in eight slices: the initial transition, one half and one fourth of the initial transition, the full central vowel, one half of the central vowel, the ending transition, and one half and one fourth of the ending transition. Each of the 54 stimuli was presented 10 times at 70 dB SPL (sound pressure level); listeners were asked to identify the word. For the listeners with SNHL, stimuli were shaped using signal processing software to mimic the gain provided by an appropriately fitting hearing aid. Results Listeners with SNHL had a steeper rate of decreasing vowel identification with decreasing slice duration than listeners with normal hearing, and the listeners with SNHL showed different patterns of vowel identification across vowels compared with listeners with normal hearing. Conclusion Abnormal temporal integration is likely affecting vowel identification for listeners with SNHL, which in turn affects the internal representation of vowels at different levels of the auditory system. INTRODUCTION Initial vowel perception theories suggested that listeners distinguish one vowel from another by comparing idealized steady-state target formant frequency values of the vowels [1]. Later work showed that this representation of vowel perception was inadequate: it could not explain how listeners accurately identify a vowel despite coarticulatory or surrounding phonetic influences, or despite the fact that the vowel may not have formants equaling the idealized target values [2,3]. Subsequent studies showed that presenting listeners with segments of the vowel (e.g., transitions, or only the vowel centers) allowed nearly as accurate identification as when the entire syllable is presented [4]. Thus, all aspects of the vowel in a syllable (formant values, formant trajectories throughout the vowel, duration, and phonetic context) play a role in accurate vowel identification. In addition, all the vowel segments must be in the proper order; if not, vowel identification decreases [5]. Studies that have examined vowel perception in listeners with sensorineural hearing loss (SNHL) have found that vowel misidentifications are usually caused by confusions of vowels having similar formant frequency values [6,7]. These misidentifications are magnified by reduced contrast in the internal representations of vowels caused by reduced frequency selectivity in listeners with SNHL [8]. Vowel perception is crucial for accurate word and sentence recognition, and thus is important to investigate in listeners with hearing loss [9]. Few studies, however, have examined temporal manipulations or coding with reference to vowel perception by listeners with SNHL. Those that have often used concurrent vowel paradigms in an attempt to mimic situations involving more than one speaker. Results from these studies show worse identification for listeners with SNHL than for listeners with normal hearing (NH), owing to reduced spectro-temporal processing [10].
Because vowel perception likely depends on several segments of a vowel occurring over time, however, there is a need to study how temporal manipulations may affect single-vowel perception in listeners with hearing loss, without the compounding effects of the concurrent vowel paradigm. Listeners likely perceive vowels based on an overall percept of several segments of a vowel changing over time. For a vowel in a consonant-vowel-consonant (CVC) syllable, these segments include the formant transitions to and from the vowel center. Formants and formant transitions are parts of the temporal fine structure of speech [11]; loss of tonotopicity and reduced across-fiber temporal coding would seem to preclude accurate internal representation of a formant transition [12,13]. One way to initially examine the effects of frequency and temporal coding in listeners with SNHL is to segment vowels in a CVC syllable into vowel centers (little formant frequency change) and into initial and final formant transition segments (fluid formant frequency change). For listeners with SNHL, it may be that spectro-temporal coding for even single vowels is problematic, and this may describe perceptual deficits more completely than focusing on frequency selectivity alone. The purpose of the current study was to determine the influence of hearing loss on the perception of vowel slices from naturally produced /b V b/ syllables. The slices were taken from the vowel center of the syllable, from the transition from the initial /b/ consonant to the vowel (initial transition), and from the transition from the vowel to the final consonant (final transition). Because of the perceptual and linguistic importance of vowel duration, slices of varying duration were taken from the vowel center and the transitions. Our aims in this preliminary study were to determine whether listeners with SNHL would show an abnormal pattern of decreasing identification accuracy with decreasing slice duration compared with listeners with NH, and whether there would be different patterns of identification across the listener groups for given vowels. Subjects A total of 14 participants aged 20-27 years participated in this study. All participants were paid upon completion of the experiment. All listeners had at least an eighth-grade education, were native speakers of English, and were able to use a computer mouse to label the vowel sounds they heard while wearing headphones. There were ten listeners (6 males; mean age, 21 years) with NH. These participants had hearing sensitivity less than or equal to 25 dB HL in the right ear [14]. They were recruited from the Department of Audiology & Speech Pathology, the university campus, and local churches and community organizations. There were four listeners (3 males; mean age, 23 years) with SNHL. These participants met the qualification of a mild-to-severe loss of 30-80 dB HL in the 250-4,000 Hz frequency range and provided a recent audiogram from within the past year. The listeners with SNHL all had longstanding hearing losses and were recruited from the University Audiology Hearing Clinic. Table 1 presents demographic and audiometric information for the four listeners with SNHL. All potential listeners filled out a case history form, and individuals with cognitive, neurological, or learning deficits were excluded. All participants provided written informed consent and were given a free hearing screening.
This study was approved by Institutional Review Board # IORG0000051. Stimuli The stimuli consisted of six consonant-vowel-consonant (CVC) syllables spoken by a middle-aged adult male. Six vowels were used, /i a u æ ɛ ʌ/, in a /b V b/ context, making the words 'beeb,' 'bob,' 'boob,' 'bab,' 'beb,' and 'bub.' Five different productions of each syllable were generated by the speaker, and the middle production was used as the experimental stimulus. Stimuli were recorded in a quiet room using a quality microphone (Spher-O-Dyne) held approximately 1 cm from the speaker's mouth. The microphone output was fed to a preamplifier (Model MA2, Tucker-Davis Technologies, Alachua, FL, USA), routed to a 16-bit A/D converter (Model DD1, Tucker-Davis Technologies), and sampled at a 12.5 kHz rate. The recordings were edited into individual tokens using a waveform manipulation software package (Adobe Audition ver. 1.5, Adobe Systems Inc., San Jose, CA, USA). The waveform editor in this software package was used to slice the individual syllables into eight approximate segments: the beginning transition, one half of the beginning transition, one fourth of the beginning transition, the full central steady-state vowel, one half of the central vowel, the ending transition, one half of the ending transition, and one fourth of the ending transition. This made a total of 54 stimuli. For the half vowel centers, the midpoint of the vowel center was selected, and then a quarter of the total vowel center duration was selected on either side of the midpoint. Spectrographically, an initial transition was defined as the beginning changes in formant frequency from the burst to the vowel steady state; similarly, a final transition was defined as the beginning change in formant frequency from the vowel steady state to the cessation of periodicity at the final stop closure. For the initial transition durations, the half transition comprised the beginning of the syllable to the transition midpoint; the quarter transition comprised the first quarter of the transition, starting at the beginning of the syllable. Likewise, for the final transition durations, the half transition comprised the midpoint of the transition to the end of periodicity; the quarter transition comprised the segment at the end of periodicity. All waveform cuts were made at zero crossings to prevent acoustic distortions; this also resulted in the above segments being of approximately one-half and one-fourth durations. Selection of where to cut the transitions was based upon the initial and final formant movement as seen in spectrographic representations of the stimuli (Fig. 1).
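Cutting at zero crossings is the standard way to avoid audible clicks at slice boundaries; a minimal sketch of this kind of segmentation (not the authors' Adobe Audition workflow) is shown below.

```python
# Slice a waveform between two approximate sample indices, snapping each
# cut to the nearest upward zero crossing to avoid click artifacts.
import numpy as np

def snap_to_zero_crossing(signal, index):
    # indices where the signal crosses zero going upward
    crossings = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]
    if crossings.size == 0:
        return index
    return int(crossings[np.argmin(np.abs(crossings - index))])

fs = 12500                             # Hz, sampling rate used in the study
t = np.arange(int(0.3 * fs)) / fs
wave = np.sin(2 * np.pi * 130 * t)     # stand-in for a recorded syllable

start = snap_to_zero_crossing(wave, int(0.05 * fs))
stop = snap_to_zero_crossing(wave, int(0.12 * fs))
segment = wave[start:stop]             # e.g., a "half transition" slice
print(start, stop, segment.shape)
```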
Procedure Listeners were tested individually in a sound-attenuated room. Stimuli were output by a Tucker-Davis DD1 D/A converter, low-pass filtered at 4.9 kHz (PF1, Tucker-Davis Technologies), routed to a headphone buffer (HB, Tucker-Davis Technologies), and then sent to Sennheiser HD265 headphones inside an Industrial Acoustics Company (Winchester, UK) sound booth. Stimuli were presented via the headphones to the right ear of the participants; the stimuli were presented in 10 random orders for a total of 540 presentations. Stimuli were presented at 70 dB SPL (sound pressure level) for all listeners. For the listeners with SNHL, all stimuli were shaped to mimic the gains of an appropriately fit hearing aid. First, the audiometric thresholds from the right ear of a listener with SNHL were logged into a Veri-Fit system. Target gain by audiometric frequency was then selected using the NAL-NL1 formula. These target gain values were then used to develop a software gain function (using Adobe Audition ver. 1.5) for the given listener; this gain function was then applied to all 54 stimuli. In this way, each of the listeners with hearing loss received listener-specific stimuli. All listeners used a computer mouse to select the corresponding word on the screen. To verify that the stimuli were at a comfortable listening level and to familiarize the participants with the stimuli, participants were given three practice runs using the six whole syllables and were asked to identify the word they heard by using the computer mouse to select the corresponding word on the computer screen. On each trial run, listeners were given correct-answer feedback by a green flash on the computer screen; a red flash indicated that the response was incorrect. All listeners correctly identified the whole syllables. RESULTS For the analysis of listener responses, portions of the stimuli were grouped according to initial transition slices, final transition slices, and whole syllable/vowel center slices. Whole syllable, vowel steady-state center and half of vowel center Fig. 2 shows the mean identification results for the whole syllable, the vowel center slice only, and the half vowel center slice, collapsed across the six vowels. A repeated-measures two-way analysis of variance (ANOVA) was computed on the data shown in Fig. 2 using slice or duration (whole syllable, vowel center, half vowel center) as the within-subject variable, group classification (NH or hearing loss) as the between-subject variable, and mean number of correct identifications as the dependent variable. To guard against violations of sphericity, Huynh-Feldt corrections were used in this and succeeding analyses, as were computations of partial η² to determine effect size.
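A minimal sketch of this kind of mixed (between x within) ANOVA is shown below; it uses the pingouin package and randomly generated scores, so the factor levels and values are placeholders rather than the study's data.

```python
# Mixed ANOVA: 'slice' as within-subject factor, 'group' between-subject,
# on synthetic identification scores. Illustrative only.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
rows = []
for subj in range(14):
    group = "NH" if subj < 10 else "SNHL"
    for slice_ in ["whole", "center", "half_center"]:
        base = 9.5 if group == "NH" else 8.0
        penalty = {"whole": 0.0, "center": 0.5, "half_center": 1.5}[slice_]
        rows.append({"id": subj, "group": group, "slice": slice_,
                     "score": base - penalty + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="score", within="slice",
                     between="group", subject="id")
print(aov[["Source", "F", "p-unc", "np2"]])
```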
A further analysis added vowel identity as a within-subject factor. The three-way interaction between vowel, slice, and listener group shows that there are different response patterns for different slice durations of different vowels depending on the listener group. Listeners with NH performed consistently across all stimuli except for the vowel /æ/, for which performance on average became worse with each shortening of the stimulus, approximating that of the listeners with hearing loss at the half center duration. The listeners with SNHL, however, performed worse than the listeners with NH in several conditions. In particular, these listeners' performance fell below 85% for all three stimuli (whole syllable, vowel center, half center) containing /æ/ and /ʌ/, and for the half center stimuli for the vowels /u/ and /a/. What is surprising is that the listeners with hearing loss had correctly identified the whole syllables before the experimental data were collected, yet their identification of whole syllables containing /æ/ and /ʌ/ was lower than expected in the experimental data. Taken together, these patterns show why a three-way interaction occurred: there are different response patterns for different durations of different vowels depending on the listener group. Finally, to determine whether the decrease in performance by the listeners with hearing loss was greater than that experienced by the listeners with NH, differences were computed for each listener as the vowel slices became shorter. That is, the difference in identification performance was computed between the whole syllable and the vowel center, between the vowel center and the half vowel center, and between the whole syllable and the half vowel center. The mean differences were then used as the dependent variable in a three-way repeated-measures ANOVA using vowel identity, slice or duration, and group as the factors. To help explore the significant three-way interaction, post hoc pairwise comparisons were performed; these showed a significant difference between groups for the difference from the whole syllable to the half center (P=0.002), at an alpha level of 0.01. This rate-of-change difference suggests a difference in the temporal integration of information between 315 ms (the effective mean duration of the whole vowels in the syllables) and 77 ms (the mean duration of the half vowel centers). Initial transitions Noteworthy are the significant vowel×duration interaction and the significant main effect of listener group. The vowel×duration interaction may be explained by the fact that some vowels simply yielded better performance than the others, even at the quarter transition duration. Finally, to determine whether the decrease in performance by the listeners with hearing loss was greater than that experienced by the listeners with NH, differences were computed for each listener as the transition slices became shorter. The results of this three-way ANOVA showed a significant main effect of vowel (F[5,60]). Final transitions Fig. 4 shows the mean identification results for the final transition, the ending half of the final transition, and the ending quarter of the final transition, collapsed across vowels. A two-way ANOVA showed a significant main effect of slice duration (F[2,24]=149.188, P<0.001, η²=0.926) and a significant main effect of group (F[1,12]=34.444, P<0.001, η²=0.742). There was also a significant duration×group interaction (F[2,24]=7.352, P=0.003, η²=0.380). To explore the duration×group interaction, post hoc pairwise comparisons were computed and showed a significant difference between groups for the whole final transition (P≤0.001) and for the half transition (P<0.001), using an alpha level of 0.01. To determine whether vowel identity influenced identification patterns, a three-way ANOVA was computed, this time adding vowel identity as an additional within-subject factor. These results showed a significant main effect of vowel. Noteworthy again is a significant vowel×duration×listener group interaction, meaning that there are different response patterns for different durations of different vowels depending on the listener group. The listeners with NH had identification performance below 85% for the vowel /æ/ for all final transition durations, for /ɛ/ and /ʌ/ for the half-transition stimuli, and for all vowels for the approximate quarter-transition stimuli. The listeners with hearing loss had identification performance below 85% for /i/, /æ/, /u/, and /ʌ/ for the whole final transitions, and for all vowels for the half- and quarter-transition slices. To explore this interaction further, post hoc pairwise comparisons were computed and showed a significant difference between groups for the whole final transition (P<0.001) and for the half final transition (P<0.001), using an alpha level of 0.01. Finally, to determine whether the decrease in performance by the listeners with hearing loss was greater than that experienced by the listeners with NH, differences were computed for each listener as the transition slices became shorter; this analysis showed a significant group×duration interaction (F=5.463, P<0.001, η²=0.313).
To explore the significant group×duration interaction, post hoc pairwise comparisons were performed; these showed a significant difference between groups for the difference from the half to the quarter transition (P=0.003), at an alpha level of 0.01. This rate-of-change difference suggests a difference in the temporal integration of information between 41 ms (the mean duration of the half final transition) and 18 ms (the mean duration of the quarter final transition). DISCUSSION Our aims were to determine whether listeners with SNHL would show an abnormal pattern of decreasing identification accuracy with decreasing slice duration compared with listeners with NH, and whether there would be different patterns of identification across the listener groups for given vowels. Listeners were asked to identify vowels from slices of vowel centers, initial transitions, and final transitions. The results showed that listeners with SNHL had a steeper rate of decreasing vowel identification with decreasing slice duration than listeners with NH, and that the listeners with hearing loss showed different patterns of vowel identification across vowels compared with listeners with NH. These findings are discussed further below. Whole syllable, vowel steady-state center and half of vowel center The results from the current study for the whole syllables are in agreement with earlier studies in that the listeners with hearing loss appear to have vowel misidentifications arising from reduced contrast in the internal representations of vowels via reduced frequency selectivity [8]. As expected, there were differences between the listener groups as the stimuli were shortened from whole syllables to only vowel centers, and then to only half of the vowel centers. The rate of performance decrement was significantly greater for the listeners with hearing loss going from the whole syllable to the half center slices. This time frame covered several hundred milliseconds. This result implies that the temporal sampling of the stimulus may be insufficient and/or distorted by the hearing loss. A potential mechanism for multiple looks in perceiving speech sounds would involve a comparison of a phoneme template stored in long-term memory with repetitive sampling of the incoming stimulus in short-term memory [15][16][17]. Degraded sampling of some form may then harm the development of either representation in short-term or long-term memory. Results from the current study do not allow further speculation on the actual degradation caused by the hearing loss. All the analyses including vowel identity as a factor showed vowel identity to be significant. This suggests that bottom-up peripheral processing of vowels cannot completely explain the current results. That is, there were bias effects, as some vowels were simply recognized better than others. There were also vowel×group interactions, showing that identification patterns differed between listeners with NH and listeners with hearing loss. These results suggest that listeners with hearing loss may have vowel perceptual space demarcations different from those of listeners with NH [18]. Since all the listeners with hearing loss in the current study had hearing losses that were longstanding (and likely congenital), it is unlikely that these listeners had much, if any, significant time of NH, or time in which vowel perceptual space was unaffected by the hearing loss.
Across all stimuli, the cardinal vowels /i a u/ consistently yielded better performance for the listeners with NH, but not always for the listeners with hearing loss. This is further evidence suggesting problematic vowel perceptual space in listeners with hearing loss. Other reports have posited that peripheral representation may not be sufficient for explaining vowel identification, particularly for concurrent vowel identification [19][20][21][22]. It may be that hearing loss not only affects peripheral representation, but also more central representation of vowels. It is possible that some differences of performance by listeners with hearing loss across vowels in the current study arose from speaker-idiosyncratic productions [23], but, given the above results, a more general explanation may be altered vowel perceptual space or altered vowel templates.

Initial and final transitions
Results from the current study on transitional segments show that listeners with SNHL had, on average, difficulty with using transitions to identify vowels. Previous research has shown a similar difficulty by listeners with hearing loss in using transitions for identifying stop consonants [24]. If listeners perceive vowels based on an overall perception of several segments of a vowel changing over time, then loss of tonotopicity and reduced across-fiber temporal coding may be preventing accurate internal representation of a formant transition [12,13]. For both initial and final transition slices, significant group×duration interactions using mean differences or rates of change show that the listeners with hearing loss had performance decrements greater than those of listeners with NH. For the final transition slices, the significant decrement was between approximately 40 and 20 ms. Even though this is a different time scale from that listed for the vowel center slices, it would still likely involve some temporal integration of information, even if the time scales may suggest different physiologic mechanisms of integration. For listeners with SNHL, it may be that spectro-temporal coding for even single vowels is problematic, and this may more completely describe perceptual deficits than focusing on frequency selectivity alone. For instance, in the current study, the relatively poor performance by the listeners with hearing loss on the whole syllable /ae/ and /ʌ/ stimuli in the experimental condition also coincided with relatively poor performance in using transition segments to accurately identify these vowels. For both listeners with SNHL and listeners with NH, the approximate quarter-duration transitions probably did not have enough frequency extent to enable correct identification exceeding 50% [25]. Results from the current study show that listeners with SNHL may have both peripheral coding and more central acoustic-phonetic mapping difficulties. It is difficult to say how exactly one might influence the other, or how they may work synergistically to adversely affect speech perception. Future studies using models of speech that include both bottom-up and top-down processes may prove useful in gaining a more complete picture of auditory processing. Such models include the distributed cohort model, having a neural network influenced by both processes [26], and the TRACE model, in which top-down processing affects bottom-up processing [27].
There are limitations to the current preliminary study: a small listener sample size, a stimulus set generated from only one speaker, and not all vowels of English represented. Regarding the small sample size, however, it must also be stated that statistically significant group factor results are evidence of sufficient statistical power; having a larger sample size would not make the results more significant. Furthermore, the large effect sizes for group accompanying the statistically significant analyses suggest that the results would likely be similar for similar subjects even with a larger sample size. The occurrence of numerous higher-order statistical interactions with such a small sample size would again suggest that the results would be similar with similar subjects even with a larger sample size. A strength of the current study is having listeners of similar ages (20s) in both groups, thus controlling for age-related effects. Thus, this preliminary study into vowel perception by listeners with SNHL shows that subsequent studies may provide much more understanding of how SNHL affects speech perception. In conclusion, our aims were to determine whether listeners with SNHL would show an abnormal pattern of decreasing identification accuracy with decreasing slice duration as compared to listeners with NH, and whether given vowels would be more difficult to identify than others. The results did show that listeners with SNHL had a steeper rate of decreasing vowel identification with decreasing slice duration as compared to listeners with NH, suggesting abnormal temporal integration in the listeners with hearing loss. Listeners with hearing loss also showed different patterns of vowel identification across vowels when compared to listeners with NH, indicating vowel perceptual demarcation different from that of listeners with NH. Further research may show how these two effects interact with one another to influence the vowel perception of listeners with SNHL.
A Study on Immunosuppressant Induced Dyslipidemia and Associated Chronic Graft Rejection in Renal Transplant Recipients

Dyslipidemia is a common complication of renal transplantation, referred to as new onset dyslipidemia. Immunosuppressants, in particular the calcineurin inhibitor cyclosporine and others, are known to cause dyslipidemia through non-competitive inhibition of sterol 27-hydroxylase (CYP27A1). On the other hand, dyslipidemia has been found to be associated with higher graft rejection due to a decrease in immunosuppressant activity and direct graft destruction. Hence this study was designed to analyze the effect of dyslipidemia on chronic allograft rejection. Clinical and biochemistry reports of 142 renal transplant recipients were collected in designed case report forms. All statistical analysis was carried out using the International Business Machine (IBM) Statistical Package for Social Sciences (SPSS) 17.0. Immunosuppressive therapy, comorbid diabetes and hypertension, age and serum creatinine were found to be the common predictors of dyslipidemia, whereas dyslipidemia, age and gender were found to be predictors of graft destruction and loss (P>0.05). The incidence of graft loss was found to be higher in dyslipidemic patients (P<0.05). Dyslipidemia is associated with a higher incidence of graft loss, and hence renal transplant recipients should be effectively managed with dose-intense statin therapy or other safer immunosuppressants. This could increase graft survival rates.

INTRODUCTION
Renal transplantation is the surgical placement and vascular integration of a human kidney from a living or cadaveric donor into a patient who has end stage renal disease (ESRD). It is the only treatment modality that restores reasonable renal function in ESRD patients (Wallace, 1998). Though renal function is restored to some extent, renal transplantation carries various short term and chronic complications, the most important being cardiovascular and post-transplant metabolic syndrome (PTMS) (Stephanie et al., 2009; Oruc et al., 2013). Cardiovascular complications remain the major cause of morbidity and mortality in renal transplant recipients (Kasiske et al., 1996). These long term complications are not direct effects of grafting but are caused by the dose-intense immunosuppressant and long term steroid therapy. The US National Cholesterol Education Program Adult Treatment Panel III defines metabolic syndrome as the presence of dyslipidemia, obesity, glucose intolerance and hypertension. Metabolic syndrome is characterized by the presence of several metabolic anomalies associated with increased risk of cardiovascular mortality (Scott et al., 2001; Shadab and Richard, 2012). Dyslipidemia is one of the common PTMS complications and is referred to as new onset dyslipidemia after transplantation. It is characterized by an increase in total cholesterol (TC), low density lipoprotein (LDL-C), very low density lipoprotein (VLDL-C) and triglycerides (TGL) and/or a decrease in high density lipoprotein levels (HDL-C) (Deleuze et al., 2006). Most of the immunosuppressants, in particular cyclosporine, used in renal transplant recipients to prevent immune sensitization and graft rejection alter serum lipid levels (Sasa and Gerhard, 2012).
Being metabolized by the cytochrome P450 pathway, cyclosporine non-competitively inhibits sterol 27-hydroxylase (CYP27A1) and therefore decreases the production of 27-hydroxycholesterol, which in turn is a potent inhibitor of 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase, the rate-limiting enzyme of cholesterol biosynthesis. In addition to CYP27A1 inhibition, cyclosporine also inhibits lipoprotein lipase and thereby increases serum triglyceride levels (Ann et al., 2007 and Tory et al., 2008). Similar to calcineurin inhibitors, patients treated with mammalian target of rapamycin (mTOR) inhibitors such as sirolimus also display impaired lipid metabolism. However, dyslipidemia associated with sirolimus is not completely due to CYP27A1 inhibition as with cyclosporine (Morrisett et al., 2003). Sirolimus, in addition to CYP27A1 inhibition, also decreases LDL-C clearance by inhibiting the transcription of the LDL receptor gene in hepatic cells (Ma et al., 2007). Various studies have shown dyslipidemia to be associated with graft rejection. However, many studies have not examined the effect of immunosuppressant-induced dyslipidemia on graft rejection. Hyperlipidemia can affect chronic allograft function indirectly through its effects on vessels and directly through its specific renal destructive effects. Mechanisms of hyperlipidemia-induced nephrotoxicity include the following: glomerulosclerosis and chronic interstitial nephritis caused by the oxidant stress put forth by generation of reactive oxygen species (Fuiano et al., 2005), and progressive renal damage provoked by monocyte infiltration and mesangial proliferation through increased production of growth-promoting cytokines (Keane et al., 1993). Another interesting mechanism behind dyslipidemia-associated graft rejection is the decrease in the immunosuppressive activity of cyclosporine with increasing serum lipids, which ultimately may lead to immune sensitization. Dyslipidemia decreases the intracellular cyclosporine concentration available to inhibit the immune activation process and thereby contributes to chronic allograft loss (Pozzetto et al., 2008). Thus dyslipidemia induced by immunosuppressants tends to decrease the effect of the immunosuppressant by decreasing its availability, leading to graft loss.

MATERIALS AND METHODS
This retrospective observational study was carried out in the nephrology department of a multispecialty hospital for a period of 2 months, from January 2015 to March 2015. Consent from the hospital authorities and nephrologists was obtained before accessing patient medical records. The study protocol was approved by the institutional ethics committee of Vels University. Clinical and biochemistry reports of 142 renal transplant recipients who visited the hospital in the past one year for any of the following reasons were recorded: hemodialysis, routine checkup as instructed by the nephrologist, transplant kidney biopsy, and other comorbidities. Clinical data were recorded from the patient case sheets stored in medical records, whereas biochemical parameters were recorded from the laboratory database. A case report form was designed for recording the clinical and biochemistry data of renal transplant recipients as per study requirements.

Inclusion Criterion
The study included chronic kidney disease or ESRD patients of both genders who had undergone unilateral or bilateral renal transplantation.
Exclusion Criterion
Chronic kidney disease or ESRD patients on renal replacement therapies other than transplantation were excluded from the study. Patient case sheets with incomplete clinical data were not considered for inclusion in the study. For the graft rejection dependency analysis, acute rejection episodes were excluded.

Statistical Analysis
Comparisons between two groups were analyzed by means of Student's t test to determine the presence or absence of a statistically significant difference. Contingency and relationship were analyzed using Fisher's exact test, whereas incidence rates between two sets of variables were analyzed by odds ratio and relative risk quantification. Wherever computed, a P value of less than 0.05 was considered significant, since the confidence interval was maintained at 95%. Predictors of dyslipidemia development and graft rejection were determined by multiple linear regression analysis. All statistical analyses were performed using the IBM SPSS 17 statistics package and GraphPad Prism 6.0.

RESULTS
The study population for the retrospective analysis included chronic kidney disease (CKD) patients who had undergone unilateral or bilateral renal transplantation, were receiving immunosuppressants, and were on regular visits to the hospital for any of the following reasons: hemodialysis, routine checkup at regular intervals as instructed by the nephrologist, biopsy of the transplanted kidney, or any other comorbidity. The age-wise distribution of patients considered for the study is shown in Table 1. 67.7% of patients were male, whereas 32.3% were female. The incidence of single and combined immunosuppressant usage between genders was found to be almost similar, with no statistically significant difference (P value = 0.1671, odds ratio = 0.2148), and is shown in Table 2. The patients received concomitant steroid therapy with prednisolone (80.2%), methylprednisolone (11.9%) and hydrocortisone (1.4%); 6.3% of patients did not receive concomitant steroid therapy. The distribution of patients on the basis of immunosuppressants received is shown in Table 3. The glomerular filtration rate is an endogenous marker of renal function that requires 24-hour urine collection. However, GFR can be estimated using the modification of diet in renal disease (MDRD) formula from the serum creatinine and age of the patient. Renal transplant recipients in the study have been segregated into different GFR quartiles, as shown in Table 4. The comorbidities observed in the renal transplant recipients taken for the study are shown in Figure 1. The various comorbidities observed can be directly attributed to immunosuppressants or to post-transplant causes. Due to its various sub-types, dyslipidemia is listed separately in Table 5. Hypertension was the most common comorbidity, due to increased electrolyte retention and altered renal dynamics in transplant recipients. Dyslipidemia can be an increase in total cholesterol, LDL, VLDL or triglycerides, or a decrease in serum HDL levels. The incidence of dyslipidemia between genders was determined in the renal transplant recipients included in the study. The results are graphically represented in Figure 2.
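As a sketch of the 2×2 contingency analyses described above, the snippet below runs Fisher's exact test and reports the odds ratio with SciPy; the cell counts are illustrative placeholders, not the study's actual cross-tabulation.

```python
import numpy as np
from scipy import stats

# Sketch of a 2x2 contingency analysis (Fisher's exact test plus odds ratio);
# the cell counts here are illustrative, not the study's table.
#                  dyslipidemia   no dyslipidemia
table = np.array([[68, 28],       # males
                  [26, 20]])      # females
odds_ratio, p_value = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.3f}, P = {p_value:.4f}")  # significant if P < 0.05
```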
Females possess less risk of developing dyslipidemia because of the presence of estrogen hormones. Steroids possess a well established potential to cause post-transplant metabolic syndrome, including dyslipidemia. Hence, the incidence of dyslipidemia was compared between patients on and off steroid therapy using Fisher's exact test at a 95% confidence interval, and the result is graphically represented in Figure 3. Stepwise multiple linear regression analysis was done to determine the predictors of dyslipidemia in renal transplant recipients on immunosuppressive therapy. The following independent variables were regressed against each individual lipid parameter, taken as the dependent variable: age, gender, immunosuppressive regimen and its dose, concomitant steroid therapy and steroid dose, systolic and diastolic blood pressures, comorbid diabetes mellitus, serum creatinine, number of post-transplant years and dialysis history. Age, immunosuppressant and steroid dose, blood pressures, serum creatinine and number of post-transplant years were given as continuous numerical predictors, whereas gender, regimen, concomitant steroid, comorbid diabetes mellitus and dialysis history were given as categorical predictors. The results of the MLR analysis are shown in Table 6, and the predictors of each regression model are listed below the table. Transplant kidney biopsy reports of patients with abnormal patterns were correlated with their serum lipid profiles. The abnormal graft patterns observed in renal transplant recipients are shown in Table 7. The incidence of graft disturbances between dyslipidemic and non-dyslipidemic patients is shown in Table 8. Almost all patients with stable functioning grafts and comorbid dyslipidemia had adequate systemic control of lipids with concomitant statin therapy. Logistic linear regression models were used to determine the effect of dyslipidemia and other predictors on chronic allograft rejection and destruction. The results are given in Table 9, and the coefficients of the individual components of the built regression model are summarized in Table 10.

DISCUSSION
61.2% of patients received oral cyclosporine, 33.8% received oral tacrolimus, 5.6% received oral sirolimus, and 1.4% received oral azathioprine, whereas 4.22% of patients received rituximab intravenous infusion, at median doses of 100 mg, 2.5 mg, 1 mg, 62.5 mg and 500 mg respectively. Since calcineurin inhibitors are the most common immunosuppressants of choice and possess a well established dyslipidemic potential, the incidence of cyclosporine usage between genders was determined. For this purpose, patients were segregated into cyclosporine and non-cyclosporine receiving groups, which were sub-segregated on a gender basis. A statistically significant difference in the incidence of choice for calcineurin inhibitors exists between genders, with a relative risk of 0.7395 (P value = 0.0499, odds ratio = 0.4582). The immunosuppressive treatment given complies with the KDIGO clinical practice guideline for the care of kidney transplant recipients (Chapman, 2010).
The modification of diet in renal disease (MDRD) formula provides a means of estimating the glomerular filtration rate without 24-hour urine collection. The abbreviated MDRD equation is given below:

GFR (mL/min/1.73 m²) = 186 × (SCr)^(−1.154) × (Age)^(−0.203) × 0.742 [if female] × 1.210 [if African-American] (Levey et al., 2000).

The various comorbidities observed can be directly attributed to immunosuppressants or to post-transplant causes. Cardiovascular complications remain the major cause of morbidity and mortality in renal transplant recipients. The US National Cholesterol Education Program Adult Treatment Panel III defines metabolic syndrome as the presence of dyslipidemia, obesity, glucose intolerance and hypertension. Metabolic syndromes such as post-transplant diabetes mellitus (NODAT), dyslipidemia, hypertension, etc., were observed in these patients (Scott et al., 2001). Opportunistic infections are more common in renal transplant recipients due to their immunocompromised state. In our study, the following opportunistic infections were observed: 9.15% had urinary tract infections, 9.86% had hepatitis B and C, 8.45% had pulmonary tuberculosis, 2.11% had esophageal candidiasis, 2.11% had cytomegalovirus infection due to extensive mycophenolate usage (Hambach et al., 2002), 2.11% had lower respiratory tract infection, 1.41% had pneumonia, and 0.70% had meningeal infection. Other post-transplant metabolic syndromes observed include diabetes mellitus in 46.47% and hypertension in 81.69%. 93.6% of patients displayed some form of dyslipidemia, of which 70.6% were male and 29.3% were female, thus indicating a higher incidence of dyslipidemia in males. Thus, a statistically significant difference occurs between genders in developing post-transplant dyslipidemia, coinciding with the fact that the chance of developing dyslipidemia in females is lower since they are protected by natural estrogen hormones (P value = 0.0054) (Paranjape, 2005). Though dyslipidemia was observed in the majority of the study population (93.6%), not all patients displayed elevations in all forms of lipids. Only 44.3% of patients had elevated TC, 50.7% had elevated LDL-C, 57.7% had elevated VLDL, 58.4% had decreased HDL-C, whereas 76% had hypertriglyceridemia. This is not on par with the results of previous studies, which report LDL-C to be the major elevated lipid. However, the deviations in our study could be attributed to concurrent statin therapy in these patients (Lentine and Brennan, 2004; Lisik et al., 2007). Steroids can cause diverse metabolic syndromes on chronic administration. However, they are used in combination with immunosuppressants to prevent graft rejection (Helal and Chan, 2011). A comparison of the incidence of dyslipidemia in patients receiving and not receiving steroids showed no statistically significant difference between the two groups at a confidence interval of 95% (P value > 0.05, relative risk = 0.9323), suggesting that steroids are not the only risk factor for the development of dyslipidemia in renal transplant recipients.
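The abbreviated MDRD estimate above translates directly into code; a minimal sketch follows (the example inputs are hypothetical):

```python
def mdrd_gfr(scr_mg_dl, age_years, female=False, african_american=False):
    """Abbreviated MDRD estimate of GFR in mL/min/1.73 m^2 (Levey et al., 2000)."""
    gfr = 186.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        gfr *= 0.742
    if african_american:
        gfr *= 1.210
    return gfr

# Hypothetical example: a 50-year-old female recipient, serum creatinine 1.4 mg/dL
print(f"eGFR = {mdrd_gfr(1.4, 50, female=True):.1f} mL/min/1.73 m^2")
```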
The dyslipidemic potentials of cyclosporine and tacrolimus vary to a great extent. Various studies have comparatively assessed the extent of dyslipidemia associated with cyclosporine and tacrolimus and have shown cyclosporine to cause dyslipidemia more than tacrolimus (Plosker and Foster, 2000 and Henry, 1999). Cyclosporine-induced dyslipidemia is due to a direct non-competitive inhibition of sterol 27-hydroxylase (CYP27A1) and a decrease in the production of 27-hydroxycholesterol, which in turn is a potent inhibitor of 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase, the rate-limiting enzyme of cholesterol biosynthesis. In addition to CYP27A1 inhibition, cyclosporine also inhibits lipoprotein lipase and thereby increases serum triglyceride levels (Ann et al., 2007). Similar to calcineurin inhibitors, patients treated with mammalian target of rapamycin (mTOR) inhibitors such as sirolimus also display impaired lipid metabolism. However, dyslipidemia associated with sirolimus is not completely due to CYP27A1 inhibition as with cyclosporine (Morrisett et al., 2003). Sirolimus, in addition to CYP27A1 inhibition, also decreases LDL-C clearance by inhibiting the transcription of the LDL receptor gene in hepatic cells (Ma et al., 2007). The mean TC and LDL levels were significantly higher in the cyclosporine group (P value < 0.0001), and HDL was also high in the cyclosporine group. However, low HDL levels have been reported in cyclosporine-treated patients, and a decrease in HDL levels predisposes the patient to atherogenic risk (Gerry et al., 2001). The elevated HDL levels in our study could be attributed to concomitant steroid usage. No significant difference was found between the HDL and VLDL of the two groups (P value > 0.05). Triglycerides were found to be significantly higher in the cyclosporine-treated group (P value = 0.0012). A stepwise multiple linear regression analysis was carried out to determine the predictors of dyslipidemia in renal transplant recipients. The dose of the immunosuppressant and concomitant steroid therapy were the continuous predictors of hypercholesterolemia, whereas gender was the categorical predictor, with males being more prone (r² = 0.17). However, the immunosuppressant, concomitant steroid therapy, age and comorbid systolic hypertension were the predictors of LDL cholesterol in the studied population (r² = 0.205).
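A minimal sketch of such a regression of a lipid parameter on candidate predictors, using statsmodels; the simulated data, variable names and effect sizes are assumptions for illustration only, not the study's records.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for regressing one lipid parameter on candidate
# predictors; names, effects and noise are assumptions, not study data.
rng = np.random.default_rng(1)
n = 142
df = pd.DataFrame({
    "age": rng.normal(45, 12, n),
    "male": rng.integers(0, 2, n),
    "immunosuppressant_dose": rng.normal(100, 20, n),  # e.g. cyclosporine, mg
    "steroid_dose": rng.normal(10, 3, n),              # prednisolone, mg
})
df["total_cholesterol"] = (150 + 0.3 * df["immunosuppressant_dose"]
                           + 1.5 * df["steroid_dose"] + 8 * df["male"]
                           + rng.normal(0, 20, n))

X = sm.add_constant(df[["age", "male", "immunosuppressant_dose", "steroid_dose"]])
fit = sm.OLS(df["total_cholesterol"], X).fit()
print(fit.summary().tables[1])  # keep significant terms, as in stepwise selection
```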
We analyzed the transplant kidney biopsy reports of patients with abnormal patterns. Out of the 142 patients, 23.9% had undergone biopsy examination of the transplanted kidney. Of these, 32.5% were observed to have abnormal patterns of the transplanted kidney, i.e., graft abnormality. The presence of graft abnormality was correlated with dyslipidemia, and it was observed that of the 32.5% of patients with chronic graft abnormality, 90.9% had uncontrolled dyslipidemia; 72.7% of them were already on statin therapy. Thus it is clearly evident that patients with dyslipidemia controlled by statin therapy have a lesser chance of developing graft dysfunction or abnormality when compared to those with uncontrolled dyslipidemia. However, in order to statistically validate these findings by determining the specific predictors of graft rejection and dysfunction, multiple linear regression analysis was carried out with the following covariates: lipid profile, age, serum creatinine, systolic and diastolic blood pressures, presence of comorbid diabetes mellitus, and number of years after transplantation. Thus dyslipidemia causes allograft injury, which may progress to rejection of the graft. The patterns of graft destruction observed in our study include glomerulonephropathy, focal glomerulosclerosis, interstitial nephritis, fibrosis and tubular atrophy. Such patterns of destruction have been previously described in various studies (Colvin, 2007). Though clear mechanisms do not exist for the relation between dyslipidemia and graft loss, various studies have demonstrated certain mechanisms with positive correlations. Based on the existing literature, dyslipidemia-induced graft destruction can be categorized into two types: direct and indirect. Direct mechanisms include the non-specific vascular and specific renal effects. Non-specific vascular effects include the narrowing and thickening of the interlobular and arcuate arteries, ultimately leading to renal ischemia and graft loss. Specific renal effects include oxidant stress induced by the hyperlipidemia-associated enhanced generation of reactive oxygen species (ROS), which leads to glomerulosclerosis and chronic tubulo-interstitial disease, reduction in normal plasma flow through endothelial dysfunction, and stimulation of monocyte infiltration (Stephan, 2002).
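Below is a small sketch of a logistic model of chronic rejection on lipid and clinical covariates, in the spirit of the predictor analysis above; the simulated data, sample size and coefficient values are purely illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

# Simulated logistic model of chronic rejection on lipid/clinical covariates;
# sample size, effects and covariates are illustrative assumptions only.
rng = np.random.default_rng(2)
n = 135  # e.g. records after excluding acute rejection episodes
X = np.column_stack([
    rng.normal(180, 40, n),  # triglycerides, mg/dL
    rng.normal(190, 35, n),  # total cholesterol, mg/dL
    rng.normal(45, 12, n),   # age, years
])
logit_p = -12 + 0.02 * X[:, 0] + 0.02 * X[:, 1] + 0.03 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(fit.params)  # coefficients analogous to the betas reported in Table 10
```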
The indirect mechanism of graft destruction involves a decrease in the intracellular cyclosporine concentration available to inhibit the immune activation process, which thereby contributes to chronic allograft loss. Thus dyslipidemia induced by immunosuppressants tends to decrease the effect of the immunosuppressant by decreasing its availability, leading to graft loss (Pozzetto et al., 2008). Hence dyslipidemia has to be effectively managed to improve graft survival rates and reduce the risk of cardiovascular mortality. Out of 142 renal transplant recipients, 33 (23.2%) patients were found to have some form of graft disturbance or rejection. However, 7 (4.9%) patients with acute graft rejection or acute graft destruction were not coded as "rejections" while building the logistic regression models for determining predictors of graft destruction in renal transplant recipients. Thus 26 (18.3%) patients were observed to have chronic allograft disturbance, out of which 15 (10.5%) had chronic rejections, whereas 11 (7.7%) had some form of graft anomaly as diagnosed by renal biopsy marking. Chronic allograft rejection or destruction was associated with some form of dyslipidemia in 92.3% of these patients, and not associated in 7.6%. 81.7% of the patients had stable functioning grafts, of which 69% were associated with dyslipidemia, yet with adequate control through concomitant statin therapy. Logistic linear regression analysis identified hypertriglyceridemia (β = 0.366), hypercholesterolemia (β = 0.341), age (β = 0.231), HDL (β = 0.158) and gender (β = −0.150) as predictors. Thus dyslipidemia appears to be a significant predictor of chronic allograft rejection in renal transplant recipients. Hence dyslipidemia has to be effectively managed, either with dose-intense statin therapy or by switching to other immunosuppressants with less dyslipidemic potential.

CONCLUSION
Post-transplant dyslipidemia is a common adverse effect of immunosuppressant usage. The study has analyzed the differential effects of immunosuppressants on serum lipids (total cholesterol, LDL-C, HDL, VLDL and triglycerides) and has determined immunosuppressants to be a significant predictor of developing dyslipidemia in renal transplant recipients. Cyclosporine was found to possess comparatively higher dyslipidemic potential than tacrolimus; however, tacrolimus itself also possesses dyslipidemic potential. Dyslipidemia in turn was found to be a significant predictor of chronic allograft nephropathy and rejection. The incidence of graft rejections and disturbances observed in dyslipidemic patients was significantly higher than in non-dyslipidemic patients. Thus, it is clearly evident that chronic immunosuppressive therapy leads to dyslipidemia, which causes graft destruction and progresses to graft loss. Dyslipidemia therefore has to be managed effectively, through dose-intense statin therapy or by switching to other immunosuppressants with lesser dyslipidemic potential, to prevent chronic allograft nephropathy and rejection.

Table 2: Distribution of Patients based on Immunosuppressive Regimen.
Table 3: Distribution of Patients Based on the Immunosuppressant.
Table 4: Distribution of Patients Based on Estimated Glomerular Filtration Rate.
Table 5: Patterns of Dyslipidemia observed in the Study Population.
Table 6: Stepwise Multiple Linear Regression to Determine Predictors of Dyslipidemia.
Table 7: Correlation of Serum Lipid Profile with Abnormal Graft Patterns.
Table 8: Incidence of Graft Anomalies in Post Renal Transplant Patients.
Table 9: Stepwise Multiple Linear Regression to Determine Predictors of Graft Rejection.
Table 10: Coefficients of Individual Predictors of Allograft Rejection.
Knowledge, attitude and practice of Iranian hypertensive patients regarding hypertension

Introduction: This study aimed at evaluating knowledge and awareness of hypertension and its risk factors among hypertensive patients. Methods: In this study, 110 hypertensive patients were enrolled and filled out two self-administered questionnaires. The first questionnaire covered demographic characteristics and the second covered knowledge (n = 10), attitude (n = 9) and practice (n = 8). The internal consistency and the stability of the questionnaires were approved. The Mann-Whitney U test, the Kruskal-Wallis test and the Spearman correlation coefficient were used for statistical analysis. Results: Seventy-three percent of participants knew the normal range of blood pressure. Most participants correctly knew that stress (87.3%), obesity (70.9%) and aging (48.2%) are risk factors for hypertension. About 60% of participants knew the complications of uncontrolled hypertension. About 82.7% of participants believed that after the body adapts to hypertension, there is no need to use antihypertensive drugs. About 13.6% of participants measured their blood pressure daily and 11.8% measured it once a month. The educational level of participants was significantly associated with the knowledge score (P = 0.01). There was a significant correlation between knowledge and attitude (P < 0.001) and also between attitude and practice (P < 0.001) scores. Conclusion: These findings have important implications for developing proper and continuous self-management hypertension education programs in Iran, which should mostly emphasize practical information about control and prevention programs.

Introduction
Hypertension is a major risk factor for chronic diseases and deaths worldwide, with an age-standardized prevalence of 24.1% in men and 20.1% in women. 1 This number is growing very fast, and it is estimated that it will reach more than 1.56 billion by the year 2025. 2 Hypertension is also responsible for 57 million disability-adjusted life years (DALYs), and it is estimated that about 7.5 million deaths (12.8% of all-cause deaths) worldwide are due to high blood pressure. 3 In spite of the high prevalence, the proportion of hypertensive patients whose blood pressure is under control is still very low. According to Kilic et al, this value is only 30%-34% in developing and 33%-38% in developed countries. 4 Most hypertensive people are not aware of their condition or have a low level of health literacy. An inadequate level of knowledge about these health issues has been reported for hypertensive patients in different countries all over the world, such as the United States, 5 Pakistan, 6 Turkey, 4 and Namibia. 7 In a study carried out in sub-Saharan African communities, Hendriks et al showed that only 3% of hypertensive people in Namibia were aware of their condition; this value was found to be 6% for patients in Kenya. 7 The knowledge, perceptions and attitude of people towards hypertension have a significant role in changing lifestyle, including the modifiable risk factors of hypertension. It has been shown that self-management behaviors such as taking prescribed medications, quitting smoking, eating a healthy diet and increasing the physical activity level are crucial for hypertensive patients. Therefore, it would be possible to reduce the burden of hypertension by changing the modifiable risk factors through increasing the health literacy of hypertensive patients.
8 Finland is a good example here. In an awareness campaign concerning cardiovascular diseases launched in 1974, the mortality rate was reduced by more than half in just 25 years. 9 As in most countries, high blood pressure is one of the major causes of death in Iran. 10 According to Malekzadeh et al, 43% of the total cohort participants in northeastern Iran were hypertensive. 11 Despite this high prevalence, studies examining awareness of hypertension in Iran are scarce. Information on these issues will help to develop suitable intervention programs aimed at increasing disease self-management behaviors among hypertensive patients. To fill this crucial gap, in the present study, knowledge and awareness of hypertension and its risk factors among hypertensive patients have been investigated.

Materials and Methods
In the present cross-sectional study, 110 hypertensive patients who were referred to Shahid Madani hospital participated. Male and female subjects aged >30 years who had systolic blood pressure >140 and diastolic blood pressure >90 on two consecutive readings participated in the study. Participants completed two self-administered questionnaires. The first questionnaire covered demographic characteristics and the second covered knowledge (n = 10), attitude (n = 9) and practice (n = 8).

Item extraction
In order to develop the questionnaire, an extensive literature review was undertaken. Items which seemed appropriate for our questionnaire were extracted and a primary questionnaire was designed. For content validity, the primary questionnaire was reviewed by a panel of 10 experts. A 4-point Likert scale assessing the relevance, clarity, simplicity and necessity of the primary questionnaire was filled in by the experts; the content validity index (CVI) and content validity ratio (CVR) were calculated, and questions with CVI <0.79 and CVR <0.69 were excluded. 12,13 The internal consistency of the instrument was approved using Cronbach's α (0.70) in a pilot study of 10 patients. In the same patients, after two weeks, the test-retest was performed for assessing the stability of the questionnaire. The Spearman-Brown index was 0.75 and the intra-class correlation coefficient was 0.72, 95% CI (0.46, 0.83).

Statistical analysis
SPSS version 18 statistical software was used for all statistical analyses. Normality of distribution was assessed using the Kolmogorov-Smirnov test. Each multiple-choice question had one correct answer that was assigned a score of 1 point, whereas 0 points were assigned to all wrong answers. The Mann-Whitney U test and the Kruskal-Wallis test were conducted to compare the median of correct responses for every section by gender, age, educational level and duration of the disease. For the correlation analyses, the Spearman correlation coefficient was used to investigate the association between knowledge, attitude and practice (KAP) scores. A significance level of 0.05 was used.

Results
Table 1 outlines the demographic characteristics of the participants. In total, 110 participants, including 52 males and 58 females with a mean age of 57.97±10.67 years, took part in the study. About 95.5% of participants were married and lived in rural areas. Among the participants, 11.8% were illiterate and 2.7% were unemployed. The median knowledge (total score 12), attitude (total score 9) and practice (total score 10) scores of participants are summarized in Table 2.
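The instrument checks described above (Lawshe's content validity ratio for the expert panel and Cronbach's α for internal consistency) can be sketched as follows; the expert counts and pilot responses are made-up stand-ins, so the demo α will not reproduce the 0.70 reported for the real pilot.

```python
import numpy as np

# Lawshe's content validity ratio and Cronbach's alpha, sketched on made-up
# expert ratings and pilot answers.
def cvr(n_essential, n_experts):
    """CVR = (n_e - N/2) / (N/2); items below the cutoff (0.69 here) are dropped."""
    return (n_essential - n_experts / 2.0) / (n_experts / 2.0)

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of scored answers (0/1 here)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_var / total_var)

print(cvr(n_essential=9, n_experts=10))  # 0.8 -> retained
pilot = np.random.default_rng(3).integers(0, 2, size=(10, 10))
print(cronbach_alpha(pilot))             # random data gives a low alpha
```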
The median (IQR) scores of knowledge, attitude and practice of participants were 7 (2), 7 (3) and 4 (2), respectively. About 82.7% of participants believed that after the body adapts to hypertension, there is no need to use antihypertensive drugs. About 13.6% of participants measured their blood pressure daily and 11.8% measured it once a month. About 76.36% of participants reduced their salt intake and 59.09% reduced their fat intake. About 25.5% of participants used alternative medicine in addition to antihypertensive drugs. Table 4 shows the differences in KAP scores across different demographic variables. The statistical analysis showed that only the educational level of participants was significantly associated with the knowledge score (P = 0.01). Table 5 depicts the correlation coefficients between the knowledge, attitude and practice scores, and the results showed that there was a significant correlation between knowledge and attitude (P < 0.001) and also between attitude and practice (P < 0.001) scores.

Discussion
It is of great importance to discover the KAP level of hypertensive patients in order to develop appropriate educational and self-management programs. In this regard, in the current cross-sectional study, the KAP of hypertensive patients in Tabriz, Iran was studied. According to the results, the overall scores of participants were medium, except for attitude, which was higher than the other variables. In other studies from Iran, it has been reported that in more than 50% of participants the knowledge level regarding hypertension was average. 14,15 It seems that, compared with other studies conducted in Nepal, 16 the United States 17 and Mongolia, 18 the present study reported lower scores on the issue. The difference between studies could result from differences in the educational level of participants, the use of different tools for assessing the KAP level, and the availability of educational programs in different countries. The majority of the hypertensive patients in the present study had a high knowledge regarding obesity and stress as risk factors of hypertension. The link between obesity, stress and hypertension has been the subject of various review articles, 19 and it has been shown that this may be due to activation of the sympathetic nervous system, the renin-angiotensin system, and also sodium retention. 19 Along with their knowledge regarding obesity, their attitudes toward changing their diet and increasing their physical activity were also high. Although their practice regarding lowering the consumption of salt and also of high-fat foods was good, the practice of increasing the physical activity level was not impressive, and they did not put their knowledge into practice. The difference between knowledge and practice may be due to the fact that they know that, to control high blood pressure, they should reduce their weight by diet and increased physical activity, but they may not have enough knowledge regarding the appropriate ways to do this. Previous studies in Pakistan 6 and Nepal 16 showed that there was a significant difference in the KAP level of males and females. However, in another study in Nepal, there was no association between KAP level and sex. In the present study we also did not observe significant differences in KAP scores between males and females, but the educational level was significantly associated with knowledge scores. 20 A plausible explanation of these results is that there were no significant differences in the educational level of males and females in the present study.
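A minimal sketch of the group comparisons and correlations reported above, using SciPy with synthetic KAP scores in place of the study data (the group codings are assumptions):

```python
import numpy as np
from scipy import stats

# Synthetic KAP scores standing in for the study data.
rng = np.random.default_rng(4)
knowledge = rng.integers(0, 13, 110)  # total score 12
attitude = rng.integers(0, 10, 110)   # total score 9
practice = rng.integers(0, 11, 110)   # total score 10
sex = rng.integers(0, 2, 110)         # 0 = male, 1 = female
education = rng.integers(0, 4, 110)   # e.g. illiterate ... university

# two groups -> Mann-Whitney U; more than two -> Kruskal-Wallis
print(stats.mannwhitneyu(knowledge[sex == 0], knowledge[sex == 1]))
print(stats.kruskal(*[knowledge[education == g] for g in range(4)]))

# Spearman rank correlations between the KAP scores
print(stats.spearmanr(knowledge, attitude))
print(stats.spearmanr(attitude, practice))
```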
Previously, it has been shown that literate people had better knowledge than illiterate ones. 16 Consistent with the results of a study conducted in Nepal, there was no significant association between disease duration and KAP scores. 20 So, it seems that educational intervention at any stage of the disease could have a significant influence on patients' practice. About 25% of our participants used complementary and alternative medicine (CAM). The use of CAM in the present study was similar to that in studies done in Nigeria (29%), 21 Ghana (19.5%) 22 and South Africa (21%), 23 but lower than in a study done in India (63.9%). 24 The differences in the prevalence of CAM use across different countries may be due to variations in sociocultural background and the accessibility of modern medical practice. 25 As most of the information about CAM was suggested by families, relatives, and friends (49%), to prevent the side effects of CAM use it is important to inform the health care practitioner about the type and amount of CAM used and to get proper information from medical centers. 26 One of the important limitations of the present study is the low sample size. Also, the self-reported practice, which is prone to bias by participants, is another important limitation. So, the results should be interpreted considering these limitations.

Conclusion
The prevalence of hypertension in Iran is increasing, and to decrease the burden of the disease, focusing on the KAP of patients by implementing educational interventions is essential. Accordingly, the results of the present study showed that the knowledge and practice of hypertensive patients are medium. These findings have important implications for developing proper and continuous self-management hypertension education programs in Iran, which should mostly emphasize practical information about the proper interval for checking blood pressure, the signs of the disease, and proper and practical methods of increasing physical activity. Furthermore, despite the good knowledge of some respondents, their practices were poor. Therefore, the educational programs should be repeated at proper intervals to ensure that learnt information is turned into practice. Moreover, appropriate self-management programs could also be designed to improve the living conditions of these patients and reduce costs for patients and the health system. According to the results of the present study, individuals with lower educational levels may benefit more from educational programs. Additionally, as there is a significant association between knowledge and practice in hypertensive patients, the educational programs for all patients should be practice oriented.

Ethical approval
The ethical approval for this study was obtained from the Ethics Committee of Tabriz University of Medical Sciences.
New Limits on the Dark Matter Lifetime from Dwarf Spheroidal Galaxies using Fermi-LAT

Dwarf spheroidal galaxies (dSphs) are promising targets for the indirect detection of dark matter through gamma-ray emission due to their proximity, lack of astrophysical backgrounds and high dark matter density. They are often used to place restrictive bounds on the dark matter annihilation cross section. In this paper, we analyze six years of Fermi-LAT gamma-ray data from 19 dSphs that are satellites of the Milky Way, and derive from a stacked analysis of 15 dSphs robust 95% confidence level lower limits on the dark matter lifetime for several decay channels and dark matter masses between ∼1 GeV and 10 TeV. Our findings are based on a bin-by-bin maximum likelihood analysis treating the J-factor as a nuisance parameter, using the PASS 8 event class. Our constraints from this ensemble are among the most stringent and solid in the literature, and competitive with existing ones coming from the extragalactic gamma-ray background, galaxy clusters, AMS-02 cosmic ray data, and Super-K and IceCube neutrino data, while being rather insensitive to systematic uncertainties. In particular, among gamma-ray searches, we improve existing limits for dark matter decaying into bb (µ+µ−) for DM masses below ∼30 (200) GeV, demonstrating that dSphs are compelling targets for constraining dark matter decay lifetimes.

I. INTRODUCTION
The existence of dark matter (DM) is well established from observations of galaxies and galactic clusters, and of the cosmic microwave background, although its identity remains elusive. In the context of particle physics, DM is often interpreted as Weakly Interacting Massive Particles (WIMPs) with cross sections and masses not far from the electroweak scale. The number density of DM particles is fixed at thermal decoupling in the canonical freeze-out scenario at high redshift. The leftover DM species permeate our Universe, inducing potential signatures in deep underground experiments, colliders and astronomical telescopes/satellites. DM particles do not have to be absolutely stable but simply long-lived, as happens in many well motivated theories (for an excellent review, we refer to [1]). In general the longevity of particles is attributed to the conservation of quantum numbers. For instance, in the case of standard model particles the non-observation of proton decay p → e⁺π⁰, electron decay e → νγ, and neutrino decay ν → γγ is attributed to the conservation of baryon number, electric charge and angular momentum, respectively. In the case of DM particles, there is no such correspondence based on fundamental symmetries. Therefore DM particles may well be stable on cosmological distance scales, with lifetimes much longer than the age of the universe (13.8 Gyr ≈ 4.35 × 10¹⁷ sec) (see [2,3] for a recent discussion). Such a general requirement should be quantified with no prejudice to current observations, as has been done in the context of the extragalactic background radiation (EGRB) [4][5][6][7][8], galaxy clusters [9][10][11], anti-proton [12][13][14] and x-ray data [15], the cosmic microwave background [2,16] and optimized targets using Fermi-LAT data [17]. These datasets have also been used for DM annihilation. In this paper, we set constraining limits on the DM lifetime using Fermi-LAT gamma-ray data from the observation of dSphs.
dSphs that are proximate to the Milky Way are special targets for the indirect detection of DM signals for several reasons: (i) their gravitational dynamics indicate that they are DM-dominated objects; (ii) they are generally located at moderate or high Galactic latitudes and are therefore subject to low diffuse gamma-ray foregrounds; (iii) they lack unambiguously discernible astrophysical gamma-ray emission; (iv) they possess relatively small uncertainties on the DM profile. Thus, it is fruitful to derive bounds on DM properties using dSph observations. A first offering of dSph constraints on DM lifetimes was made in [11] using around one year of Fermi-LAT observations 1. Yet greater emphasis in the literature has been on constraining DM annihilation cross sections. In [18], the authors focused on how to distinguish a signal coming from DM annihilation and/or decay using dSph observations from gamma-ray experiments, whereas in [19] a multi-wavelength approach was performed for annihilating DM, and in [21] the impact of hosting intermediate-mass black holes was investigated. Various aspects of DM annihilation in these contexts were explored in [20]. The Fermi-LAT collaboration has invested extensive effort in increasing the sensitivity to potential DM signals [22,24], including updates to their point source catalog, and upgrades to the event reconstruction and foreground/background subtraction afforded by the new PASS 8 analysis tool. These have resulted in stringent bounds on the annihilation cross section [25]. For dark matter decay, here we extend and complement previous works by including six years of LAT data and also employing the new PASS 8 event class. Moreover, for the first time in the literature for decay studies, we stack a larger pool of 15 dSphs using a bin-by-bin maximum likelihood method, treating the astrophysical J-factors of the dSphs as nuisance parameters. This protocol renders our conclusions robust, and less sensitive to systematic and statistical uncertainties. The baseline conclusion is that herein we raise the decay lifetime lower bounds of [11] by factors of around 3-10. For our focus on dark matter decays, the gamma-ray flux (see Eq. (6)) from any DM congregation is linearly proportional to the J-factor J_d (see Eq. (1)) for the volume-integrated DM content of a galaxy. The J-factors for dSphs are fairly accurately estimated: recent measurements of the stellar velocity dispersion and half-light radius have led to better determinations of these J-factors [26][27][28], and such improvements are exploited here to define more accurate bounds on DM properties. We combine these updated J_d values with extensive datasets from six years of observations of dSphs using Fermi-LAT. Several dSphs observed by Fermi-LAT do not have their J-factor estimated and are removed from our analysis. For a similar reason we are not including the new dSphs observed by the Dark Energy Survey and the Panoramic Survey Telescope and Rapid Response System [29][30][31][32].

II. DATA ANALYSIS
We gather six years of Fermi-LAT gamma-ray data belonging to the P8R2_SOURCE_V6 instrument response function, dating since August 4, 2008, for the 19 dSphs shown in the main portion of Table I. The energy bins range from 500 MeV to 500 GeV. We use the Pass-8 event class, which features an improved point-spread function (PSF) and increased telescope effective area compared to previous Fermi-LAT analysis protocols. We also employ data from the new point source Fermi-LAT catalog, 3FGL.
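As a small concrete detail, the 24 logarithmically spaced energy bins between 500 MeV and 500 GeV used in the analysis below can be generated as follows (a trivial sketch; the actual binning is configured inside the Fermi Science Tools):

```python
import numpy as np

# 24 logarithmically spaced energy bins from 500 MeV to 500 GeV
edges = np.logspace(np.log10(0.5), np.log10(500.0), 25)  # bin edges in GeV
print(edges[:3], "...", edges[-1])
```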
The lower energy bound is chosen to avoid systematics due to the leakage of photons coming from the Earth limb caused by the poor/broad PSF at energies below 500 MeV 2. As aforementioned, we show the 19 dSphs of interest plus Reticulum II with their respective positions, distances and J-factors in Table I. Within 2σ, the DM profiles of all dSphs are well described by a NFW profile (see Table IV of [33]). We singled out these 19 dSphs because several dSphs, namely Bootes II, Bootes III, Canis Major, Pisces II, and Sagittarius, have J-factors that are either poorly constrained or not determined at all. They are thus excluded from our study. Moreover, in our stacked analysis, Canes Venatici I and Leo I were left out because their regions of interest (ROIs) in the sky overlap with Canes Venatici II and Segue 1, which have larger J-factors. Furthermore, the ROI of Ursa Major I overlaps with that of Willman 1, as pointed out in [22]. Nevertheless, Willman 1 is omitted here since [33] did not report its J-factor. Those choices concur with those of the Fermi-LAT collaboration in [24]. Hence, to avoid statistical interference and to follow the procedure in [24], we use 15 dSphs in the stacked analysis, namely Bootes I, Canes Venatici II, Carina, Coma Berenices, Draco, Fornax, Hercules, Leo II, Leo IV, Sculptor, Segue 1, Sextans, Ursa Major I, Ursa Major II, and Ursa Minor. As usual, we reject events with zenith angle larger than 100° to minimize contamination from the bright limb of the Earth, as well as events during periods when the rocking angle of the LAT instrument was larger than 52°, using the gtmktime tool of the Fermi-LAT software. After defining the ROI as in [24], with 0.1° pixels and 24 logarithmically spaced energy bins, and using the gtltcube and gtexpcube2 tools, we model the diffuse and isotropic background emission using the galactic and extragalactic models provided in [34]. We perform a bin-by-bin likelihood analysis of the gamma-ray emission within 5° of each dSph galaxy's center, which sets the normalizations of the diffuse sources and of the point-like background sources within 5° of each dSph center, as in [24]. For each dSph, the spatial DM distribution is modeled by a NFW dark matter profile with a J-factor ($J_d$) defined as

$J_d = \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} ds\, \rho_{\rm DM}\big(r(s,\Omega)\big)$,   (1)

where the DM density ρ_DM is integrated along line-of-sight elements ds for different directions within the ROI solid angle ∆Ω. Values of J_d for our dSph sample are listed in Table I, taken from [33]; these are proportional to the expected intensity of γ-ray emission from DM decay in a given ROI, assuming a spherically symmetric NFW DM density distribution,

$\rho_{\rm DM}(r) = \dfrac{\rho_s}{(r/r_s)\,(1 + r/r_s)^2}$,   (2)

where ρ_s and r_s are the characteristic density and scale radius, which are determined dynamically from the maximum circular velocity v_c and the enclosed mass contained up to the radius of maximum v_c [37]. We emphasize that within 0.5° the integrated J-factor is rather insensitive to the choice of the DM density profile for slopes not steeper than 1.2 [38]. The integrated J-factors of our selected dSphs were obtained over a cone of radius θ = 0.5°, i.e., accounting for 50% of the total DM emission, which is a conservative approach. If we had instead used the larger value θ_max from [33], our limits would be raised by a factor of two or so.

FIG. 2. Refer to text for the list of dSphs used in the stacked analysis. The decay into qq takes into account all light quarks. For heavy DM, decays into bb, hh and qq provide the strongest limits, whereas for relatively light dark matter bb, qq and ττ are dominant. As we shall see further in Fig. 3, these are the most stringent limits in the literature among gamma-ray searches for DM masses below ∼30 GeV (200 GeV) for decays into bb (µ+µ−).
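To make Eqs. (1)-(2) concrete, here is a minimal numerical sketch of the decay J-factor of an NFW halo integrated over a 0.5° cone; the ρ_s, r_s and distance values are illustrative placeholders, not the fitted parameters of [33].

```python
import numpy as np

# Decay J-factor of Eq. (1) for the NFW profile of Eq. (2) over a 0.5 deg
# cone, on a simple trapezoidal grid. rho_s, r_s and the distance are
# illustrative placeholders, not the fitted values of [33].
KPC = 3.086e21                       # cm per kpc
rho_s, r_s = 1.5, 0.6 * KPC          # GeV cm^-3 and cm (assumed)
d = 76.0 * KPC                       # distance to the dwarf, cm (Draco-like)

def rho_nfw(r):
    x = np.maximum(r / r_s, 1e-6)    # guard the r -> 0 cusp
    return rho_s / (x * (1.0 + x) ** 2)

theta = np.linspace(0.0, np.radians(0.5), 400)[1:]   # cone angles, rad
s = np.linspace(0.0, d + 50.0 * r_s, 4000)[1:]       # line-of-sight steps, cm
TH, S = np.meshgrid(theta, s, indexing="ij")
R = np.sqrt(d**2 + S**2 - 2.0 * d * S * np.cos(TH))  # galactocentric radius

# J_d = int dOmega int ds rho(r), with dOmega = 2 pi sin(theta) dtheta
integrand = 2.0 * np.pi * np.sin(TH) * rho_nfw(R)
J_d = np.trapz(np.trapz(integrand, s, axis=1), theta)
print(f"log10 J_d ~ {np.log10(J_d):.2f} (GeV cm^-2)")
```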
We then compute the likelihood for an individual target i,

$\mathcal{L}_i(\mu, \theta_i \,|\, \mathcal{D}_i) = \mathcal{L}_i^{\rm LAT}(\mu, \theta_i \,|\, \mathcal{D}_i)\, \mathcal{L}_J(J_i \,|\, J_{{\rm obs},i}, \sigma_i)$,   (3)

where µ are the parameters of the DM model (i.e., the product of the dark matter lifetime and mass, as we shall see further), θ_i is the set of nuisance parameters that includes both the nuisance parameters from the LAT analysis (α_i) and the dSph J-factor J_i, and D_i is the gamma-ray data. Notice that we incorporated a J-factor likelihood term in an attempt to account for the statistical uncertainties on the J-factor of each dSph, defined as

$\mathcal{L}_J(J_i \,|\, J_{{\rm obs},i}, \sigma_i) = \dfrac{1}{\ln(10)\, J_{{\rm obs},i}\, \sqrt{2\pi}\, \sigma_i}\; e^{-\left(\log_{10} J_i - \log_{10} J_{{\rm obs},i}\right)^2 / 2\sigma_i^2}$,   (4)

where J_i is the true value of the J-factor of dSph i, and J_obs,i is the measured J-factor with error σ_i. We later join the likelihood functions,

$\mathcal{L}(\mu, \theta \,|\, \mathcal{D}) = \prod_i \mathcal{L}_i(\mu, \theta_i \,|\, \mathcal{D}_i)$.   (5)

Notice that this procedure, which matches the one adopted in [24], is independent of the DM energy spectrum in each energy bin, since it corresponds to an upper limit on the energy flux. We now evaluate the test statistic (TS), defined as $\mathrm{TS} = -2\ln\!\left[\mathcal{L}(\mu_0, \hat{\theta} \,|\, \mathcal{D}) / \mathcal{L}(\hat{\mu}, \hat{\theta} \,|\, \mathcal{D})\right]$, and require a change in the profile log-likelihood of 2.71/2 from its maximum, corresponding to a 95% C.L. upper limit on the energy flux, as described in [39]. In the next section we discuss the expected gamma-ray signal from DM decay and our results based on the aforementioned procedure.

TABLE I. Nearby dwarf spheroidal galaxies: name, Galactic longitude (l), latitude (b), distance (in kpc), and DM decay J-factor log₁₀ J_d(θ₀.₅) (in GeV cm⁻²) for 20 dwarf galaxies that are satellites of the Milky Way. The J_d factors are integrated over a cone of radius θ₀.₅, where θ₀.₅ is the "half-light radius", i.e. the angle containing 50% of the total dark matter emission. For Reticulum II, we adopted the J-factor value reported in [23]. For the other dwarf galaxies, we adopted the values reported in [33]; see text for details.

III. LOWER BOUNDS TO DARK MATTER LIFETIMES
The differential flux of photons from a given angular direction ∆Ω within an ROI, produced by the decay of a DM particle into a single final state, is expressed as

$\dfrac{d\Phi_\gamma}{dE_\gamma} = \dfrac{1}{4\pi\, M_{\rm DM}\, \tau_{\rm DM}}\, \dfrac{dN_\gamma}{dE_\gamma}\, J_d$,   (6)

where M_DM, τ_DM and dN_γ/dE_γ are the DM mass, lifetime and differential γ-ray yield per decay, respectively. In a given particle physics model, in order to find the total gamma-ray flux coming from the decay of a DM particle, dN_γ/dE_γ has to be summed over all possible final states. In this work, however, we focus on one final-state channel at a time, and compute the energy spectrum using PPPC4DM [35] for the qq, bb, τ+τ−, µ+µ−, W+W− and hh channels, and the Pythia code [36] for the Zν_τ, hν_τ and Wν_τ channels. With the energy spectrum determined, we can compute the profile likelihood function for the lifetime τ_DM vs M_DM by maximizing the global likelihood function in Eq. (5) with respect to the nuisance parameters, and derive our bounds. In the left panel of Fig. 1, we exhibit the constraints on the DM lifetime for decays into bb for the 19 dSphs in our study. Draco, Ursa Minor and Ursa Major II give rise to the strongest bounds on the DM lifetime due to their proximity and their large J-factors. Draco excludes a DM lifetime smaller than ∼2 × 10²⁶ sec (i.e., > 10⁸ Hubble times) at 95% C.L. for DM masses below 10 TeV. The characteristic mass dependence of the limit curves for each individual dSph can be explained by comparing the shape of the upper flux limit curve and the energy spectrum of the final decay state, in this case bb.
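As a toy illustration of how the profile likelihood and the 2.71/2 criterion translate into a lifetime lower limit, the sketch below scans τ_DM against mock Gaussian bin likelihoods; the mass, J-factor and bin sensitivities are assumed values, and the real analysis uses the LAT bin-by-bin likelihood profiles rather than Gaussians.

```python
import numpy as np

# Toy profile-likelihood scan: a 95% C.L. lifetime lower limit from the
# -2 Delta lnL = 2.71 criterion. Mass, J-factor and per-bin sensitivities
# are assumed; the real analysis uses LAT bin-by-bin likelihood profiles.
M_DM = 100.0    # GeV, assumed DM mass
J_D = 10**18.8  # GeV cm^-2, assumed decay J-factor (Draco-like)

# mock bins: (integral of E dN/dE over the bin per decay [GeV],
#             1-sigma energy-flux sensitivity [GeV cm^-2 s^-1])
bins = [(5.0, 2e-12), (3.0, 1e-12), (1.0, 5e-13)]

def energy_flux(tau, spec):
    """Bin energy flux from Eq. (6): J_d * spec / (4 pi M tau)."""
    return J_D * spec / (4.0 * np.pi * M_DM * tau)

def minus2_delta_lnl(tau):
    """-2 Delta lnL for Gaussian bin likelihoods with null best-fit fluxes."""
    return sum((energy_flux(tau, s) / sig) ** 2 for s, sig in bins)

taus = np.logspace(24, 30, 2000)         # lifetimes in seconds
dll = np.array([minus2_delta_lnl(t) for t in taus])
tau_limit = taus[np.argmax(dll < 2.71)]  # first tau below the threshold
print(f"95% C.L. lower limit: tau_DM > {tau_limit:.2e} s")
```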
III. LOWER BOUNDS TO DARK MATTER LIFETIMES

The differential flux of photons from a given angular direction ∆Ω within an ROI, produced by the decay of a DM particle into a single final state, is expressed as

dΦ/dE_γ = [1 / (4π M_DM τ_DM)] (dN_γ/dE_γ) J_d,

where M_DM, τ_DM, and dN_γ/dE_γ are the DM mass, lifetime, and differential γ-ray yield per decay, respectively. In a given particle physics model, the total gamma-ray flux from the decay of a DM particle is found by summing dN_γ/dE_γ over all possible final states. In this work, however, we focus on one final-state channel at a time and compute the energy spectrum using PPPC4DM [35] for the qq̄, bb̄, τ⁺τ⁻, µ⁺µ⁻, W⁺W⁻, and hh channels, and the Pythia code [36] for the Zν_τ, hν_τ, and Wτ channels. With the energy spectrum determined, we can compute the profile likelihood for the lifetime τ_DM vs. M_DM by maximizing the global likelihood function in Eq. (5) with respect to the nuisance parameters, and derive our bounds.

In the left panel of Fig. 1, we exhibit the constraints on the DM lifetime for decays into bb̄ for the 19 dSphs in our study. Draco, Ursa Minor, and Ursa Major II give rise to the strongest bounds on the DM lifetime due to their proximity and their large J-factors. Draco excludes DM lifetimes smaller than ∼2 × 10²⁶ s (i.e., more than 10⁸ Hubble times) at 95% C.L. for DM masses below 10 TeV. The characteristic mass dependence of the limit curves for each individual dSph can be explained by comparing the shape of the upper flux limit curve with the energy spectrum of the final decay state, in this case bb̄. The strongest bounds from the upper flux occur at energies of a few GeV, which roughly coincides with the peak of the bb̄ energy spectrum for dark matter masses of 10 GeV–10 TeV. In the right panel, we exhibit the limits on decays into τ⁺τ⁻ pairs for the same set of dSphs. For DM masses below 100 GeV, we find a lower limit of τ_DM ∼ 3 × 10²⁶ s at 95% C.L. For such masses, decays into τ⁺τ⁻ produce more photons than those into bb̄, and this leads to the slight skewing of the limit curves.

The use of an individual dSph to place constraints on DM properties might bias the bounds, since the dark matter distribution of each of the dSphs does not have to be precisely the same. In addition, the upper limit on the flux of each individual dSph differs, which results in different limits on the dark matter lifetime. In other words, for a given final state, the shape of the limit curve differs from galaxy to galaxy, as can be seen in Fig. 1. Therefore, stacking a large sample of dSphs makes our combined limit less sensitive to any peculiar dwarf galaxy, i.e., more broadly representative. For these reasons we performed a maximum-likelihood analysis of a stack of 15 dSphs, treating the J-factors as nuisance parameters as described in the previous section, and obtained the stringent constraints on the DM lifetime shown in Fig. 2 for the decay modes hh, hν, WW, Wτ, Zν, bb̄, µ⁺µ⁻, τ⁺τ⁻, and qq̄, where qq̄ includes decays into all light quarks. These decay channels encompass both fermionic and bosonic DM, making our bounds applicable to a plethora of DM models. As expected, after properly stacking the bin-by-bin likelihood functions of the individual dSphs, our combined limit becomes stronger and less sensitive to systematic and statistical uncertainties. We conclusively exclude decay lifetimes up to 7 × 10²⁶ s into bb̄ and 4 × 10²⁶ s into τ⁺τ⁻. Most of the final states have a kinematic cutoff that precludes limits for certain masses. For sufficiently small DM masses, most of the photons appear outside the energy window of interest (i.e., below 500 MeV), which explains the sharp drop for channels such as µ⁺µ⁻, τ⁺τ⁻, and qq̄. These findings demonstrate that stacked studies of dSphs provide robust and restrictive lower limits on the DM lifetime.

To provide context for our study, it is insightful to compare our dSph bounds with constraints from various other gamma-ray searches for decaying DM. To facilitate this, in the left panel of Fig. 3 we gather limits from different gamma-ray search strategies: the limits from the extragalactic gamma-ray background (EGRB) derived in [4] (their Fig. 3, foreground model A; dashed line), from galaxy clusters [9] (their Fig. 4 for bb̄ and Fig. 5 for µ⁺µ⁻; dotted black line), and from optimized-ROI searches [17] (solid gray line), along with our limits from the stacked analysis (blue curve).
For the bb̄ channel, our bounds improve upon previous results for dark matter masses below 30 GeV or so, whereas for decays into µ⁺µ⁻, our constraints are the most restrictive for masses below ∼200 GeV. One should keep in mind that [9] used older data and an older version of the Fermi-LAT software, so an improvement of their limit is expected when the data and analysis are updated, especially for the bb̄ final state, though computing it is beyond the scope of this manuscript; here, we simply quote their results. We stress that antiproton (positron-fraction) data may provide stronger limits [2,42,43] on DM decaying into bb̄ (µ⁺µ⁻), but since these are subject to rather large uncertainties we leave them out and focus our comparison on gamma-ray searches. Moreover, we neglect existing limits on µ⁺µ⁻ from PLANCK data [2,16] and from Super-K and IceCube [44], since they are much weaker.

We point out that a γ-ray excess has recently been claimed for the newly discovered dwarf galaxy Reticulum II [45]. The Fermi-LAT collaboration has independently performed a similar analysis, which indicates that the excess above ∼100 MeV is merely a statistical fluctuation of the background, since no surplus of photons is observed in the remaining dwarf galaxies [46,47]. The origin of this γ-ray emission is unclear, especially because the two groups used different datasets and their conclusions concerning the chance of a background fluctuation mimicking the potential dark matter signal differ. For these reasons we have omitted Reticulum II from the stacked analysis but, as a contextual note, we obtained the limits on the dark matter lifetime stemming from Reticulum II, exhibited in Fig. 4, using the upper flux reported in [46] together with the J-factor presented in Table I. There is clearly anomalous behavior for bb̄, which provides a very good fit to the excess seen in [45] for DM masses of a few tens of GeV. The limit arising from Reticulum II is nevertheless still weaker than our combined one. As for the other dSphs, the shapes of the limit curves result from a combination of the shape of the energy spectrum of the final state and the upper flux reported in [46].

IV. CONCLUSIONS

In this paper, we have used 500 MeV–500 GeV gamma-ray data from Fermi-LAT observations of Milky Way satellite dSphs to place stringent and robust lower bounds on the DM lifetime. We derived individual and stacked limits for several channels, for the first time in the literature. We further compared our results with those from different search strategies and conclude that, among gamma-ray searches, dSphs are the leading targets for dark matter masses below 30 GeV and 200 GeV for the bb̄ and µ⁺µ⁻ final states, respectively. Our findings show that gamma-ray observations of dSphs with Fermi-LAT data provide compelling probes of dark matter decay physics.

FIG. 3. In both panels we compare our results with existing gamma-ray bounds employing Fermi-LAT data, namely from the extragalactic gamma-ray background (EGRB) [4] (dashed), from galaxy clusters [9] (dotted), and from the optimized-ROI strategy [17] (solid gray), together with our stacked analysis. See text for details. Left: limits for dark matter decay into bb̄; right: limits for dark matter decay into µ⁺µ⁻.
We conclude that our limits are the strongest for dark matter masses below ∼30 GeV and ∼200 GeV for the bb̄ and µ⁺µ⁻ decay channels, respectively.

FIG. 4. 95% C.L. bounds on the DM lifetime for the bb̄ and τ⁺τ⁻ channels using Reticulum II data only. Notice the anomalous behavior for bb̄, which provides a limit that is still weaker than our combined one. As for the other dSphs, the shapes of the limit curves result from a combination of the energy spectrum of the final state and the upper flux limit reported in [46].
2016-04-26T22:53:12.000Z
2015-10-01T00:00:00.000
{ "year": 2015, "sha1": "64237e06d249b34a15abc40c0afc759cd841a247", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://link.aps.org/accepted/10.1103/PhysRevD.93.103009", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "64237e06d249b34a15abc40c0afc759cd841a247", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
26761057
pes2o/s2orc
v3-fos-license
A case of multiple squamous cell carcinomas arising from red tattoo pigment Ornamental tattooing involves the administration of exogenous pigments into the skin to create a permanent design. Our case focuses on a 62-year-old woman who presented with an inflamed enlarging nodule on her right proximal calf, which arose within the red pigment of an ornamental tattoo. The nodule was diagnosed as squamous cell carcinoma (SCC) and subsequently excised. Over the course of the following year, the patient was diagnosed with a total of five additional SCCs that also arose within the red pigment of the tattoo. The increased popularity of tattooing and the lack of industry safety standards for tattoo ink production, especially metal-laden red pigments, may lead to more cases of skin cancer arising within tattoos among patients of all ages.

A 62-year-old otherwise healthy woman presented with a 1-year history of an inflamed enlarging growth on her right proximal calf, arising within the red pigment of an ornamental tattoo. Her dermatologic history was significant for multiple actinic keratoses and blistering sunburns, but there was no history of skin malignancy. A physical examination revealed an erythematous, tender nodule with hyperkeratotic scale located on the right proximal calf within the inferior border of the tattoo (Fig. 1). No popliteal or inguinal lymphadenopathy was palpable. A shave biopsy was performed, and histological analysis of the tissue demonstrated pleomorphic squamous keratinocytes with prominent intercellular bridges and dyskeratotic cells arising in the epidermis, with irregular extensions into the upper dermis and an overall depth measuring less than 2 mm, most consistent with an invasive squamous cell carcinoma (SCC; Fig. 2A and B). Exogenous pigment deposition was noted throughout the dermis, consistent with the tattoo. Due to the tumor location, Mohs surgery was elected as the best option for complete resection with concurrent tattoo preservation. The SCC was extirpated with Mohs micrographic surgery, and the resultant defect was closed with a complex repair (see Fig. 3). Three months later, the patient returned with a new growth located proximal and discontiguous to the previous tumor on the right calf, also arising within the red tattoo pigment (Fig. 4). She noted that the nodule was inflamed and painful and had been present for the past month. A biopsy of the lesion was consistent with a squamous cell carcinoma, keratoacanthoma type. The patient underwent wide local excision with clear histologic margins, and the defect was repaired with a primary closure. Over the course of the following year, the patient presented with two additional separate SCCs lateral to the original tumor. The tumors were treated with wide local excision, each time obtaining clear histologic margins. A fifth biopsy-proven squamous cell carcinoma was identified with the same histological features as the original tumors (Fig. 5). The patient was then referred to a plastic surgeon for complete tattoo excision with split-thickness skin grafting.

Discussion
Ornamental tattooing involves the administration of exogenous pigment into the dermis, which results in a permanent design. As the incidence of tattooing increases, especially among teenagers, cutaneous reactions to the organic dyes and metals are more frequently encountered (Kluger and Koljonen, 2012).
Overall, the risk of adverse outcomes with tattoos is reported to be as high as 20%, which amounts to more than 50 million people (Haugh et al., 2015; Tammaro et al., 2016). The colorful pigment of tattoos is often composed of azo dyes, which are commonly used in consumer product staining (Wenzel et al., 2013). Currently, the production and administration of tattoo inks and pigments in the United States is not regulated, and there are no national guidelines or issued standards (Haugh et al., 2015). Multiple adverse reactions to tattoo pigments, especially red pigment, have been described in the literature. Tattoo-related infections can range from acute pyogenic infections to tuberculosis and are sometimes encountered decades after the initial application (Simunovic and Shinohara, 2014). Among the different pigments used, red tattoo pigments are thought to contain potentially toxic metals such as cadmium, mercury, and aluminum compounds, which may lead to a higher incidence of adverse reactions such as lichenoid and allergic contact dermatitis (Forbat and Al-Niaimi, 2016; Garcovich et al., 2012; Simunovic and Shinohara, 2014; Sowden et al., 1999). Although less frequently encountered, non-melanoma skin cancers such as SCCs arising from the red pigment of tattoos have also been reported (Kluger et al., 2008; Paprottka et al., 2014; Sherif et al., 2016; Vitiello et al., 2010). The first report of an SCC arising within the red pigment of a tattoo was by McQuarrie in 1966, and more than 23 cases of SCC and keratoacanthoma skin cancers in red tattoo pigment have since been reported (Kluger and Koljonen, 2012; McQuarrie, 1966). Despite multiple reports of single tumors within tattoos, there is little evidence in the literature of multiple tumors arising from red tattoo pigment. There are two proposed mechanisms for SCC within tattoos. The first is that the traumatic skin puncture involved in creating a tattoo, combined with the body's ongoing inflammatory response in an attempt to degrade the foreign material, promotes tumor formation (Kluger et al., 2011). Trauma-induced cases occur rapidly, usually within 1 year after the tattooing procedure. The second theory is chronic ultraviolet (UV) exposure, owing to tattoo locations on sun-exposed areas (Kluger et al., 2011). A national data set of tattoos in the United States shows that the most common locations for tattoos are the arms and ankles, and the majority of respondents had at least one exposed tattoo (Laumann and Derick, 2006). The pathogenesis of SCC arising specifically in the red pigment of tattoos is unknown. A review of the literature found that most of the reported cases of melanoma and basal cell carcinoma occurred in black and darkly colored tattoos, while SCCs and keratoacanthomas arose primarily within the red pigment of tattoos (Kluger and Koljonen, 2012). Exogenous tattoo pigments located within the dermis are often phagocytosed by macrophages, which carry the pigments into lymphatics and regional lymph nodes (Jacob, 2002). For this reason, sentinel lymph node biopsies in individuals with tattoos may mimic the appearance of metastatic melanoma (Kluger and Koljonen, 2012). Treatment of a suspicious lesion within a tattoo should begin with a full-thickness skin biopsy. If the biopsy shows evidence of malignancy, a complete surgical excision should be performed. Currently, the treatment modality with the highest cure rate and lowest recurrence rate for SCC is wide local excision (Barton et al., 2015).
The increased popularity of tattooing and the lack of industry safety standards for tattoo ink production, especially metal-laden red pigments, may lead to more cases of skin cancer arising within tattoos among patients of all ages.
2018-04-03T00:48:35.699Z
2017-08-25T00:00:00.000
{ "year": 2017, "sha1": "fb81a84760433843c43c10e0412f1dc6d23d4c97", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijwd.2017.07.006", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "fb81a84760433843c43c10e0412f1dc6d23d4c97", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
199023103
pes2o/s2orc
v3-fos-license
Traumatic lingual haematoma: Another unusual cause of upper airway obstruction in systemic lupus erythematosus Lingual hematoma (LH) is an uncommon and potentially life-threatening condition due to its tendency to cause upper airway obstruction. It usually occurs as a result of trauma (motor vehicle accidents, grand mal seizures, or traumatic tracheal intubation) and, rarely, spontaneously in patients with inherited or acquired coagulopathies, high blood pressure, hematological disorders, or vascular malformations. Herein, we report the first case, to our knowledge, of a traumatic massive lingual hematoma in a patient with systemic lupus erythematosus (SLE), secondary to tongue biting after neurological deterioration, hypertensive crisis, and multiple tonic-clonic seizures during hemodialysis for chronic renal failure.

Introduction
Systemic lupus erythematosus (SLE) is an inflammatory autoimmune disorder that affects multiple organ systems, with main involvement of the kidneys, skin, and blood. It usually presents in young adult women and is diagnosed by the presence of standard criteria. Symptoms and signs depend on which organs are involved. Hemorrhage is a dangerous and potentially life-threatening complication of SLE. The most common etiologies are hypertension, anticoagulation, catastrophic antiphospholipid antibody syndrome (APLA), and comorbidities. Pulmonary alveolar hemorrhage, spinal hemorrhage, intracranial hematoma, gastrointestinal hemorrhage, retroperitoneal and inner-ear hemorrhage, and even acute hemorrhagic myocarditis [1-5] have been previously reported in the literature in patients with SLE; however, to the best of our knowledge, there are no published case reports of massive lingual hemorrhage. In this article, we describe the etiology, diagnosis, and management of upper airway obstruction due to a traumatic lingual hematoma in a patient with SLE. This is an interesting case of a patient with some of the numerous complications associated with chronic renal failure, which in combination resulted in an unusual cause of life-threatening upper airway obstruction.

His past medical history was significant for a seven-year history of SLE with hypertension and long-standing lupus-related chronic kidney disease, on renal replacement therapy for 3 years after acute rejection of a kidney transplant. He was on treatment with amlodipine 5 mg, doxazosin 8 mg, carvedilol 25 mg, enalapril 25 mg, and omeprazole 20 mg, with poor adherence to medication. On examination, he was afebrile, responsive to verbal stimuli, and oriented in time, place, and person, with no evidence of apparent focal deficits. His blood pressure was 200/120 mmHg, his pulse 132/min, and the pulse oximeter was reading 98%. Cardiovascular and respiratory system examination revealed no abnormality. The electrocardiogram was normal. Laboratory tests on presentation were remarkable for elevated urea and creatinine of 116 mg/dl and 12.61 mg/dl, respectively, and thrombocytopenia with a platelet count of 131 × 10³/μL. The coagulation profile was unremarkable, with a PT of 100% and an APTT of 28 s. One hour later, the patient was conscious and breathing comfortably when he suffered a generalized convulsion. Immediate management included intravenous benzodiazepines (diazepam 5 mg) followed by 1000 mg of levetiracetam without response, so he received further doses of diazepam. He was admitted to the Intensive Care Unit (ICU) for more aggressive management under assisted ventilation.
The patient presented at the ICU with severe dyspnoea and massive swelling of the tongue. He was finally anaesthetized with propofol and fentanyl infusions and intubated to secure the airway. After airway management was achieved by orotracheal intubation, physical examination revealed a massive swelling of the tongue, which was significantly displaced anteriorly and superiorly, protruding 3 cm out of his mouth (Images 1 and 2), firm on palpation, and with a generalized dark-blue coloration. The dorsal surface of the tongue showed a 1.5 cm laceration on the posterior right half, which was sutured. No hemorrhage was observed other than that of the lingual hematoma. A computed tomography scan of the head and neck showed severe diffuse enlargement of the tongue with obliteration of the airway, without any particular vessel identifiable as the cause of the hematoma. Embolization was therefore not attempted, because the source of bleeding was not identified; it was thought that the vessels had undergone tamponade from the physical pressure of the hematoma and thus that there was likely no active hemorrhage. A clinical diagnosis of lingual hematoma secondary to trauma (tongue biting), hypertension, and thrombocytopenia was made, and he was managed conservatively with intravenous midazolam and levetiracetam to prevent further seizure activity. Blood pressure was monitored, and antihypertensive medications were started and titrated. A percutaneous tracheotomy was performed to secure the airway. On the third day of admission the patient was diagnosed with ventilator-associated pneumonia caused by Enterobacter aerogenes. According to the antibiotic susceptibility pattern of the isolated bacteria, he was treated with meropenem (2 g every 8 h) for 10 days. The patient's lingual hematoma resolved over five to six days. The blood pressure was well controlled near 140/90 mmHg with a combination of several antihypertensive medications. The patient was discharged on hospital day 34. Follow-up at 2 weeks showed complete resolution of the lingual hematoma and traumatic ulcer (Image 3).

Discussion
Lingual hematoma is a rare and potentially life-threatening phenomenon due to its tendency to cause upper airway obstruction. Bleeding from the lingual artery or its branches can result in very drastic tongue enlargement. This enlargement results in the tongue being displaced in a cephalad and posterior direction, endangering the patient's life. A classification of acute enlargement of the tongue has been formulated by Renehan and Morton [6] and is based on the various etiologies encountered. Their classification system includes hemorrhage secondary to trauma, vascular anomaly, or disorder of coagulation; edema secondary to exudates or transudates; infarction; and infection.

Image 1. Massive swelling of the tongue protruding out of his mouth. Lateral view.

In this case, the patient had obviously suffered a lingual hematoma secondary to two subclasses of etiology (trauma and coagulopathy), both mediated by chronic renal failure, hypertensive crisis, and multiple tonic-clonic seizures in association with systemic lupus. Airway management is of prime importance in cases of acute tongue enlargement. Progressive lingual and sublingual swelling displaces the tongue posteriorly and cephalad, eventually producing dysphonia, drooling, dyspnoea, and finally stridor heralding upper airway obstruction. In the presence of these features, it is axiomatic that a definitive airway must be established.
Endotracheal intubation is often difficult to perform orally in such cases and is thus usually performed nasally. Blind nasal intubation can be extremely difficult and potentially traumatic. Fiberoptic laryngoscopy can be very helpful, except in cases of active hemorrhage obstructing the view. Given the difficulty of such intubations, they are often performed with the patient awake, because the ability to ventilate the patient with a bag-valve mask is unpredictable. Depending on the surgeon's experience, a cricothyroidotomy or a rapid emergent tracheostomy can be used for rapid airway establishment [7]. In our patient the decision was made to perform an immediate oral endotracheal intubation with the patient awake. Oral intubation was successfully completed on the first attempt. After airway management is achieved, the most critical management step is recognition of the event itself. Therefore, a careful evaluation of clinical signs and laboratory data is essential in order to make the right medical decision.

Image 2. Lingual haematoma. Frontal view.
Image 3. Complete resolution of the lingual hematoma and traumatic ulcer.

In cases with diffuse hemorrhage, a hematological disorder should be kept in mind. Patients with active SLE can present with immune thrombocytopenia. In our particular case of a patient with some of the numerous complications associated with SLE and chronic renal failure, the hemorrhage was due not only to a decreased platelet count but also to platelet dysfunction as a result of intrinsic platelet abnormalities and impaired platelet-vessel wall interaction [8]. This probably led to the unusually large haematoma of the tongue when he bit it, sufficient to cause an acute upper airway obstruction. The profuse bleeding associated with high blood pressure also contributed to the airway emergency. The management of these cases can vary depending on the cause of the hemorrhage. In cases with active bleeding, hemostasis can be obtained by local control, interventional radiology, or ligation of the injured vessels. Hematoma evacuation of the tongue is not usually indicated, because bleeding occurs into the intrinsic muscles of the tongue rather than into the potential anatomic fascial spaces [8]. Some authors believe that a surgical attempt to evacuate the blood may cause further swelling and subsequent worsening of the condition in the postoperative period. In cases due to inherited or acquired coagulopathies, treatment is usually conservative once the causative factors have been corrected. However, there is some controversy in cases in which patients are acutely undergoing anticoagulation therapy. Some of these cases are managed with anticoagulation reversal, whereas other authors prefer observation while maintaining anticoagulation to prevent rethrombosis of the targeted vessels [9]. In conclusion, lingual hematomas can be a deadly phenomenon requiring rapid identification and management. The first objective of treatment should be to guarantee airway safety. Once the airway is secured, the treatment focus switches to hemostasis and etiology assessment. In our particular case, a decreased platelet count and probable platelet dysfunction, associated with trauma and uncontrolled hypertension, played a significant role in the development of the lingual hematoma.
2019-08-03T13:03:26.706Z
2019-07-19T00:00:00.000
{ "year": 2019, "sha1": "91d1c9d8419a9feef1b0f2b01bfa5f767e7a9ff7", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.tcr.2019.100226", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c7a2080bc52d83642b1ac5a80e5cf2771917a930", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270723769
pes2o/s2orc
v3-fos-license
Exploring the Neuroprotective Potential of Desmodium Species: Insights into Radical Scavenging Capacity and Mechanisms against 6-OHDA-Induced Neurotoxicity In this study, we collected seven prevalent Taiwanese Desmodium plants, including three species with synonymous characteristics, in order to assess their antioxidant phytoconstituents and radical scavenging capacities. Additionally, we compared their inhibitory activities on monoamine oxidase (MAO) and 6-hydroxydopamine (6-OHDA) auto-oxidation. Subsequently, we evaluated the neuroprotective potential of D. pulchellum on 6-OHDA-induced nerve damage in SH-SY5Y cells and delved into the underlying neuroprotective mechanisms. Among the seven Desmodium species, D. pulchellum exhibited the most robust ABTS radical scavenging capacity and relative reducing power; correspondingly, it had the highest total phenolic and phenylpropanoid contents. Meanwhile, D. motorium showcased the best hydrogen peroxide scavenging capacity and, notably, D. sequax demonstrated remarkable prowess in DPPH radical and superoxide scavenging capacity, along with selective inhibitory activity against MAO-B. Of the aforementioned species, D. pulchellum emerged as the frontrunner in inhibiting 6-OHDA auto-oxidation and conferring neuroprotection against 6-OHDA-induced neuronal damage in the SH-SY5Y cells. Furthermore, D. pulchellum effectively mitigated the increase in intracellular ROS and MDA levels through restoring the activities of the intracellular antioxidant defense system. Therefore, we suggest that D. pulchellum possesses neuroprotective effects against 6-OHDA-induced neurotoxicity due to the radical scavenging capacity of its antioxidant phytoconstituents and its ability to restore intracellular antioxidant activities. 
Introduction
Parkinson's disease (PD) is a chronic neurodegenerative disorder affecting the central nervous system (CNS), typically afflicting individuals over the age of 55. Its primary clinical manifestations include motor impairments such as resting tremor, rigidity, bradykinesia, and postural instability, which stem from progressive neuronal degeneration in the nigrostriatal dopaminergic pathway. While levodopa remains a cornerstone therapeutic agent for PD, long-term treatment often leads to side effects such as levodopa-induced fluctuations and dyskinesias. In response, medical scientists have developed targeted drugs aimed at enhancing the activity of the central dopaminergic neuronal system, notably dopamine agonists, catechol-O-methyltransferase (COMT) inhibitors, and monoamine oxidase (MAO) inhibitors, particularly the latter [1]. MAO exists in two forms, MAO-A and MAO-B, each selectively distributed across various organs and preferentially metabolizing different neurotransmitters. MAO-A primarily metabolizes norepinephrine and serotonin, while MAO-B predominantly acts on dopamine. Consequently, their respective selective inhibitors have played pivotal roles in modulating neurotransmitter metabolism and have been utilized as therapeutic agents for conditions such as depression and PD [2]. Given that the oxidative deamination of dopamine catalyzed by MAO-B generates hydrogen peroxide and aldehydes, which readily cause oxidative damage in nerve cells, irreversible MAO-B inhibitors such as selegiline and rasagiline have demonstrated antioxidant and neuroprotective properties in in vitro and in vivo PD models. As a result, MAO-B inhibitors have been recognized for their multifaceted pharmacological activities and employed by clinicians in the management of PD [3]. Some researchers are actively exploring the potential of plant-derived or natural products with MAO-B inhibitory activity and neuroprotective effects as promising therapeutic agents for PD [4].
The causes of PD are multifactorial, encompassing genetic predisposition, infections, and exposure to environmental toxins. Epidemiological investigations have linked chronic exposure to herbicides and pesticides with an increased risk of PD. The induction of Parkinsonism by 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) strongly supports the environmental-toxin theory of PD. Additionally, subsequent research has revealed that the herbicide paraquat and the pesticide rotenone can also trigger PD-like symptoms [5]. The effects of these environmental toxins elucidate several key neuropathological mechanisms underlying PD. Two major hypotheses have emerged regarding the pathogenesis of PD. One prominent theory implicates mitochondrial dysfunction and subsequent oxidative stress, particularly involving toxic oxidative dopamine (DA) species such as DA-quinone. The metabolite 1-methyl-4-phenylpyridinium (MPP+), derived from the metabolism of MPTP by MAO-B, or rotenone selectively inhibits mitochondrial complex I of the electron transport chain. This inhibition disrupts oxidative phosphorylation, impairs ATP production, and elevates the generation of reactive oxygen species (ROS). Under normal circumstances in dopaminergic neurons, dopamine auto-oxidation and dopamine metabolism are prone to produce large amounts of ROS, even without the presence of the above-mentioned environmental toxins. Excessive ROS production leads to aberrant protein processing and triggers the apoptosis of dopaminergic neurons. Consequently, targeting cellular redox homeostasis to mitigate intracellular oxidative stress represents a promising therapeutic approach for PD [6]. Building upon these insights into PD neuropathology, 6-hydroxydopamine (6-OHDA) and rotenone are commonly employed to establish in vitro and in vivo models replicating PD-relevant neuropathology and behavioral phenotypes. These models facilitate the assessment of the potential efficacy of plants, natural products, or compounds as therapeutic interventions for the prevention and treatment of PD [7].

The genus Desmodium, a member of the Fabaceae family, has a long history of use as traditional folk medicine in Taiwan. There are approximately 350 species of the genus Desmodium worldwide, primarily found in tropical and subtropical regions, with about 48 species recognized for their medicinal properties [8]. In Taiwan's flora, eighteen species have been recorded, with sixteen being native [9]. Notably, ten of these species, including D. caudatum (Thunb.) DC., D. gangeticum (L.) DC., D. heterocarpon (L.) DC., D. heterophyllum (Willd.) DC., D. laxiflorum DC., D. microphyllum (Thunb.) DC., D. multiflorum DC., D. renifolium (L.) Schindl., D. sequax Wall., and D. triflorum (L.) DC.,
are recognized for their medicinal value, often employed in treating various ailments such as cardiovascular syndromes, hepatitis, pneumonia, nephritis, and parasitic diseases. Recent pharmacological studies have revealed the diverse therapeutic effects of Desmodium plants, including their antioxidant, anti-inflammatory, cardioprotective, hepatoprotective, sedative, and antipyretic properties [8,10-13]. Their phytoconstituents include flavonoids, phenylpropanoids, phenolic compounds, and alkaloids, with the former two being the main classes [8,10]. Notably, three commonly used species in Taiwan that were initially classified under the Desmodium genus are now listed under different genera in "The Plant List". The accepted scientific names of these three homotopic synonym species, namely D. motorium (Houtt.) Merr., D. pulchellum (L.) Benth., and D. triquetrum (L.) DC., are Codariocalyx motorium (Houtt.) H. Ohashi, Phyllodium pulchellum (L.) Desv., and Tadehagi triquetrum (L.) H. Ohashi in "The Plant List" (http://www.theplantlist.org, accessed on 16 May 2024), respectively. The traditional uses and pharmacological properties of these three species are the same as those of the above-mentioned Desmodium species [8,14,15]. Furthermore, Cai's report has indicated that D. pulchellum has MAO inhibitory activity [16]. However, there have been few scientific studies reporting the neuroprotective activities of Desmodium species. Therefore, we collected seven common Desmodium plants, including the three homotopic synonym species, from Taiwan's wild low-altitude areas. Initially, this investigation involved comparing the radical scavenging capacities of the seven Desmodium species against 2,2-diphenyl-1-picrylhydrazyl (DPPH), 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) (ABTS), and ROS using microtiter spectrophotometric and spectrofluorimetric methods in vitro. As the radical scavenging capacities of plants are closely associated with their reducing power [17], we also compared the relative reducing power of the seven Desmodium species. As phenolic compounds, flavonoids, and phenylpropanoids are the major antioxidant phytoconstituents of Desmodium species [8,10], the total phenolic, flavonoid, and phenylpropanoid contents in the seven Desmodium species were also quantified. Subsequently, the inhibitory activities of the seven Desmodium species on MAO-A and MAO-B were further analyzed. We then evaluated and compared the effects of the three Desmodium plants with higher antioxidant contents and superior radical scavenging capacities on 6-OHDA auto-oxidation and on neuronal damage induced by 6-OHDA and rotenone in SH-SY5Y cells. Finally, we delved into the neuroprotective mechanism of D. pulchellum against 6-OHDA-induced neuronal damage in human neuroblastoma SH-SY5Y cells. Our comprehensive study aims to elucidate the neuroprotective potential of the Desmodium species, shedding light on their therapeutic applications in the context of neurological disorders, especially PD.
Antioxidant Phytoconstituent Contents of Seven Desmodium Plants
Phenolic compounds represent a significant class of secondary metabolites found widely across plant species. These compounds endow plants with antioxidant properties and various biological activities, including anti-inflammatory and neuroprotective effects [18]. Consequently, the total phenolic content serves as an important indicator of a medicinal plant's biological activity and potential medicinal value. The Folin-Ciocalteu phenol (FCP) method, based on the formation of blue complexes through redox reactions, offers high sensitivity and accuracy for the spectrophotometric determination of phenolic compounds [17,19]. (+)-Catechin, a member of the flavan-3-ol subclass, was used as a reference standard. Over the concentration range of 0-250 µg/mL, a highly linear relationship between concentration and absorbance was observed (y = 0.0097x − 0.0026, R² = 0.998). The absorbance values of all the methanolic filtrates of the Desmodium plants (5 mg/mL) fell within this linear range. Upon conversion using the provided linear formula, the total phenolic content present in the Desmodium plants is depicted in Figure 1A. Among the seven Desmodium plants, D. pulchellum contained the highest total phenolic content, followed by D. motorium and D. sequax, while D. gangeticum displayed the lowest content.

Similar to other phenolic compounds, flavonoids are renowned for their antioxidant properties and diverse pharmacological activities [20,21]. The total flavonoid content also serves as a significant parameter in assessing the potential medicinal value of a plant. Aluminum solution reacts with flavonoids such as flavonols or flavan-3-ols to form an aluminum-flavonoid complex through ion-chelating reactions [22]. The flavonol compound quercetin was used as a reference standard. A highly linear relationship between concentration (0-250 µg/mL) and absorbance was observed (y = 0.0025x − 0.0057, R² = 0.999). The absorbance values of all the methanolic filtrates of the Desmodium plants (10 mg/mL) were within the calibration linear range of quercetin. Based on the provided linear formula, the total flavonoid content present in the Desmodium plants is depicted in Figure 1B. D. motorium contained the highest total flavonoid content, followed by D. pulchellum and D. triflorum, while D. triquetrum displayed the lowest content.
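The conversion from a measured absorbance to a phytoconstituent content via the standard curves above is straightforward to sketch; a minimal example using the reported (+)-catechin calibration line, with a hypothetical sample absorbance:

```python
# Minimal sketch: convert a sample absorbance to total phenolic content
# using the reported (+)-catechin standard curve y = 0.0097x - 0.0026,
# where x is the concentration in µg/mL of catechin equivalents (CE).
SLOPE, INTERCEPT = 0.0097, -0.0026

def catechin_equivalents(absorbance):
    """Invert the calibration line: µg/mL catechin equivalents."""
    return (absorbance - INTERCEPT) / SLOPE

def phenolics_per_mg_filtrate(absorbance, filtrate_mg_per_ml=5.0):
    """Normalize by the assayed filtrate concentration (5 mg/mL in the
    phenolic assay described above); result in µg CE per mg filtrate."""
    return catechin_equivalents(absorbance) / filtrate_mg_per_ml

a = 0.45  # hypothetical absorbance, inside the 0-250 µg/mL linear range
print(f"{catechin_equivalents(a):.1f} µg/mL CE")
print(f"{phenolics_per_mg_filtrate(a):.2f} µg CE per mg filtrate")
```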
Phenylpropanoids are a class of secondary metabolites synthesized from phenylalanine and tyrosine through the shikimic acid pathway, serving as intermediate compounds in the biosynthesis of various plant-defense secondary metabolites such as flavonoids, coumarins, lignins, and stilbenes. Similarly to phenolic compounds and flavonoids, phenylpropanoids are recognized for their antioxidant properties and diverse pharmacological activities [23]. Arnow reagent, comprising a mixture of sodium molybdate and sodium nitrite, reacts with phenylpropanoids (dihydroxycinnamic derivatives) [24]. The phenylethanoid glycoside verbascoside was used as a reference standard. A highly linear relationship between concentration (0-100 µg/mL) and absorbance was observed (y = 0.0007x + 0.0245, R² = 0.993). The absorbance values of all the methanolic filtrates of the Desmodium plants (1 mg/mL) were within the calibration linear range of verbascoside. According to the above linear formula, the total phenylpropanoid content present in the Desmodium plants is depicted in Figure 1C. D. pulchellum contained the highest total phenylpropanoid content, followed by D. motorium and D. sequax, while D. gangeticum displayed the lowest content.

Radical Scavenging Capacities of Seven Desmodium Plants
The DPPH radical is a stable nitrogen radical with a maximum absorption at 517 nm, resulting in a deep violet color. When an electron or hydrogen atom is donated from plant extracts or natural products, the DPPH solution turns colorless or pale yellow and, consequently, the absorbance value at 517 nm decreases. This assay has been widely used for assessing the antioxidant activity of plant extracts and natural products [25]. In this investigation, the absorbance of the DPPH radical (300 µM) at 517 nm was approximately 0.92. The reference standard (+)-catechin exhibited significant scavenging ability for the DPPH radical at concentrations ranging from 0 to 25 µg/mL. A strong linear relationship between concentration and scavenging ability was observed (R² = 0.995), with an IC50 value of 14.09 ± 0.17 µg/mL for (+)-catechin against the DPPH radical. The scavenging percentages of all the methanolic filtrates of the Desmodium plants (at 0-5 mg/mL) against the DPPH radical also displayed a concentration-dependent linear relationship (R² = 0.982-0.995). Based on the relative conversion of the Desmodium plants and the reference standard (+)-catechin at the concentration providing 50% DPPH scavenging, the DPPH scavenging capacities of these Desmodium plants relative to (+)-catechin (i.e., the (+)-catechin equivalent DPPH radical scavenging capability, CEDSC) are shown in Figure 2A. D. sequax exhibited the highest DPPH radical scavenging capacity, followed by D. pulchellum and D. motorium, while D. gangeticum demonstrated the lowest capacity.
The ABTS method commonly employs Trolox as a positive control and compares the reactivity of test antioxidants with it; hence, it is often referred to as the Trolox equivalent antioxidant capacity (TEAC) assay. This assay is also widely used to evaluate the antioxidant activity of crude plant extracts and purified compounds [25,26]. In this investigation, the absorbance of the ABTS radical (8 mM) at 734 nm was approximately 0.73. The reference standard Trolox exhibited effective scavenging ability for the ABTS radical at concentrations ranging from 0 to 50 µM, showing a highly linear relationship between concentration and scavenging ability (R² = 0.999), with an IC50 value of Trolox for the ABTS radical of 28.61 ± 1.24 µM. The scavenging percentages of all the methanolic filtrates of the Desmodium plants (at 0-5 mg/mL) for the ABTS radical also showed a concentration-dependent linear relationship (R² = 0.985-0.993). According to the equivalent activity of the Desmodium plants and the reference standard Trolox at concentrations providing 50% ABTS scavenging, the ABTS scavenging capacities of the Desmodium plants relative to Trolox (TEAC values) are illustrated in Figure 2B. D. pulchellum exhibited the highest ABTS radical scavenging capacity, followed by D. sequax and D. motorium, while D. gangeticum displayed the lowest capacity.

The relative reducing power (RRP) assay relies on the electron-donating activity of plant extracts, natural products, or compounds, thereby facilitating the reduction of Fe³⁺ [17]. In this investigation, the RRP of the reference standard ascorbic acid exhibited a strong linear relationship within the concentration range of 0-20 µM (R² = 0.995). Similarly, the RRP of all the methanolic filtrates of the Desmodium plants displayed a concentration-dependent linear relationship within the concentration range of 0-2.5 mg/mL (R² = 0.990-0.999). Comparing the linear slopes between the concentration and absorbance of the Desmodium plants and the reference standard (ascorbic acid), the RRP values of these Desmodium plants relative to ascorbate were determined, as illustrated in Figure 2C. D. pulchellum exhibited the highest reducing power, followed by D. sequax and D. motorium, while D. gangeticum displayed the lowest capacity.
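Here is a minimal sketch of how an IC50 and a Trolox-equivalent capacity could be derived from the linear concentration-scavenging relationships described above; the data points are hypothetical, not measurements from this study.

```python
import numpy as np

def ic50_from_linear_fit(conc, scavenging_pct):
    """Fit % scavenging vs. concentration with a straight line (the
    relationships reported above are linear) and solve for 50%."""
    slope, intercept = np.polyfit(conc, scavenging_pct, 1)
    return (50.0 - intercept) / slope

# Hypothetical dose-response data for one extract (mg/mL, % scavenging)
extract_conc = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
extract_scav = np.array([12.0, 22.0, 41.0, 58.0, 74.0, 90.0])

# Hypothetical Trolox calibration (µM, % scavenging)
trolox_conc = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0])
trolox_scav = np.array([10.0, 19.0, 36.0, 52.0, 69.0, 86.0])

ic50_extract = ic50_from_linear_fit(extract_conc, extract_scav)  # mg/mL
ic50_trolox = ic50_from_linear_fit(trolox_conc, trolox_scav)     # µM

# TEAC-style normalization: µmol Trolox giving the same 50% effect as
# 1 g of extract; (µmol/L) / (g/L) = µmol per g.
teac = ic50_trolox / ic50_extract
print(f"extract IC50 = {ic50_extract:.2f} mg/mL")
print(f"Trolox IC50 = {ic50_trolox:.2f} µM")
print(f"TEAC ~ {teac:.1f} µmol Trolox eq. per g extract")
```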
ROS are byproducts of normal oxygen metabolism in cells and organisms. Due to their high reactivity as free radicals, they can readily induce oxidative damage to cells and tissues. ROS typically encompass superoxide, hydrogen peroxide, and the hydroxyl radical. Superoxide, an initial ROS, is primarily generated through intracellular metabolic reactions, notably from the mitochondrial respiratory chain (particularly complex I and complex III), as well as from enzymatic reactions such as the conversion of xanthine to uric acid and hydrogen peroxide by xanthine oxidase in various cell types [27]. In this investigation, superoxide production was simulated by mimicking the physiological reaction between xanthine and xanthine oxidase, quantified by the absorbance value of the formazan blue dye resulting from the reduction of nitroblue tetrazolium (NBT) by the generated superoxide. As superoxide is primarily detoxified through dismutation by intracellular superoxide dismutase (SOD), SOD serves as a reference standard for assessing superoxide scavenging capacity. The reduction rate of NBT through the reaction of xanthine and xanthine oxidase was approximately 30.5. The reference standard SOD exhibited effective scavenging ability for superoxide at concentrations ranging from 0 to 500 mU/mL, showing a strong logarithmic relationship between concentration and scavenging ability (R² = 0.985), with an IC50 value of 144.74 ± 5.66 mU/mL. The methanolic filtrates of the Desmodium plants (at 0-10 mg/mL) also displayed concentration-dependent scavenging activities for superoxide, with a logarithmic relationship (R² = 0.981-0.993). According to the equivalent activity of the Desmodium plants and the reference standard SOD at the concentration providing 50% superoxide scavenging, the superoxide scavenging capacities of these Desmodium plants relative to SOD (SOD equivalents) are illustrated in Figure 3A. D. sequax exhibited the highest superoxide scavenging capacity, followed by D. pulchellum and D. motorium, while D. gangeticum showed the lowest capacity.
Hydrogen peroxide, primarily formed from the monovalent reduction of superoxide by SOD or the divalent reduction of oxygen by xanthine oxidase, is relatively inert and non-cytotoxic at or below concentrations of about 20-50 µM. However, excessive hydrogen peroxide entering cells can lead to the generation of the highly reactive hydroxyl radical in the presence of metal ions, potentially causing cellular damage [28]. In this investigation, the hydrogen peroxide scavenging capacities of the methanolic filtrates of the Desmodium plants were assessed based on the H2O2-dependent oxidation of homovanillic acid (3-methoxy-4-hydroxyphenylacetic acid, HVA) to a highly fluorescent dimer (2,2′-dihydroxy-3,3′-dimethoxydiphenyl-5,5′-diacetic acid), catalyzed by horseradish peroxidase. The fluorescence intensity of HVA in the presence of hydrogen peroxide (500 µM) at 425 nm was approximately 24,750. The reference standard Trolox exhibited effective scavenging ability for hydrogen peroxide at concentrations ranging from 0 to 100 µM, showing a strong linear relationship between concentration and scavenging ability (R² = 0.997), with an IC50 value of Trolox for hydrogen peroxide of 61.02 ± 1.22 µM. The methanolic filtrates of the Desmodium plants (at 0-5 mg/mL) also exhibited concentration-dependent scavenging activities for hydrogen peroxide, with a linear relationship (R² = 0.985-0.989). Based on the relative conversion of the Desmodium plants and the reference standard Trolox at the concentration providing 50% hydrogen peroxide scavenging, the hydrogen peroxide scavenging capacities of these Desmodium plants relative to Trolox (Trolox equivalents) are illustrated in Figure 3B. D. motorium exhibited the highest hydrogen peroxide scavenging capacity, followed by D. triquetrum and D. pulchellum, while D. gangeticum showed the lowest capacity.
Finally, the hydroxyl radical, a highly reactive oxygen species in cells, is primarily formed through the Fenton reaction, in which hydrogen peroxide reacts with metal ions such as copper or iron. It attacks lipids, polypeptides, proteins, and nucleic acids in the cell membrane, cytosol, and nucleus, causing cell damage and death [29]. Because of the high reactivity of hydroxyl radicals, hydroxyl radical scavenging activity is a necessary item in evaluating the ROS scavenging activity of plants and natural products. Therefore, the Fenton reaction was established in vitro with hydrogen peroxide and ferrous sulfate. Luminol chemiluminescence is a very sensitive method for monitoring hydroxyl radicals [30]. In this investigation, the luminescence intensity of luminol (2 mM) in the Fenton reaction of hydrogen peroxide and ferrous sulfate was approximately 20,816. The reference standard Trolox exhibited effective scavenging ability for the hydroxyl radical, showing a strong logarithmic relationship (R² = 0.989), with an IC50 value of Trolox for the hydroxyl radical of 19.79 ± 0.91 µg/mL. The scavenging percentages of all the methanolic filtrates of the Desmodium plants (at 0-5 mg/mL) for the hydroxyl radical also presented a log-linear relationship (R² = 0.971-0.992). According to the equivalent activity of the Desmodium plants and the reference standard Trolox at the concentration providing 50% hydroxyl radical scavenging, the hydroxyl radical scavenging capacities of these Desmodium plants relative to Trolox (Trolox equivalents) are illustrated in Figure 3C. D. triquetrum exhibited the highest hydroxyl radical scavenging capacity, followed by D. pulchellum and D. caudatum, while D. triflorum showed the lowest capacity.
Additionally, we conducted a Pearson correlation analysis to examine the relationships between the aforementioned antioxidant phytoconstituent contents and radical scavenging capacities across the seven Desmodium plants. The results are presented in Table 1, revealing significant positive correlations between certain capacities and contents. Notably, there were strong positive correlations between total phenolic contents and several radical scavenging capacities, including those for the DPPH radical, the ABTS radical, and superoxide (r = 0.835, 0.98, and 0.848; p < 0.05, p < 0.01, and p < 0.05, respectively). These radical scavenging capacities (DPPH radical, ABTS radical, and superoxide) also exhibited highly positive correlations with the total phenylpropanoid contents (r = 0.854, 0.903, and 0.83, respectively; all p < 0.05). Moreover, the relative reducing power demonstrated strong correlations with the total phenolic contents (r = 0.978; p < 0.01) and phenylpropanoid contents (r = 0.892; p < 0.05). Furthermore, there was a positive and high correlation between the relative reducing power and the aforementioned radical scavenging capacities (DPPH radical, ABTS radical, and superoxide; r = 0.93, 0.995, and 0.939, respectively; all p < 0.01). Subsequently, we calculated the relative reducing ability per milligram of total phenolic content in the seven Desmodium plants, and then analyzed the Pearson correlation and linear relationship between this relative phenolic reducing ability and the radical scavenging capacities. The results, depicted in Figure 4, indicate a significant Pearson correlation and linear relationship between the relative reducing power of the total phenolics in the seven Desmodium plants and the radical scavenging capacities against the DPPH radical and superoxide (r = 0.965 and 0.945, respectively; both p < 0.01).
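A correlation screen of this kind is straightforward to reproduce; a minimal sketch with scipy, using hypothetical per-species values rather than the measured data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical values for the seven plants (not the measured data):
# total phenolics (µg CE/mg) and ABTS capacity (TEAC units).
phenolics = np.array([12.1, 9.8, 8.5, 4.2, 3.9, 3.1, 2.2])
abts_teac = np.array([95.0, 80.0, 71.0, 35.0, 30.0, 26.0, 15.0])

r, p = pearsonr(phenolics, abts_teac)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")  # n = 7 species

# With n = 7 (df = 5), |r| must exceed ~0.754 for two-tailed p < 0.05,
# which is why only the strongest associations reach significance.
```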
MAO Inhibitory Activities of Seven Desmodium Plants
The MAO inhibitory activities of the methanolic filtrates of the Desmodium plants were assessed using kynuramine as a nonselective substrate. Kynuramine undergoes deamination by MAO to produce 4-hydroxyquinoline, whose fluorescence intensity was measured using a fluorescence reader. To discern the inhibitory effects of the methanolic filtrates of the Desmodium plants on MAO-B and MAO-A, the selective MAO-A inhibitor clorgyline and the selective MAO-B inhibitor pargyline were employed. In the present investigation, there was a good linear relationship (R² = 0.999) between the concentration (0-12.5 µM) and fluorescence intensity of 4-hydroxyquinoline. The pretreatment of brain homogenate solely with pargyline (excluding the methanolic filtrate of the Desmodium plants) was considered as 100% MAO-A activity. The MAO inhibitory percentages of all the methanolic filtrates of the Desmodium plants (at 0-25 mg/mL) against MAO-A activity exhibited a concentration-dependent relationship (R² = 0.977-0.993). The IC50 values for MAO-A activity of all the methanolic filtrates of the Desmodium plants are presented in Table 2. Among the seven Desmodium plants, D. motorium demonstrated the most potent inhibitory activity against MAO-A, followed by D. caudatum and D. triquetrum, whereas D. sequax exhibited the weakest inhibitory activity against MAO-A. Similarly, the pretreatment of brain homogenate solely with clorgyline (excluding the methanolic filtrate of the Desmodium plants) was considered as 100% MAO-B activity. The MAO inhibitory percentages of all the methanolic filtrates of the Desmodium plants (at 0-25 mg/mL) against MAO-B activity also showed a concentration-dependent relationship (R² = 0.985-0.998). The IC50 values for MAO-B activity of all the methanolic filtrates of the Desmodium plants are presented in Table 2. D. sequax exhibited the most potent inhibitory activity against MAO-B, followed by D. motorium and D. triflorum, while D. pulchellum showed the weakest inhibitory activity against MAO-B. Furthermore, we conducted a ratio analysis of the IC50 values with respect to MAO-B and MAO-A for the seven Desmodium plants in order to confirm their relative selective inhibitory effects on MAO-B and MAO-A. D. sequax displayed the best selective inhibitory activity against MAO-B, whereas D. motorium exhibited the best selective inhibitory activity against MAO-A.
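The selectivity analysis described above reduces to a ratio of the two IC50 values; a minimal sketch follows. The ratio convention used here, IC50(MAO-A)/IC50(MAO-B) as a MAO-B selectivity index, is one common definition and an assumption on our part, as are the IC50 numbers standing in for the Table 2 entries.

```python
# Hypothetical IC50 values (mg/mL); the measured numbers are in Table 2.
ic50 = {
    "D. sequax":     {"MAO-A": 18.0, "MAO-B": 2.5},
    "D. motorium":   {"MAO-A": 3.0,  "MAO-B": 6.0},
    "D. pulchellum": {"MAO-A": 8.0,  "MAO-B": 20.0},
}

for species, v in ic50.items():
    # Selectivity index for MAO-B: IC50(MAO-A) / IC50(MAO-B);
    # values > 1 indicate preferential MAO-B inhibition.
    si_b = v["MAO-A"] / v["MAO-B"]
    preference = "MAO-B-selective" if si_b > 1 else "MAO-A-selective"
    print(f"{species}: SI(B) = {si_b:.2f} -> {preference}")
```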
6-OHDA Auto-Oxidation Inhibition of Three Desmodium Plants
The compound 6-OHDA, derived from dopamine, is highly unstable and prone to auto-oxidation, resulting in the formation of the intermediate p-quinone and ROS such as superoxide and hydrogen peroxide. The toxicity of 6-OHDA is directly linked to the levels of the intermediate p-quinone generated during this process. The p-quinone produced from 6-OHDA has a maximum absorption at 490 nm. Thus, an assay of 6-OHDA auto-oxidation inhibition can serve as an initial assessment of whether crude extracts or purified compounds from plants possess protective effects against 6-OHDA-induced neuronal damage [31]. Following the assessment of the radical scavenging capacities of the seven Desmodium plants, the three species exhibiting the highest radical scavenging capacities (D. pulchellum, D. sequax, and D. motorium) were selected for further evaluation of their inhibitory effects against 6-OHDA auto-oxidation, with N-acetylcysteine (NAC) serving as a positive control. The inhibitory activities of the three Desmodium plants, along with NAC, against 6-OHDA auto-oxidation are depicted in Figure 5. The inhibitory activities of the three Desmodium plants against 6-OHDA auto-oxidation were concentration-dependent in the range from 25 to 500 µg/mL. Among them, D. pulchellum exhibited the most significant inhibitory effects (p < 0.05, p < 0.01, p < 0.001), followed by D. motorium and D. sequax (p < 0.05, p < 0.01, p < 0.001). NAC demonstrated remarkable inhibitory effects, achieving inhibition percentages of 83% to 88% at concentrations of 1 and 2 mM (p < 0.001).
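Quantifying inhibition in this assay reduces to comparing the p-quinone absorbance at 490 nm with and without the test sample; a minimal sketch, with hypothetical absorbance readings:

```python
def autoxidation_inhibition(a490_control, a490_sample, a490_blank=0.0):
    """Percent inhibition of 6-OHDA auto-oxidation from the A490 of the
    p-quinone chromophore: control = 6-OHDA alone, sample = 6-OHDA plus
    extract, blank = extract alone (its own background absorbance)."""
    return 100.0 * (1.0 - (a490_sample - a490_blank) / a490_control)

# Hypothetical readings for one extract concentration
print(f"{autoxidation_inhibition(0.62, 0.21, 0.03):.1f}% inhibition")
```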
Protective Effects against 6-OHDA-Induced or Rotenone-Induced Neurotoxicity in SH-SY5Y Cells

The human neuroblastoma SH-SY5Y cell line, possessing catecholaminergic and neuronal properties which express tyrosine hydroxylase and dopamine β-hydroxylase to synthesize dopamine and noradrenaline, is often regarded as an important in vitro model for investigating the intracellular molecular pathways of neural cells and developing neuroprotective drugs for neurodegenerative diseases, especially Alzheimer's disease (AD) or PD [32]. Utilizing the dopaminergic neurotoxin 6-OHDA is a common method to establish both in vitro and in vivo PD models, allowing for the assessment of the efficacy of potential treatments derived from plants, natural products, or compounds. Referring to previous studies [33], 25 µM 6-OHDA was used to induce neuronal damage in this study, resulting in a significant reduction in cell viability at 24 h post-treatment (58.4 ± 2.8%; p < 0.001) (Figure 6A). Among the tested Desmodium plants (D. pulchellum, D. sequax, and D. motorium) chosen for their potent radical scavenging capabilities, we observed promising neuroprotective effects against 6-OHDA-induced neuronal damage. NAC was also utilized as a positive control. The neuroprotective activities of the three Desmodium plants and NAC against 6-OHDA neuronal damage are shown in Figure 6. Concentration-dependent responses are evident, with all three Desmodium species (25-500 µg/mL) exhibiting the ability to mitigate 6-OHDA-induced neuronal damage. Notably, D. pulchellum demonstrated the most pronounced reversal of decreased cell viability induced by 6-OHDA (p < 0.001), while D. motorium and D. sequax had the equivalent effects of reversing cell viability (p < 0.01, p < 0.001). Additionally, NAC showed comparable efficacy in restoring cell viability at the tested concentrations (1 and 2 mM), with its effect at 2 mM equivalent to that of D. motorium or D. sequax at 500 µg/mL (p < 0.001; Figure 6A).
Rotenone, a natural lipophilic pesticide derived from the Fabaceae family, readily penetrates the blood-brain barrier (BBB) to reach the nigrostriatal dopaminergic neuronal pathway. It can specifically induce neurotoxicity in dopaminergic neurons through interrupting the mitochondrial complex I of the electron transport chain, leading to the generation of ROS. We further evaluated the neuroprotective properties of the three Desmodium plants against rotenone-induced neuronal damage in the SH-SY5Y cells. Following rotenone exposure for 24 h, cell viability decreased significantly (47.2 ± 1.5%; p < 0.001; Figure 6B). Similar to the observations with 6-OHDA, all three Desmodium plants (25-500 µg/mL) possessed concentration-dependent neuroprotective effects against rotenone-induced neuronal damage. Once again, D. pulchellum demonstrated the most potent reversal of decreased cell viability induced by rotenone (p < 0.05, p < 0.01, p < 0.001), followed by D. motorium and D. sequax (p < 0.01, p < 0.001). NAC also demonstrated significant efficacy in restoring cell viability, with its effect at 1 mM equivalent to that of D. motorium or D. sequax at 500 µg/mL (p < 0.001; Figure 6B).

Intracellular ROS Levels and Antioxidant Enzymes Activities in 6-OHDA-Treated SH-SY5Y Cells

As the generation of ROS is an intermediate process in 6-OHDA- or rotenone-induced neurotoxicity, this study employed a non-fluorescent probe, 2′,7′-dichloro-dihydro fluorescein diacetate (DCFH-DA), to label intracellular ROS. DCFH-DA has been widely utilized as a fluorescent probe for detecting intracellular ROS levels as it can freely penetrate cell membranes and is hydrolyzed by intracellular esterases to produce dichloro-dihydro fluorescein (DCFH). Upon the production of intracellular ROS from 6-OHDA or rotenone treatment, DCFH is oxidized by intracellular ROS and converted into the fluorescent compound 2′,7′-fluorescein (DCF). Subsequently, the intensity of DCF fluorescence can be quantified using a fluorescent microplate reader [34]. Given that D. pulchellum demonstrated the most significant reversing effects on the decreased cell viability induced by 6-OHDA in the SH-SY5Y cells, we focused solely on evaluating the effects of D. pulchellum on the increase in the intracellular ROS levels caused by the 6-OHDA treatment. At 24 h after the 6-OHDA treatment, the fluorescent intensity in the SH-SY5Y cells was enhanced by 6-OHDA to 590.5 ± 37.2%, compared to the control group (100%; see Figure 7). D. pulchellum (25-250 µg/mL) effectively attenuated the increase in the intracellular ROS levels caused by 6-OHDA in a concentration-dependent manner, with the highest concentration (250 µg/mL) reducing approximately half of the increase in the fluorescence intensity (p < 0.001). NAC at 2 mM resulted in a reduction of approximately 80% in the increase in the fluorescent intensity induced by 6-OHDA (p < 0.001; Figure 7).
In general, intracellular ROS levels and antioxidant defense systems maintain a delicate balance to sustain normal cellular function and survival. However, the overproduction of intracellular ROS in response to external infections, internal inflammation, or various stimuli can disrupt this balance, leading to intracellular oxidative stress. ROS indiscriminately attack intracellular molecules such as carbohydrates, nucleic acids, lipids, and proteins, resulting in oxidative damage and alterations in cell structure and function. Ultimately, this oxidative damage can lead to cellular injury and contribute to the development of various diseases, including aging, cancer, and neurodegenerative disorders [35]. Intracellular antioxidant defense systems can be categorized into nonenzymatic and enzymatic systems. Glutathione (GSH), composed of glycine, cysteine, and glutamic acid, is a prominent intracellular nonenzymatic antioxidant. Given its ability to directly scavenge ROS, the intracellular GSH content serves as a crucial indicator of intracellular oxidative stress and the efficacy of antioxidant defense mechanisms [36]. Under the conditions of oxidative stress, lipids, particularly polyunsaturated fatty acids, are highly susceptible to oxidative damage within cells. Malondialdehyde (MDA) serves as a reliable biomarker of cellular oxidative stress, as it is one of the end-products of polyunsaturated fatty acid peroxidation [37]. To assess the impact of antioxidant capacity on the neuroprotective effects of D. pulchellum against 6-OHDA-induced neuronal damage, we evaluated the levels of GSH and MDA in the SH-SY5Y cells. Our findings, obtained 24 h after the 6-OHDA treatment, revealed decreased intracellular GSH levels and increased intracellular MDA levels in the 6-OHDA-treated SH-SY5Y cells compared to the control group (p < 0.001) (Figure 8).
The treatment with D. pulchellum (50-250 µg/mL) effectively reversed this phenomenon induced by 6-OHDA in a concentration-dependent manner (p < 0.05, p < 0.01, p < 0.001). Similarly, the treatment with NAC at 2 mM significantly attenuated the changes induced by 6-OHDA (p < 0.001; Figure 8).

Superoxide dismutase (SOD), catalase, glutathione peroxidase (GPx), and glutathione reductase (GR) represent the primary enzymatic defense system against intracellular oxidative stress. SOD functions to neutralize superoxide radicals, converting them into hydrogen peroxide (H2O2), which is further broken down to water by catalase and GPx. To investigate the impact of antioxidant capacity on the neuroprotective effects of D. pulchellum against 6-OHDA-induced neuronal damage, we evaluated the activities of SOD, catalase, GPx, and GR in the SH-SY5Y cells. At 24 h following the 6-OHDA treatment, our findings indicated a reduction in antioxidant enzyme activities in the 6-OHDA-treated SH-SY5Y cells compared to the control group (p < 0.001; Figure 9). The treatment with D. pulchellum (50-250 µg/mL) significantly increased the activities of SOD and GPx in a concentration-dependent manner when compared to the 6-OHDA-treated cells (p < 0.05, p < 0.01, p < 0.001). Notably, a significant elevation in the activities of catalase and GR was observed only with D. pulchellum pretreatment at 100-250 µg/mL (p < 0.01, p < 0.001). Additionally, the treatment with NAC at 2 mM effectively enhanced the antioxidant enzyme activities compared to those observed in 6-OHDA-treated cells (p < 0.001; Figure 9).
Discussion

Based on the previous findings regarding the Desmodium genus, various species such as D. caudatum, D. gangeticum, D. pulchellum, and D. triquetrum have been associated with cardiovascular protection, liver protection, and anticancer effects due to their antioxidant properties. The primary active phytoconstituents in these species are phenolic compounds, flavonoids, and phenylpropanoids [8,10,14,15]. Our previous report has also highlighted that D. triflorum exhibited radical scavenging activities against DPPH and TEAC radicals [38]. Chidambaram et al. have indicated that D. motorium exhibited radical scavenging activities against DPPH and TEAC radicals [39]. Another study by Tsai et al. compared the antioxidant phytoconstituent contents and antioxidant activities of 10 Desmodium species in Taiwan, identifying that D. sequax had the highest total phenolic content and antioxidant activity [40]. To further explore these findings, we collected seven Taiwanese Desmodium species, three of which overlapped with Tsai's study, namely, D. gangeticum, D. sequax, and D. triflorum. Our aim was to compare the antioxidant phytoconstituents, radical scavenging abilities, and relative reducing power among these species. Among the seven common Desmodium species, D. pulchellum exhibited the highest contents of total phenolics and phenylpropanoids, as well as superior ABTS radical scavenging capacity and relative reducing power. Conversely, D. motorium showed the highest flavonoid content but lower total phenolic and phenylpropanoid contents when compared to D. pulchellum. Additionally, its radical scavenging capacity against DPPH and ABTS radicals was inferior to that of D. pulchellum and D. sequax. Although D. sequax had lower total phenolic and phenylpropanoid content compared to D. pulchellum and D. motorium, it displayed the best DPPH radical scavenging capacity. It is worth noting that the total phenolic and flavonoid contents of D. triflorum and D.
sequax differed from those in our previous findings and the values reported by Tsai [38,40]. This variation could be attributed to factors such as harvesting time, collection location, extraction method, and experimental procedures. Furthermore, in our study, the content of phytoconstituents refers to the original medicinal materials of D. triflorum and D. sequax, whereas in previous studies and Tsai's report, it pertained to the content of phytoconstituents in the methanolic extract of these species. Considering the oxidative stress hypothesis as a neuropathological mechanism in neurodegenerative disorders, it is crucial to acknowledge that ROS are the primary radicals within cells and are responsible for inducing intracellular oxidative stress. Therefore, we further evaluated the ROS scavenging capacities of the seven common Desmodium species. Among the evaluated species, D. sequax exhibited the best superoxide scavenging capacity. D. motorium had the best hydrogen peroxide scavenging capacity, and D. triquetrum had the best hydroxyl radical scavenging capacity. The radical scavenging capacities of D. pulchellum on superoxide and hydroxyl radical were inferior to those of D. sequax and D. triquetrum, respectively.

The above-mentioned techniques used to evaluate the total phytoconstituent contents and radical scavenging capacities of plants or natural products involve the chemical principles of hydrogen atom transfer (HAT) or single electron transfer (SET). For example, the FCP assay and RRP assay predominantly rely on SET mechanisms; conversely, the TEAC assay combines aspects of both SET and HAT mechanisms, and the DPPH assay is primarily based on the SET mechanism, with minor contributions from marginal HAT reactions [25]. Numerous studies have revealed a strong linear correlation between the results of DPPH and ABTS assays and those obtained from the FCP assay. Similarly, a significant correlation has been observed between the RRP and the FCP assays [17,19,25]. The findings of Tsai and Chidambaram have suggested a high correlation between total phenolic contents and ABTS scavenging capacities in Desmodium species. Additionally, the ABTS scavenging capacities of Desmodium species demonstrated a good linear relationship with the FRAP assay of Desmodium species [39,40]. In line with these previous studies, our results also confirmed a high correlation between the total phenolic contents and radical scavenging capacity against DPPH radical, ABTS radical, or superoxide in Desmodium species. Moreover, the contents of total phenylpropanoid in the Desmodium species also exhibited a noteworthy correlation with the radical scavenging capacity against DPPH radical, ABTS radical, or superoxide in the Desmodium species. Furthermore, the radical scavenging capacity against DPPH radical, ABTS radical, or superoxide in Desmodium species is closely associated with their reducing ability. Considering that the primary reaction mechanisms of the aforementioned techniques involve SET reactions, the radical scavenging capacity of Desmodium species is expected to be linked to their ability in the SET reactions of redox processes. As per Perez's argumentation regarding the FCP assay [19], phenolic acids, especially phenylpropanoids (hydroxycinnamic acid derivatives), and flavonols exhibit higher reducing properties. In this investigation, we also observed a significant correlation between the total phenolic content of the Desmodium species obtained through the FCP method and the phenylpropanoid content of these species
obtained using Arnow's method.

Recent research reports on the phytochemical compositions of various Desmodium species have indicated that D. pulchellum is rich in flavonoids and phenylpropanoids, particularly (−)-catechin, (−)-epicatechin, (−)-gallocatechin, and (−)-epigallocatechin [15]. D. sequax has been found to contain phenolic acids (e.g., chlorogenic acid) and flavonoid glycosides (e.g., vitexin) [40]. D. triflorum also possesses vitexin derivatives [38]. Additionally, D. gangeticum and D. triquetrum have been identified as containing flavonoids including the derivatives of quercetin and kaempferol [14,41]. Catechin polyphenols exhibit the highest activity among the above compounds in terms of their radical scavenging abilities against DPPH and ABTS radicals, as indicated in the comparative analysis and previous reports [42,43]. As a result, we propose that D. pulchellum exhibits superior radical scavenging capacity and reducing power primarily due to its high contents of catechin polyphenols.

Given the pivotal role of MAO-B as a vital enzyme in dopamine metabolism and oxidative damage in the central dopaminergic system, MAO-B inhibitors stand out as a promising pharmaceutical avenue for treating PD, in contrast to MAO-A inhibitors. Consequently, this investigation also assessed the MAO-A and MAO-B inhibition activities of the Desmodium species for comparison purposes. The findings revealed that among the seven Desmodium species, D. motorium displayed the highest inhibition activity against MAO-A, while D. sequax exhibited the most significant inhibition activity against MAO-B. When comparing the IC50 of the Desmodium species against both MAO-A and MAO-B, D. sequax emerged as the most selective inhibitor of MAO-B. Concerning the inhibition activity of D. pulchellum against MAO-A and MAO-B, the results were similar to those of Cai [16], indicating a more specific inhibition of MAO-A. Additionally, there was no discernible correlation between the inhibition activity of the Desmodium species against MAO-A and MAO-B and their antioxidant components (total phenols and phenylpropanoids). In a prior study, Cai identified alkaloid components such as 5-methoxy-N,N-dimethyltryptamine and N,N-dimethyltryptamine as the primary phytoconstituents responsible for the MAO-A inhibition activity of D. pulchellum [16]. In summary, this study marks the initial identification of D. sequax from the Desmodium species as exhibiting the most selective inhibition of MAO-B, suggesting its potential for further development, while underscoring the need for additional research into its active components.

Based on the findings presented above, we continued our investigation into the neuroprotective properties of two plants (D. pulchellum and D. motorium) known for their higher antioxidant phytoconstituent contents and one plant (D. sequax) recognized for its selective inhibition of MAO-B activity. Adhering to the mitochondrial oxidative stress theory of PD, we conducted experiments utilizing the PD models in the SH-SY5Y neuroblastoma cells through the administration of the dopamine neurotoxin 6-OHDA or the mitochondrial complex-1 inhibitor rotenone, both of which are known to induce neuronal cell damage. Our results demonstrated that all three Desmodium plants exhibit concentration-dependent neuroprotective effects against 6-OHDA- or rotenone-induced neuronal damage, with D.
pulchellum showing the most promising efficacy. Given that the neuronal damage induced by 6-OHDA primarily arises from the intermediate product p-quinone and the subsequent ROS generated during auto-oxidation [31], our study revealed that all three Desmodium plants also effectively inhibit the 6-OHDA-induced auto-oxidation in a concentration-dependent manner, with D. pulchellum exhibiting the most significant impact. Notably, the positive control drug NAC demonstrated neuroprotective effects against 6-OHDA- or rotenone-induced neuronal cell damage, consistent with findings from previous studies [33], and similarly inhibited 6-OHDA auto-oxidation. Furthermore, this study delved into the neuroprotective mechanism of D. pulchellum against 6-OHDA-induced neuronal damage based on the mitochondrial oxidative stress neuropathological theory. In line with the existing literature [33], 6-OHDA triggered an increase in the intracellular ROS levels, leading to a reduction in the intracellular non-enzymatic antioxidant GSH levels, the diminished activity of antioxidant enzymes (SOD, catalase, GPx, and GR), and an elevation in the concentration of the lipid oxidation end-product MDA, ultimately culminating in neuronal damage characterized by reduced cell viability. D. pulchellum, however, demonstrated the ability to restore the balance in intracellular ROS and antioxidant defense systems disrupted by the 6-OHDA treatment, thereby mitigating the occurrence of neuronal damage. Numerous previous reports have indicated that catechin polyphenols possess protective effects against neuronal damage and apoptosis provoked by 6-OHDA, rotenone, or MPTP [44,45]. This protective mechanism is primarily attributed to their robust ability to scavenge radicals and trigger intracellular antioxidant enzymes, thereby engaging various intracellular signaling pathways. Consolidating our current discoveries with those of fellow researchers [15,16], D. pulchellum emerges as a standout species among the seven commonly utilized Taiwanese Desmodium species, showcasing notable proficiency in scavenging radicals and exerting neuroprotective effects. Its principal constituents are presumed to be catechin polyphenols, including (−)-catechin, (−)-epicatechin, (−)-gallocatechin, and (−)-epigallocatechin [15]. Nevertheless, it presented slight selectivity towards MAO-A, with its active phytoconstituents possibly encompassing alkaloids such as 5-methoxy-N,N-dimethyltryptamine and N,N-dimethyltryptamine [16]. Our forthcoming research on D. pulchellum will concentrate on devising a standardized preparation method for its neuroprotective catechin polyphenols, quantifying its primary active ingredients, assessing its neuroprotective efficacy, and delving deeper into its molecular mechanisms. D. sequax also exhibited notable radical scavenging ability and neuroprotective effects; moreover, it displayed a significant and selective ability to inhibit MAO-B. These effects are similar to the action of known MAO-B inhibitors such as selegiline and rasagiline [1,3]. Tsai's report has suggested that D. sequax contains chlorogenic acid and vitexin [40]. Chlorogenic acid, but not vitexin, exhibits inhibitory effects on MAO-B [44,46], indicating that the active ingredients might comprise compounds more akin to chlorogenic acid. However, further phytochemical investigation is needed to isolate and confirm the activities of these ingredients.

Plant Collection and Filtrate Preparation

Seven Desmodium species including D. caudatum, D. gangeticum, D. motorium, D.
pulchellum, D. sequax, D. triflorum, and D. triquetrum were collected from low-altitude regions in Taiwan and identified by Prof. Hung-Chi Chang from Chaoyang University of Technology. Entire plants of these seven Desmodium species were dried and ground into a coarse powder. Subsequently, each powder (1 g) was extracted with 10 mL of methanol and sonicated for 90 min, then filtered using a 0.22 µm filter, and adjusted to a total volume of 10 mL to obtain a methanolic standardized filtrate with a concentration of 100 mg/mL. This filtrate was further diluted with distilled water to evaluate the contents of antioxidant phytoconstituents and radical scavenging activities. For the 6-OHDA auto-oxidation and neurotoxicity experiments, the methanolic standardized filtrate of the Desmodium plants was dried with nitrogen, then dissolved again with phosphate-buffered solution (PBS) or Dulbecco's modified Eagle's medium (DMEM).

Total Phenolic Contents

The determination of total phenolic contents in the Desmodium plants was conducted using a microtiter spectrophotometric reader (PowerWave X340, Bio-Tek Inc., Winooski, VT, USA) following the procedure outlined in our previous report [24]. The methanolic standardized filtrate (5 mg/mL) or a reference standard (+)-catechin (0-250 µg/mL) was combined with 50% (v/v) FCP reagent and 10% sodium carbonate solution in triplicate. This reaction mixture was shaken for 20 s, then incubated at room temperature (RT) for 30 min. Absorbance was measured at 725 nm. The total phenolic contents in the Desmodium plants are expressed as the relative amounts of (+)-catechin per gram of the Desmodium plants (mg catechin/g sample).

Total Flavonoid Contents

The total flavonoid contents in the Desmodium plants were determined according to our previously reported method [24]. The methanolic standardized filtrate (10 mg/mL) or a reference standard quercetin (0-100 µg/mL) was mixed with 2% (w/v) aluminum nitrate and 0.2 M potassium acetate in triplicate. This reaction solution was shaken for 20 s, then incubated at RT for 15 min. Absorbance was recorded at 415 nm. The total flavonoid contents in the Desmodium plants are expressed as the relative amounts of quercetin per gram of the Desmodium plants (mg quercetin/g sample).

Total Phenylpropanoid Contents

The assessment of total phenylpropanoid contents in the Desmodium plants followed our previously reported method [24]. The methanolic standardized filtrate (1 mg/mL) or a reference standard verbascoside (0-100 µg/mL) was mixed with 10% (w/v) Arnow reagent and 2 N sodium hydroxide in triplicate. This reaction solution was shaken for 30 s, then incubated at RT for 10 min. Absorbance was recorded at 525 nm. The total phenylpropanoid contents in the Desmodium plants are expressed as the relative amounts of verbascoside per gram of the Desmodium plants (mg verbascoside/g sample).
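A minimal sketch of the standard-curve arithmetic shared by the three phytoconstituent assays above; the absorbance readings are hypothetical placeholders, and reagent dilution within the reaction well is ignored for simplicity:

# Minimal sketch (hypothetical readings): convert sample absorbance into
# reference-standard equivalents (e.g., mg catechin/g sample) via a linear
# calibration curve, as done in the total phenolic/flavonoid/phenylpropanoid
# assays.
import numpy as np

# Hypothetical (+)-catechin standard curve: concentration (ug/mL) vs A725.
std_conc = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
std_abs = np.array([0.02, 0.18, 0.35, 0.52, 0.68, 0.85])

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # least-squares line

def catechin_equiv(sample_abs, filtrate_mg_per_ml=5.0):
    """mg catechin per g of dried plant, for a filtrate assayed at 5 mg/mL."""
    conc_ug_ml = (sample_abs - intercept) / slope  # ug catechin/mL in well
    # ug catechin per mg plant material is numerically equal to mg/g sample.
    return conc_ug_ml / filtrate_mg_per_ml

print(f"{catechin_equiv(0.41):.1f} mg catechin/g sample")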
Radical Scavenging Capacities

4.3.1. DPPH Radical Scavenging Capacities

According to our previously reported method [24], the DPPH radical scavenging capacities of the Desmodium plants were assessed. The methanolic standardized filtrate (0-5 mg/mL) or a reference standard (+)-catechin (0-25 µg/mL) was pipetted into each well of a 96-well plate and mixed with 300 µM DPPH methanol solution or methanol in triplicate. This reaction solution was shaken for 15 s, then incubated at RT for 30 min in the dark. Absorbance was recorded at 517 nm. The DPPH radical scavenging capacity of the Desmodium plants is expressed as reference standard (+)-catechin equivalents in milligrams per gram of the Desmodium plants (abbreviated as CEDSC, mg catechin/g sample).

ABTS Radical Scavenging Capacities

The ABTS radical scavenging capacities of the Desmodium plants were also assessed following the procedure outlined in our previous report [24]. The methanolic standardized filtrate (0-5 mg/mL) or a reference standard Trolox (0-50 µM) was dispensed into each well of a 96-well plate and mixed with a diluted ABTS solution (8 mM ABTS solution and 8.4 mM potassium persulfate solution mixed in a ratio of 2:1, then diluted 50 times with ethanol the next day) or ethanol in triplicate. This reaction solution was shaken for 15 s, then incubated at RT for 4 min in the dark. Absorbance was recorded at 734 nm. The ABTS radical scavenging capacity of the Desmodium plants is expressed as reference standard Trolox equivalents in millimoles per gram of the Desmodium plants (abbreviated as TEAC, mmol Trolox/g sample).

RRP Assay

The RRP assay of the Desmodium plants followed the procedure outlined in Vasyliev's report [47]. The methanolic standardized filtrate (0-2.5 mg/mL) or a reference standard ascorbic acid (0-20 µM) was dispensed into each well of a 96-well plate and mixed with 1% potassium ferricyanide (dissolved in 50 mM HCl), 20 mM ferric chloride, and 50 mM sodium phosphate-buffered solution (pH 6.6) in triplicate. This reaction solution was then incubated at 50 °C for 20 min and subsequently treated with 1% trichloroacetic acid and 5 mM ferric chloride. Absorbance was recorded at 700 nm. The RRP value of the Desmodium plants is expressed as reference standard ascorbic acid equivalents in micromoles per gram of the Desmodium plants (relative reducing power, abbreviated as RRP, µmol ascorbate/g sample).

Superoxide Scavenging Capacities

The superoxide scavenging capacities of the Desmodium plants were evaluated following the methodology outlined in our previous study [31]. In brief, the methanolic standardized filtrate (0-10 mg/mL) or a reference standard SOD (0-500 mU/mL) was added into each well of a 96-well plate, then mixed with a solution containing 800 µM xanthine, 240 mU/mL xanthine oxidase, and 168 µM nitroblue tetrazolium (dissolved in 50 mM PBS; pH 7.4) in triplicate. After 15 s of agitation, the kinetic slope of the absorbance values was measured over a period of 5 min at 560 nm at RT. The superoxide scavenging capacity of the Desmodium plants is expressed as reference standard SOD equivalents in units per gram of the Desmodium plants (SOD equivalent, U SOD/g sample).
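To illustrate how a kinetic-slope readout is converted into SOD equivalents, here is a minimal sketch; the absorbance traces and the SOD standard-curve values are hypothetical placeholders, since the actual calibration data are not reported here:

# Minimal sketch (hypothetical readings): the superoxide assay compares the
# kinetic slope of NBT reduction (A560 over 5 min) with and without sample;
# a slower slope means more superoxide was scavenged. The percentage is then
# interpolated on an SOD standard curve (0-500 mU/mL) to give SOD equivalents.
import numpy as np

time_min = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
a560_blank = np.array([0.10, 0.19, 0.28, 0.37, 0.46, 0.55])   # no scavenger
a560_sample = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.35])  # with filtrate

pct = 100.0 * (1.0 - np.polyfit(time_min, a560_sample, 1)[0]
               / np.polyfit(time_min, a560_blank, 1)[0])

# Hypothetical SOD standard curve: mU/mL vs % inhibition of NBT reduction.
sod_mU = np.array([0.0, 50.0, 125.0, 250.0, 500.0])
sod_pct = np.array([0.0, 12.0, 27.0, 48.0, 72.0])
sod_equiv = np.interp(pct, sod_pct, sod_mU)  # mU/mL matched to the sample
print(f"{pct:.1f}% inhibition ~ {sod_equiv:.0f} mU SOD/mL")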
Hydrogen Peroxide Scavenging Capacities

The hydrogen peroxide scavenging capacities of the Desmodium plants were also assessed following the procedure outlined in our previous study [31]. Briefly, the methanolic standardized filtrate (0-5 mg/mL) or a reference standard Trolox (0-100 µM) was dispensed into each well of a 96-well plate, then mixed with 500 µM H2O2, 8 U/mL horseradish peroxidase, and 5 mM homovanillic acid (dissolved in 50 mM PBS (pH 7.4)) in triplicate. This reaction solution was shaken for 15 s, then incubated at RT for 25 min in the dark. Fluorescence intensity was measured using a fluorescence spectrometric reader (FLX800, Bio-Tek Inc., Winooski, VT, USA) at an excitation wavelength of 315 nm and an emission wavelength of 425 nm. The hydrogen peroxide scavenging capacity of the Desmodium plants is expressed as reference standard Trolox equivalents in millimoles per gram of the Desmodium plants (Trolox equivalent, mmol Trolox/g sample).

Hydroxyl Radical Scavenging Capacities

The hydroxyl radical scavenging capacities of the Desmodium plants were assessed with modifications based on the procedure outlined in Cheng's report [48]. The methanolic standardized filtrate (0-5 mg/mL) or a reference standard Trolox (0-500 µM) was dispensed into each well of a 96-well plate, then mixed with 20 mM H2O2, 3 mM ferrous sulfate (dissolved in 1 mM EDTA solution), and 2 mM luminol salt solution in triplicate. After 15 s of agitation, the reaction mixture was incubated at RT for 1 min in the dark. The intensity of chemiluminescence was recorded. The hydroxyl radical scavenging capacity of the Desmodium plants is expressed as reference standard Trolox equivalents in millimoles per gram of the Desmodium plants (Trolox equivalent, mmol Trolox/g sample).
MAO Inhibitory Activities

The MAO inhibitory activities of the Desmodium plants were assessed based on the procedure outlined in Haraguchi's report [49] with modifications. Initially, brain mitochondrial homogenates were prepared by homogenizing rat cerebral hemispheres in ice-cold 50 mM PBS solution (containing 250 mM sucrose and 0.5 mM EDTA, pH 7.4). This homogenization process was conducted using a Digital Homogenizer, followed by centrifugation with a sucrose concentration gradient in two different solutions (30 and 60 mM sucrose containing 3-6% Ficoll, 120-240 mM mannitol, and 25-50 µM EDTA sequentially, pH 7.4) at 14,000 rpm for 30 min. The resulting mitochondrial homogenate was then suspended in 50 mM PBS solution (pH 7.4). Subsequently, these mitochondrial homogenate suspensions were preincubated with either clorgyline (2 µM) or pargyline (2 µM) to assess MAO-B or MAO-A activity, respectively. Following preincubation, the enzyme solution was transferred into each well of a 96-well plate and mixed with the methanolic standardized filtrate (0-25 mg/mL) and 2.5 mM kynuramine solution in triplicate. This reaction solution was shaken for 15 s, then incubated at 37 °C for 20 min. The reaction was terminated by adding 10% ZnSO4 and 2 N NaOH. The fluorescence of the final product 4-hydroxyquinoline was measured at an excitation wavelength of 315 nm and an emission wavelength of 380 nm. For reference, kynuramine was replaced with 4-hydroxyquinoline as a standard, and a calibration curve of 4-hydroxyquinoline (0-12.5 µM) was established to quantify the amount produced in the reaction described above. The inhibitory percentage of 4-hydroxyquinoline formation caused by the Desmodium plants was calculated to determine the IC50 values of the Desmodium plants against MAO-A or MAO-B.

6-OHDA Auto-Oxidation Inhibition

The 6-OHDA auto-oxidation inhibitory assay of the Desmodium plants was conducted according to the procedure outlined in our previous report [31]. This assay was performed in a cell-free system designed to mimic the conditions corresponding to 6-OHDA treatment in cell experiments. In brief, the PBS-redissolved filtrate (0-500 µg/mL) or a reference standard NAC (1 or 2 mM) was pipetted into each well of a 96-well plate and mixed with 25 µM 6-OHDA (dissolved in 10 mM PBS solution) in triplicate. This reaction solution was shaken for 15 s at 37 °C in the dark. Subsequently, the absorbance at 490 nm, which reflects the levels of the formed intermediate product p-quinone, was monitored for 3 min at 30 s intervals at 37 °C.
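A minimal sketch of how the kinetic readout of this auto-oxidation assay could be reduced to a percent-inhibition figure; the rate-ratio calculation and all absorbance readings below are assumptions for illustration, not the authors' exact data reduction:

# Minimal sketch (hypothetical readings): percent inhibition of 6-OHDA
# auto-oxidation from A490 kinetics (p-quinone formation) monitored every
# 30 s for 3 min, comparing 6-OHDA alone with 6-OHDA plus test filtrate.
import numpy as np

t = np.arange(0.0, 3.5, 0.5)  # minutes, at 30 s intervals
a490_6ohda = np.array([0.05, 0.12, 0.19, 0.26, 0.33, 0.40, 0.47])  # alone
a490_plus = np.array([0.05, 0.08, 0.11, 0.14, 0.17, 0.20, 0.23])   # + filtrate

rate_ctrl = np.polyfit(t, a490_6ohda, 1)[0]  # p-quinone formation rate
rate_test = np.polyfit(t, a490_plus, 1)[0]

print(f"Auto-oxidation inhibited: {100 * (1 - rate_test / rate_ctrl):.1f}%")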
4.6. Protection against 6-OHDA-Induced Neurotoxicity in SH-SY5Y Cells

4.6.1. Cell Culture

Human neuroblastoma SH-SY5Y cells were procured from the American Type Culture Collection (ATCC; Manassas, VA, USA). Following the guidelines provided by the ATCC, the cells were thawed and cultured in a 25 cm² culture flask using the DMEM medium, supplemented with 10% FBS, penicillin (100 U/mL), and streptomycin (100 µg/mL). The cells were incubated in a water-saturated atmosphere incubator with 5% CO2, and the culture medium was replenished every 2-3 days once they reached 70-80% confluency. Upon achieving confluency, the cells were washed with a fresh 0.25% trypsin solution (containing 0.53 mM EDTA), then seeded into 96-well sterile clear-bottom plates (2 × 10⁴ cells/well), 6-well sterile clear-bottom plates (8 × 10⁵ cells/well), or 90 mm sterile clear-bottom dishes (4 × 10⁶ cells/dish). The cell experiments were conducted 24 h after seeding.

Estimation of Cell Viability

Following exposure to 6-OHDA (25 µM) or rotenone (1 µM) for 24 h [33,50], the viability of the cells was evaluated using the MTT assay in accordance with our previous protocols [31]. To assess the protective effects of the methanolic extracts from three Desmodium plant species, namely D. pulchellum, D. sequax, and D. motorium, on the SH-SY5Y cells, a 15 min treatment was administered prior to the addition of 6-OHDA or rotenone. NAC was utilized as a positive control, which was also pretreated before the addition of 6-OHDA or rotenone [33]. The methanolic standardized filtrate from the three Desmodium plants was dried with nitrogen and subsequently reconstituted with DMEM. The solution was then filtered using a 0.22 µm sterile filter to obtain the stock solution of the methanolic extracts. The working solution (25-500 µg/mL) was freshly prepared for experimentation.

Estimation of Intracellular ROS Levels

The intracellular ROS levels were assessed using the ROS-sensitive cell-permeant fluorophore DCFH-DA (Sigma-Aldrich Chemical Co., St. Louis, MO, USA) following our established protocol [31]. The SH-SY5Y cells were seeded into clear-bottom black 96-well culture plates and subjected to treatment with 6-OHDA (with or without pretreatment with the methanolic extract of Desmodium pulchellum or NAC) for 24 h. Subsequently, DCFH-DA (100 µM) was added to the cells and incubated for 30 min at 37 °C in the dark. The cells were then washed with PBS and placed in DMEM without phenol red. The fluorescence intensity was measured at Ex 485/Em 530 nm using a fluorescent microplate reader. The data are expressed as a percentage relative to untreated cells, which served as the control group (designated as 100%).

Estimation of Intracellular Antioxidant Markers

The activities of intracellular antioxidant markers, such as SOD and catalase, as well as the levels of GSH and MDA, were assessed following the protocol described in our previous publication [51]. The activities of GPx and GR were determined using commercially available enzyme-linked immunosorbent assay (ELISA) kits (Cayman Chemical Company, Ann Arbor, MI, USA), according to the manufacturer's instructions. The SH-SY5Y cells were seeded into 90 mm sterile clear-bottom dishes and treated with 6-OHDA (with or without pretreatment with the methanolic extract of Desmodium pulchellum or NAC) for 24 h. Subsequently, the cells were collected, sonicated, and centrifuged for 15 min at 4 °C. The resulting cell supernatant was utilized to assay the aforementioned antioxidant markers. The activities of SOD are expressed as U per milligram of protein, while the activities of GR are expressed as mU per milligram of protein. The activities of GPx are expressed as µmol NADPH oxidation/min/milligram of protein. The catalase activities are expressed as µmol H2O2 degradation/min/milligram of protein. The levels of GSH and MDA are expressed as nmol per milligram of protein.

Statistical Analysis

The data of the phytoconstituent contents, radical scavenging capacities, MAO inhibitory activities, and 6-OHDA auto-oxidation inhibitory activities obtained from three repeated experiments are expressed as mean ± standard deviation (SD). Similarly, the data of the cell viability, intracellular ROS levels, and oxidative markers, obtained from four repeated experiments, are also presented as mean ± SD. Statistical analysis was performed using one-way analysis of variance (ANOVA), followed by Tukey's test, using the statistical software SPSS 20.0 for Windows. Probability values less than 0.05 were considered statistically significant.
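For illustration, the same ANOVA-plus-Tukey analysis can be reproduced with open-source tools; the viability values below are hypothetical, and scipy/statsmodels stand in for the SPSS procedures named above:

# Minimal sketch (hypothetical viability data, % of control): one-way ANOVA
# followed by Tukey's post hoc test across treatment groups.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = [100.1, 98.7, 101.2, 99.9]
ohda = [58.4, 55.9, 61.0, 57.3]         # 6-OHDA alone
ohda_dp = [82.5, 79.8, 85.1, 81.2]      # 6-OHDA + D. pulchellum 500 ug/mL

f_stat, p_val = stats.f_oneway(control, ohda, ohda_dp)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_val:.4f}")

values = np.array(control + ohda + ohda_dp)
groups = ["control"] * 4 + ["6-OHDA"] * 4 + ["6-OHDA+Dp"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))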
Conclusions

This study compared the radical scavenging activity, reducing power, and MAO inhibitory activity of seven Taiwanese Desmodium species. Among them, D. pulchellum, D. sequax, and D. motorium showed the highest radical scavenging activity and reducing power, effectively neutralizing superoxide anions. Notably, D. sequax demonstrated the most selective inhibitory effect on MAO-B activity. Their radical scavenging activity was primarily correlated with their total phenolic contents, particularly the phenylpropanoids, while the MAO inhibitory activity was not reliant on these compounds. Additionally, this study unveiled that the aforementioned three Desmodium species, especially D. pulchellum, exhibit neuroprotective effects against 6-OHDA-induced neurotoxicity. D. pulchellum was observed to counteract oxidative damage and restore the intracellular antioxidant defense system impaired by 6-OHDA. Consequently, we suggest that D. pulchellum, among the assessed Desmodium plants, has the potential to be developed into a therapeutic drug for Parkinson's disease, which is likely due to its abundance of total phenolic compounds.

Table 1. Pearson correlation coefficients (r) between the contents of antioxidant phytoconstituents and radical scavenging capacities of the Desmodium plants.

Table 2. The MAO inhibitory activities of the methanolic filtrates of the seven Desmodium plants.
2024-06-26T15:11:26.036Z
2024-06-24T00:00:00.000
{ "year": 2024, "sha1": "535c8b925b5d067405a455206e5aed1b93a719cb", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2223-7747/13/13/1742/pdf?version=1719220847", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "827a6b051356acc751010f6d7112d658fc7105c9", "s2fieldsofstudy": [ "Environmental Science", "Medicine", "Biology" ], "extfieldsofstudy": [] }
52932934
pes2o/s2orc
v3-fos-license
Intravascular papillary endothelial hyperplasia along the intrinsic muscle after hand contusion injury: Report of two cases

Intravascular papillary endothelial hyperplasia (IPEH), also known as Masson's tumor or vegetant intravascular hemangioendothelioma, is a reactive condition representing an exuberant organization and recanalization of a thrombus. It can occur in normal blood vessels or in vascular malformations, perhaps in response to blood vessel injury or thrombosis. In this report, we present the diagnostic and therapeutic courses of a 55-year-old woman and an 18-year-old man, who had a progressive protruding hand mass following a hand contusion. The pathological examination confirmed the diagnosis of IPEH in both patients.

Introduction

Intravascular papillary endothelial hyperplasia (IPEH), also known as Masson's tumor or vegetant intravascular hemangioendothelioma, is a reactive condition representing an exuberant organization and recanalization of a thrombus. 1 It can occur in normal blood vessels or in vascular malformations, perhaps in response to blood vessel injury or thrombosis. However, the exact pathogenesis of IPEH is still unknown. IPEH was first described by the French pathologist Masson as a neoplastic lesion termed vegetant intravascular hemangioendothelioma. 2 In 1976, Clearkin and Enzinger proposed the term IPEH, which is currently known as a non-neoplastic reactive endothelial proliferation. 3 IPEH tumors are characterized by multiple endothelial-lined small papillary structures with hyaline stalks. They are identified as partially involving an organizing thrombus within a vein and have been detected in pure and mixed forms. The pure form typically occurs within dilated vascular spaces, and the mixed form develops in a preexisting vascular lesion (i.e., as a hemangioma). 1,4

Case reports

Case 1

A 55-year-old female patient suffered from a progressive protruding mass lesion on the left palm that had persisted for several months. She claimed to have had a left-hand contusion with ecchymosis 2 years previously. On physical examination, the mass was soft, indolent, and located at the left palmar aspect between the 3rd and 4th metacarpals. All laboratory investigations were within normal limits. A lobulated and heterogeneous lesion was observed between the 3rd and 4th flexor digitorum tendons in magnetic resonance images (Fig. 1A). On excision, the soft tumor measured 3.3 × 1 × 0.6 cm in size (Fig. 2A) and was confirmed to be located between the 3rd and 4th flexor digitorum tendons and along the lumbrical muscle (Fig. 3). Histopathology confirmed IPEH, including a dilated vascular wall with focal hemorrhage, and a small percentage of fibrinous material, granulation tissue, focal papillary endothelial hyperplasia, and recanalization of the soft tissue (Fig. 4).

Case 2

An 18-year-old male patient suffered from a progressive protruding mass lesion on the ulnar side of the left hand with rapid enlargement (Fig. 5) over the preceding 2 months after a hand contusion injury caused by a fall. There was a heterogeneously lobulated lesion in the subcutaneous layer of the ulnar side of the 5th metacarpal shaft along the hypothenar muscle in magnetic resonance images (Fig. 1B). On excision, the soft tumor measured 2.2 × 0.8 × 0.6 cm in size (Fig. 2B) and was confirmed to be located along the hypothenar muscle. The final histopathology of tumor tissue sections was compatible with a diagnosis of IPEH.
Discussion

The exact pathogenesis of IPEH is unknown, but an unusual organization of the thrombus following trauma is considered to play a role. 5,6 In a pathologic series of 91 cases, patient ages varied from 9 months to 78 years. 4 Moreover, younger patients were more likely to present with the mixed form of IPEH tumors. 4 In addition, IPEH most often arises in female patients, and a history of trauma only occurs in about 4% of all patients. 4 Because its clinical signs and symptoms are nonspecific and variable, it is a diagnostic challenge. The patient's history of any trauma, a physical examination, ultrasound and computed tomography can help to distinguish IPEH from other kinds of vascular hand lesions. While the exact pathogenesis of IPEH is still unclear, an unusual form of thrombus organization following a trauma is considered to play a role. The release of beta-fibroblast growth factor (b-FGF) from macrophages attracted to the site can trigger endothelial cell proliferation, which, in turn, induces the release of more b-FGF, leading to a vicious cycle. 7 According to our analysis of the literature, IPEH usually presents as a palpable mass that most commonly occurs in the fingers, head, and neck. 1,4,7-9 Moreover, an IPEH lesion over the hand along the intrinsic muscle, including the lumbrical muscle or hypothenar muscle, is relatively rare. Therefore, distinguishing between reactive and neoplastic intravascular lesions of the hand is important. We hope that the detailed images presented for the two cases will enable the visual differentiation of an IPEH from other common hand tumors.
2018-10-22T06:13:30.805Z
2018-10-03T00:00:00.000
{ "year": 2018, "sha1": "57d24152d9e829ca454f334acf05ea793dd402f9", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.aott.2018.07.003", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "57d24152d9e829ca454f334acf05ea793dd402f9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
208535931
pes2o/s2orc
v3-fos-license
Priming primary care providers to engage in evidence-based discussions about cannabis with patients

Cannabis use has become increasingly common in the U.S. in recent years, with legalization for medical and recreational purposes expanding to more states. With this increase in use and access, providers should be prepared to have more conversations with patients about use. This review provides an overview of cannabis terminology, pharmacology, benefits, harms, and risk mitigation strategies to help providers engage in these discussions with their patients. Current evidence for the medical use of cannabis, cannabis-related diagnoses including cannabis use disorder (CUD) and withdrawal syndromes, and the co-use of opioids and cannabis are discussed. It is crucial that providers have the tools and information they need to deliver consistent, evidence-based assessment, treatment, prevention and harm-reduction, and we offer practical guidance in these areas.

Introduction

Cannabis use has become more accepted by society over the last two decades, and legalization is now supported by a majority of Americans [1]. Currently 33 states and the District of Columbia (DC) have legalized cannabis for certain medical indications, and in 10 of these states plus DC, it is legal for recreational use [2]. The prevalence of cannabis use has increased in recent years, with past-month use in the United States (U.S.) increasing from 5.8 to 8.4% between 2007 and 2014 [3]. Over 90% of individuals who use cannabis report recreational use, though many patients do not draw a clear distinction between medical and recreational use [4,5]. The prevalence of frequent cannabis use has also increased over time; from 2002 to 2014, the prevalence of past-year daily cannabis use in the U.S. increased from 1.3 to 2.5% [6]. Increasingly, providers are likely to encounter patients who use cannabis, or who have questions about cannabis use. Thus, in this review article, we aim to provide information about terminology, pharmacology, benefits, harms, and risk mitigation strategies to help providers frame these discussions.

Terminology, pharmacology, and formulations

The two main classes of compounds in the cannabis plant flower are cannabinoids and terpenes. There are over 100 cannabinoids, but the best studied are delta-9-tetrahydrocannabinol (THC) and cannabidiol (CBD). The effects of cannabis on the body and mind are mediated through the endocannabinoid system, which includes receptors (CB1 and 2) found throughout the body [7]. CB1 receptors are highly concentrated in the brain, though not in the lower brainstem, which may explain why cannabis has not been associated with respiratory depression [8,9]. Most of the psychogenic effect-or the "high"-of cannabis is mediated through the interaction of THC with the CB1 receptor. On the other hand, CBD does not cause potent psychogenic effects due to its low binding affinity for CB1; it also has low binding affinity for CB2. Terpenes have been largely unstudied, but contribute the aroma and flavor of cannabis and are thought to potentially influence the psychogenic effects of the cannabinoids [7]. The term cannabis is typically used when referring to the whole plant or extracts of the whole plant, while the term cannabinoids is used to refer to plant-extracted or synthetic products that are isolates of THC, CBD, or a combination of the two. Cannabis is available and used in many forms.
The flower of the plant is the most commonly used form, and there are also numerous plant extract formulations including edibles, resins, oils, tinctures, oromucosal sprays, and topical applications (see Table 1). Two synthetic cannabinoids-dronabinol and nabilone-are FDA-approved for use in chemotherapy-induced nausea and vomiting, and AIDS-related cachexia, and one recently FDA-approved drug, Epidiolex®, used for rare forms of seizure disorders, contains CBD extracted from cannabis plants [10]. There are two main subspecies of the Cannabis sativa plant-Sativa and Indica-though they are largely now indistinguishable because of extensive cross breeding over time [11]. These terms are used colloquially to characterize the expected effects of a given product: Sativa products are purported to have energizing, uplifting, and creative effects (a "mind high"), while Indica strains tend to be sedating, and relaxing physically and mentally (a "body high"). While these terms are commonly used, they are not scientifically grounded. The degree to which a product will have energizing, intoxicating, or relaxing effects is most likely determined by the relative amounts of THC and CBD in the product [3]. There are three main routes of administration: inhalation, ingestion, and topical application (see Table 2 for a summary of the pharmacokinetics and side effects for each) [7]. Smoking and vaping produce rapid peaks in central nervous system THC concentration, and the duration of peak effect is often limited to 30-60 min. "Dabbing," another method of inhalation, refers to the flash vaporizing of hash oil that is extracted with chemicals like butane and concentrated into a wax or resin that is smoked; it produces very high concentrations of THC and an intense intoxicating effect [12]. On the other hand, with the ingestion of edibles (e.g. brownies, cookies, candies, etc.) the peak effect is delayed, often for 2-3 h or more, and the duration of effect is longer. Furthermore, the oral metabolite of THC, 11-hydroxy-THC, is several-fold more potent in its psychogenic effect than THC. Taken together, these characteristics of cannabis edibles explain their role in numerous case reports of acute cannabis-induced psychosis requiring hospitalization after cannabis-naïve patients took repeat doses of cannabis edibles because they did not feel an initial effect [13,14]. Topical applications of cannabis are also widely available but have not been well studied. While they are thought to be associated with minimal systemic absorption and to have lower potential for central nervous system effects, there is little empiric data documenting the potential benefits and harms of topical preparations. The potency of cannabis is typically defined as the amount of THC in the product. Given the innumerable formulations and strengths of cannabis available today, it is challenging to define the average potency of cannabis patients will encounter. Moreover, the THC concentration of cannabis has changed over time. For example, the THC concentration of black-market cannabis increased threefold, from 4 to 12%, over the last two decades. Several states have defined a "dose" of THC as 5-10 mg [3]. The average cannabis cigarette (i.e., joint) weighs about 0.32 to 0.66 g [15,16]. High-potency products in dispensaries can contain 15-20% THC, so the amount of THC in a high-potency product might be anywhere from 48 to 132 mg.
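A worked version of this back-of-the-envelope arithmetic (the weights and potencies are taken from the figures quoted above; the "dose" conversion assumes the 5-10 mg state definitions):

# Minimal sketch: joint weight (g) x potency (fraction THC) x 1000 = mg THC,
# then compared against a 5-10 mg "dose" as defined by several states.
def thc_mg(joint_weight_g: float, potency: float) -> float:
    """Milligrams of THC in a cannabis cigarette of given weight and potency."""
    return joint_weight_g * potency * 1000.0

for weight, potency in [(0.32, 0.15), (0.66, 0.20)]:
    mg = thc_mg(weight, potency)
    print(f"{weight} g at {potency:.0%} THC = {mg:.0f} mg "
          f"(~{mg / 10:.0f}-{mg / 5:.0f} state-defined doses)")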
In a recent study of dispensaries in several states, the median amount of THC in a 1.5-g edible product was 54 mg, and the median THC:CBD ratio was 36:1 [17]. Of note, cannabis product labels are often inaccurate. One multi-state study examining the THC content of edible products sold in cannabis dispensaries found that only 17% of product labels were accurate when compared to lab-measured content [17]. CBD extracts are available over the counter in many locations, as well as online. A study of 84 CBD products purchased online found that only 31% were labeled accurately [18]. In 21% of the CBD products, THC was detected, often in concentrations high enough to produce intoxication, especially in children [18], which underscores the need to store all cannabis products, including CBD products, out of the reach of children.

Cannabis for medical use
Medical cannabis laws and the implementation of programs vary greatly by state, but in many states with medical cannabis laws, physicians can "certify" that a patient has a condition for which cannabis has been indicated, and state-administered programs grant certified patients medical cannabis "cards" [2]. The medical cannabis card then allows patients to enter medical cannabis dispensaries, where an employee (often referred to as a "budtender" [19]) discusses the various available products. However, most budtenders have little or no medical training [20]. Most of these states also allow limited home cultivation of cannabis for medical purposes. In states in which cannabis is recreationally legal (all of which also have medical cannabis laws), the medical and recreational dispensaries are often in the same store. Of the states with medical cannabis laws, most have legalized cannabis for the following indications: chronic pain, muscle spasticity and/or multiple sclerosis, epilepsy, anorexia/cachexia, HIV/AIDS, glaucoma, cancer, inflammatory bowel disease, and post-traumatic stress disorder (PTSD) [21]. In a recent nationwide review of state registry data, the most commonly listed qualifying conditions for medical cannabis certification were chronic pain, followed by multiple sclerosis, nausea, cancer, and PTSD [22]. Additionally, one survey study in California found that the most common self-reported indications for cannabis use were anxiety, chronic pain, stress, insomnia, and depression [23]. There are many gaps in the evidence examining the health effects of cannabis, though the literature is expanding rapidly. Many studies have been conducted outside the U.S., in part because of regulatory hurdles associated with the status of cannabis as a Schedule I controlled substance at the federal level, which designates it as having "no currently accepted medical use and a high potential for abuse" [24]. A recent National Academy of Sciences report provided a broad, high-level overview of contemporary knowledge about the pharmacology, regulation, benefits, and harms of cannabis for a variety of conditions [3]. It concluded that there is "substantial evidence that cannabis is an effective treatment for chronic pain in adults" [3]. However, the report did not distinguish the evidence for different types of chronic pain, did not use standard systematic review methods and terminology, and cautioned that little was known about the efficacy and dosing of the cannabis preparations available in the United States. Two recent systematic reviews that focused on the effects of cannabis for chronic pain found limited evidence that cannabis is effective for neuropathic pain (i.e.,
pain from peripheral or central nervous system lesions) and for multiple sclerosis-related pain and spasticity in studies of short duration, but insufficient evidence to draw conclusions about the effects of cannabis on nociceptive pain (i.e., pain from tissue damage such as from inflammation, trauma, or arthritis) [25,26]. In placebo-controlled trials of patients with neuropathic pain from a variety of causes, cannabis was associated with a small reduction in pain (about 1 point on a 10-point visual analog scale), and more patients treated with cannabis experienced a 30% or greater improvement in neuropathic pain than those assigned to placebo [25]. In patients with multiple sclerosis, cannabis formulations with a combination of THC and CBD in a 1:1 or 2:1 ratio were associated with improvements in pain and spasticity at up to 12-15 weeks of follow-up [25,27]. However, there are numerous limitations to this evidence base. First, findings were not consistent across studies. Second, most studies showing benefit reported outcomes after only a matter of hours to days, with the longest study lasting 15 weeks. Providers should be cautious in extrapolating data from short-term studies to the management of chronic pain, which often requires treatment for many months. Third, the formulations tested differed across studies, and often differed from the formulations that are commonly available to, and used by, patients in dispensaries [25]. For example, the best-studied formulation of cannabis is nabiximols, a cannabis extract that comes as an oromucosal spray of 2.5 mg THC/2.5 mg CBD; while it is licensed in other parts of the world, it is not available in the U.S. In studies that showed a benefit from nabiximols, participants used an average of 25 mg of THC over 24 h [25]. Oral cannabinoids may improve chemotherapy-induced nausea and vomiting [3]. The synthetic, prescription cannabinoids nabilone and dronabinol appear to be as effective in reducing chemotherapy-induced nausea and vomiting as older prescription anti-emetics, but the evidence is limited by methodologic flaws and is not representative of currently available anti-emetics and chemotherapy regimens. The authors of one review cautiously suggest cannabinoids may be a reasonable second-line option for patients who have failed or cannot tolerate conventional anti-emetics [28]. There are numerous clinical conditions for which cannabis or cannabinoids have been considered or even approved for use at the state level, despite a lack of controlled studies in many cases. There is largely insufficient evidence to support or refute the effectiveness of cannabis and/or cannabinoids for the following conditions: irritable bowel syndrome, amyotrophic lateral sclerosis, Parkinson's disease, dystonia, post-traumatic stress disorder, and glaucoma. There are very few high-quality studies examining the effects of cannabis and oral cannabinoids on weight loss and anorexia associated with HIV/AIDS and cancer. While there is insufficient evidence for the effectiveness of cannabis or cannabinoids for most forms of epilepsy, the FDA recently approved a highly purified CBD plant extract called Epidiolex® for the treatment of two rare childhood-onset epilepsy syndromes on the basis of trials showing significant reductions in seizure frequency [29,30]. There has been increased interest in additional potential therapeutic effects of CBD alone, in part because it is viewed as likely to have a more favorable adverse effect profile than THC-containing products.
While there is no conclusive evidence supporting health effects of CBD preparations beyond the epilepsy syndromes mentioned above, a growing body of evidence suggests CBD may have potential as an anxiolytic agent, though further research is needed [31,32].

Potential harms of cannabis use

Cannabis and other substance use disorders
Cannabis use is associated with the development of a cannabis use disorder (CUD), characterized by craving, tolerance, withdrawal, and continued use despite adverse social, vocational, or legal consequences of use (see Table 3) [33]. Authors examining data from multiple waves of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) sought to determine the association of cannabis use with subsequent mental health and substance use disorders. They found that among survey participants with past-year cannabis use, 36% met criteria for CUD; among those reporting cannabis use in wave 1, the incidence of CUD 7 to 8 years later was 25% [34]. Nearly half of those with CUD had moderate to severe disease, and men, low-income participants, adolescents, and young adults were at the highest risk of developing CUD [35]. Indeed, the risk of CUD was up to 50% greater among individuals who used daily and initiated use in adolescence [34,36]. Cannabis withdrawal is now recognized as a Diagnostic and Statistical Manual (DSM)-5 syndrome and is associated with symptoms that may develop up to 1 week after cannabis cessation and last up to several weeks [33]. Many of the symptoms of cannabis withdrawal, such as anxiety, depression, and insomnia, overlap with the symptoms that patients may be using cannabis to alleviate in the first place, which underscores the need for the public and providers to be aware of this withdrawal syndrome. Cannabis use is not only associated with increased risk of CUD, but also with a sixfold increase in the risk of developing any substance use disorder (SUD) [34]. From an emerging body of evidence with mixed findings, it is unclear whether those using cannabis only for medical purposes have a different risk of CUD and SUD from those using it for recreational purposes [37,38].

Other mental health harms
Substantial evidence suggests an association between cannabis use and the development of psychotic symptoms, with increasing risk among individuals with more frequent cannabis use [25,39]. Consumption of a large amount of THC can cause acute psychosis, but there are also data indicating that regular cannabis use, especially in young adults and among those with a genetic predisposition to schizophrenia, is associated with the development of chronic psychosis [39,40]. It is unclear whether cannabis use is associated with other mental health outcomes such as suicidal ideation, depression, anxiety, and mania [3,25].

Table 3. Stepwise assessment of cannabis use disorder
Step 1. Do you currently use cannabis? YES/NO
Step 2. If YES, administer the Cannabis Use Disorders-Short Form [78], e.g.: 1. How often during the past 6 months did you find that you were not able to stop using cannabis once you had started?

A large and growing body of evidence demonstrates an association between active, long-term cannabis use and small but significant negative effects on all domains of cognitive function, though it is unclear whether cognitive deficits persist in those who have stopped using cannabis [25,39]. Concerns about cognitive dysfunction are magnified in adolescents with cannabis use because brain development continues into young adulthood.
Findings from a birth cohort study suggested that ongoing cannabis use during adolescence is associated with neuropsychological decline broadly across domains of functioning, even after controlling for years of education. The study also found that cessation of cannabis use did not fully attenuate cognitive decline in adulthood [41]. Other studies focus more on the acute neuropsychological implications of cannabis use. The National Institutes of Health (NIH) is currently embarking on a large population study, the Adolescent Brain Cognitive Development (ABCD) Study, which will add significantly to the evidence base in this area [42].

Physical health harms
The literature does not show a direct association between prolonged cannabis use and measures of airflow obstruction or the development of lung cancer [25]. There are data suggesting respiratory harms in those with frequent smoked cannabis use (i.e., more than weekly), in older adults, and in those with medical comorbidities [25]. A few large epidemiologic studies suggest that prolonged daily cannabis smoking may be associated with a decline in lung function over two decades, and systematic reviews note an association between cannabis smoking and symptoms of chronic bronchitis [3,43,44]. An emerging case series of severe pulmonary illness associated with inhalation of vaporized products, typically bought illicitly and often containing THC, has recently been described [45]. The cause is unclear but is thought to be related to chemicals in vaping products or equipment [45], which underscores the uncertainty and potential danger of vaporizing unregulated cannabis products. While cannabis can increase heart rate, supine blood pressure, and postural hypotension, there is insufficient evidence to draw conclusions about its effects on cardiovascular health outcomes [46,47]. There is likewise insufficient evidence examining its effects on most types of cancer [25]. Driving under the influence of cannabis poses significant public health risks. Systematic reviews and meta-analyses suggest a low-to-moderate association between cannabis use and motor vehicle crashes, even when accounting for alcohol or other substance use [48,49]. Most of the studies included in these reviews are case-control studies or culpability studies; therefore, more research is needed to understand the complex relationship between driving under the influence of cannabis and vehicular injuries, especially as more states enact recreational cannabis legislation. Cannabis hyperemesis syndrome is a form of cyclic vomiting that is thought to be associated with chronic, regular cannabis use [50]. Most patients also present with abdominal pain, and many report improvement of symptoms with hot showers [50]. The treatment is discontinuation of cannabis, though improvement may take many weeks and the effectiveness of abstinence has not been well studied [50]. A variety of case series suggest potential efficacy for a variety of pharmacotherapies, though there are no trial data to guide treatment choice [51]. Another concern regarding recreational cannabis laws is the risk of pediatric exposure and overdose, especially related to edible products. In states that have enacted medical and recreational cannabis laws, the absolute numbers of hospitalizations and poison control center calls are low, but the rates of unintentional pediatric exposures continue to increase compared with states that have not enacted these laws [52][53][54].
This should prompt policymakers to pass more stringent regulations regarding cannabis product packaging, labeling, and marketing.

Synthetic cannabinoid harms
Synthetic cannabinoids are man-made cannabis derivatives, often marketed as safe, legal alternatives, especially in states without recreational cannabis laws. The most common route of administration is inhalation, but they can be ingested as well. They are often substantially more potent CB1 receptor agonists than plant-derived THC [55]. Clinical effects may be similar to intoxication with cannabis but may present more acutely and include tachycardia, vomiting, ataxia, violent behavior, suicidal ideation, sedation, and slurred speech. There have been case reports and case series of "outbreaks" of mass intoxication with synthetic cannabinoids such as FUBINACA and K2 [55][56][57].

Cannabis and opioids
Pain is a leading symptom driving non-recreational cannabis use, whether medically authorized or not. Despite recent efforts to reduce prescribed opioid therapy for pain in the U.S., and to increase treatment for opioid use disorder (OUD), opioids remain a prominent treatment for both acute and chronic pain, and untreated illicit opioid use has reached epidemic levels [58]. Given widespread opioid and cannabis use, both illegal and prescribed, cannabis and opioid co-use is increasingly common.

Potential benefits of co-use: opioid-sparing effects
Widely cited ecological studies have demonstrated an association between states allowing medical and/or recreational cannabis use and declining opioid prescriptions [59], OUD hospitalizations [60], and overdose rates [61], driving the hypothesis that opioid users may be substituting cannabis and decreasing risky opioid use. However, a more recent analysis demonstrated increased rates of opioid overdose death among states with more liberal cannabis laws, including those permitting recreational cannabis use [62]. At the individual level, there are studies suggesting that cannabis augments the analgesia produced by opioids [63,64], potentially allowing opioid dose-lowering and thus possibly enhancing safety. One cross-sectional study of 244 individuals with chronic pain who were receiving cannabis from a dispensary found that these individuals reported decreased opioid use over time because of the benefits of cannabis for pain [65]. An ongoing online survey of 1321 participants found that 53% reported substituting cannabis for opioids, citing fewer side effects and better symptom management as their rationale for doing so [66]. Prospective, individual-level studies are needed to examine cannabis' potential role as an opioid-sparing agent.

Potential harms of co-use
With respect to overdose, the most common opioid-related cause of death, there is a dearth of data on the potential additive risk of cannabis co-use [67]. As for non-overdose harms, the synergistic effects of co-use on psychomotor slowing, depressed sensorium, and delirium (neurocognitive effects) may lead to increased risk of motor vehicle accidents, falls, and trauma, all known dose-dependent risks of opioids by themselves [67,68]; more epidemiologic studies are needed. With respect to concerns about associations with SUD, observational data suggest that cannabis use is associated with opioid misuse among patients on long-term opioid therapy [69], with a more recent study demonstrating nearly sixfold increased odds of non-medical opioid use among those with cannabis use [70].
A practical challenge that can lead to unintended harm in the co-use of cannabis and opioids is that opioid prescribing guidelines typically recommend a single prescriber (or team) who makes treatment decisions with the patient [71,72]. In practice, a provider who offers medical cannabis certification is likely to be someone other than the opioid prescriber, giving rise to potential conflicts in treatment philosophies. The opioid prescriber's assessment of potential benefit and risk may differ from the cannabis certifier's, and prescribers sometimes abruptly discontinue opioid therapy upon evidence of cannabis use, whether certified or not [73,74]. Finally, issues of provider liability may become complex if a patient who has both an opioid prescriber and a medical cannabis certifier were to overdose.

Cannabis for treatment of OUD
Recently, some states have introduced the treatment of OUD as a medical indication for cannabis [75]. However, there is very little research examining the effectiveness of cannabis for this indication. Moreover, there are three evidence-based medication treatments for OUD (methadone, buprenorphine, and naltrexone), each of which has robust data from randomized controlled trials (RCTs) demonstrating improvement in important patient- and public-health outcomes [14]. Given the existence of evidence-based therapies for OUD, the lack of research examining cannabis for this indication, and the potential harms, we agree with other commentators that the use of cannabis for this indication is premature [75].

Provider advice for individuals using cannabis
There is currently not enough evidence to suggest that the long-term benefits of cannabis are likely to outweigh the harms, though our understanding of the balance of benefits and harms of cannabis is very likely to change as research accumulates. While there is no conclusive evidence to support providers' endorsement of widespread medicinal use of cannabis, the reality of clinical practice today is that patients have access to and are using cannabis, and it is the provider's duty to play a role in reducing any likelihood of harm. When asking patients about cannabis use, it is important to do so in a routine, non-judgmental fashion, and to ask about it separately from the illicit drug history, because cannabis is legal in many parts of the country. The provider may also want to ask about the reasons for use, whether recreational, medical, or both, because the reason for use may influence its frequency and route of administration; for medical use, this also allows for follow-up to assess whether or not use for medical purposes is having the intended effect [76,77]. When a provider encounters a patient who uses cannabis, the first step is to determine the quantity, frequency, and route of administration, as well as to assess for CUD. There are no widely validated short CUD screeners, but one option is the three-item Cannabis Use Disorders-Short Form [78]; if the results are positive, the provider should follow up by assessing for CUD with DSM-5 criteria (see Table 3). Treatment of CUD should include evidence-based behavioral therapies targeted at cannabis cessation, such as cognitive-behavioral therapy and motivational enhancement therapy, along with abstinence-based incentives [79].
While a number of pharmacotherapies for CUD have been evaluated, including gabapentin, N-acetylcysteine, antidepressants, and cannabinoids, the evidence is currently not strong enough to support the routine use of pharmacotherapy for CUD [80]. For patients who use cannabis but do not meet criteria for CUD, providers should counsel them on potential harms and provide psychoeducation and behavioral support. Given the risk of withdrawal with CUD, providers should counsel patients who continue to use cannabis to avoid frequent, heavy use. Indeed, simply outlining the symptoms of withdrawal, such as depression, anxiety, insomnia, and restlessness, may help patients avoid a dangerous cycle of self-medication for withdrawal symptoms. It is likely that THC in particular is responsible for many of the potential mental health harms, so providers should suggest that patients avoid high-THC products. Patients at risk for mental illness, especially psychotic spectrum disorders, should be counseled to avoid cannabis. The formulation of cannabis and its route of administration influence its potential adverse effects. Patients should be made aware of the potent metabolite and the delayed onset of action associated with edible products, and should accordingly use low doses and avoid rapid dose escalation. Cannabis-naïve patients should be especially cautious with edible products, and all individuals should avoid dabbing altogether given its high potency and risk for adverse effects. Individuals should avoid frequent and long-term cannabis smoking, as well as long, deep breath holds during inhalation. Providers should warn against use of any unregulated product obtained outside of dispensaries, including products advertised as CBD-only, given the existence of dangerous synthetic cannabinoids, labeling inaccuracies, and the risk of severe illness such as vaping-related pulmonary illness. For patients who are prescribed other central nervous system (CNS)-acting agents, including, for example, opioids, benzodiazepines, muscle relaxants, and gabapentinoids, we would encourage prescribers of those medications to caution patients that the additive effect of cannabis on psychomotor slowing and other CNS side effects has not been well studied. We recommend a conservative approach whereby providers decrease CNS-acting agent polypharmacy (in safe, patient-centered ways) prior to, or concomitant with, patients initiating cannabis regimens.

Providers' responsibility to public health
With the changing landscape of cannabis legislation throughout the U.S., and more states legalizing recreational cannabis, medical providers need to mindfully frame discussions of cannabis use with patients and families. To counter the inaccurate public perception of cannabis as harmless, the medical community must be prepared to synthesize the evidence and deliver clear public messaging on the potential harms of cannabis use. Current evidence suggests that recreational cannabis use is associated with other substance use and SUDs; mental health conditions; impairment in memory, learning, and attention; respiratory symptoms; motor vehicle crashes; and overdose injuries in pediatric populations [3,25,81]. Conversely, advocates of legalization argue that the criminalization of cannabis has had substantial social justice and public health implications through the contact of millions of Americans with the criminal justice system [82].
There were 8.2 million cannabis-related arrests from 2001 to 2010, with African Americans four times more likely to be arrested; most of these arrests were for simple possession [83]. The downstream effects of these arrests can be detrimental, as the existence of a criminal record can act as a barrier to employment, housing, and public services for those arrested [83]. Ultimately, legalization will help to mitigate some of these structural inequities. However, legalization must be accompanied by a concerted public health effort to counter the potential harms of cannabis use. This is particularly important for adolescent and young adult populations, because most harms are related to use of smoked cannabis products among individuals who began smoking in a heavy, habitual way in adolescence. Development of prevention messages geared to adolescents and young adults is vital. Data from the Monitoring the Future study, an annual survey of 8th, 10th, and 12th grade students in the U.S., suggest that for three decades there was an inverse relationship between the perceived risk of cannabis and cannabis use among high school students. Over the last few years, perceived risk has continued to decline steeply, but there has not been a concomitant further rise in use, creating an opportunity for the medical community to focus efforts on effective prevention interventions [84].

Conclusions
The changing landscape of cannabis legislation throughout the U.S. has had an impact on the prevalence of cannabis use and on perceptions of its safety and harms. With increasing frequency, patients are turning to providers to inquire about the full spectrum of cannabis effects, from medical use to potential harms. Given the widespread use of cannabis to manage symptoms of chronic pain at the same time as the country is facing an opioid epidemic, it is not uncommon for patients to have questions about the use of cannabis and opioids for pain. It is imperative that primary care providers understand the evidence in order to engage in balanced discussions with patients. Additionally, primary care providers need to have tools and information to deliver consistent, evidence-based assessment, treatment, prevention, and harm reduction.
Optimal Configuration and Sizing of an Integrated Renewable Energy System for Isolated and Grid-Connected Microgrids: The Case of an Urban University Campus

Although renewable technologies are progressing fast, there are still challenges, such as the reliability and availability of renewable energy sources and their capital-intensive costs, that hinder their broad adoption. This research aims at developing a configuration-sizing approach to enhance the cost efficiency and sourcing reliability of renewable energies integrated in microgrids. To achieve this goal, various technologies were considered for system configuration, such as solar PV, wind turbines, converters, and batteries, with minimization of net present cost (NPC) as the objective. Grid connection scenarios with up to 100% renewable contribution were analyzed. The results show that the integration of renewable technologies with some grid backup could reduce the levelized cost of energy (LCOE) to about half of the price of the electricity that the university purchases from the grid. Also, different kinds of solar tracker systems were studied; the outcome shows that by using a vertical axis solar tracker, the LCOE of the system could be reduced by more than 50 percent. This research can help decision-makers opt for the best scenarios for generating reliable and cost-efficient electricity.

Introduction
Achieving the global aim of access to a reliable energy supply and mitigation of emissions requires increasing the use of renewable technology as the only alternative. The worldwide energy sector is facing a quick transition from traditional, large, centralized electricity generation to decentralized small generation units that can be operated as microgrids [1]. Typically, an urban microgrid is connected to the centralized utility grid but, in case of power failure or grid stress, can act as a completely isolated system [2]. The use of microgrids with integrated renewable energy systems (IRES) in urban areas is a crucial strategy for attaining emission reductions, or even net-negative-emission cities that capture more carbon than they emit in total by supplying their own carbon-free electrical energy [3]. There are several agreements around the world (such as the Paris Agreement in 2015) that aim at keeping the rise in global average temperature below 2 °C by changing the ratio of renewable energy to fossil fuels [4]. These systems commonly use renewable energy resources such as wind, biomass, and solar radiation. Also, with the integration of battery storage (e.g., Li-ion batteries), the systems can operate with more flexibility, shift peaks, and/or generate electricity during grid outages. An interesting feature of microgrids is to let prosumers (consumers that also produce energy) actively trade energy in their community and make the local generation and consumption of reliable energy possible.

In 2018, Carballo et al. [26] provided a robust tool for the solar tracking field, decreasing the capital cost associated with typical control systems. They designed a new solar tracking system based on computer vision, implemented on economical open-source hardware. The optimal sizing of an islanded PV system under different sun-tracking systems was investigated by Krishan et al. [27] to supply electricity to a household in India, a location with good potential for solar power generation.
Considering six different types of solar trackers, their results show that, although a system using dual-axis adjustment has a slightly higher cost of energy, it performs better in terms of electricity generation by absorbing more solar radiation. Algarani et al. [28] designed an optimal grid-connected PV system considering different types of solar trackers in order to enhance the performance of the proposed system. Their results show that utilizing two-axis solar trackers could generate 34% more electricity in comparison with a fixed system. However, the feasibility of using other sources of energy, such as wind and battery storage, and a comparison of standalone systems with grid-connected ones were not assessed in that research.

Designing and sizing an optimal integrated renewable energy system for both isolated and grid-connected microgrids, and comparing their economic and technological performance, is the main objective of this paper. In addition, the economic feasibility of grid-connected systems subject to the choice of solar trackers is evaluated. To the best of the authors' knowledge, this is the first attempt to compare grid-connected and isolated integrated energy systems with and without solar trackers. This paper is structured in five sections. Section 1 presents the topic and a comprehensive literature review on the integration of renewable technologies and microgrids. In Section 2, the methodology of the research is explained. Section 3 provides a case study. The results, including the economic, technical, and sensitivity analyses, are explained in Section 4. Finally, the conclusions provide a summary of the paper and suggestions for future work in Section 5.

Methodology
The optimal planning of components in isolated and grid-connected microgrids using solar trackers was considered in this research, accompanied by a sensitivity analysis. The aim is to determine the optimal size of the components for different system configuration scenarios. There are several available energy system optimization tools, such as HOMER (Hybrid Optimization of Multiple Energy Resources) [29], TRNSYS [30], iHOGA [29], and RETScreen [31]. HOMER is a comprehensive software package that has been used extensively over the last decade due to its high capability in designing optimal systems and its powerful optimization engine [32]. Considering the flexibility of HOMER with respect to input load data (HOMER can accept input loads at different time steps), it was chosen over the other tools for this study.

Optimization Framework
The load demand of the buildings, meteorological data, the system configuration and components, economic factors (e.g., the initial, operation and maintenance (O&M), and replacement costs of the components, as well as the inflation and discount rates), and other constraints such as the search space for components are the inputs to the optimization process. Figure 1 illustrates the proposed framework for the optimal sizing and configuration of microgrid systems with renewable energy integration. The inputs that feed the optimization include the electricity demand of the case study, climatic data, the selected technologies and their technical constraints, and pricing information such as the capital, O&M, and replacement costs of each technology. In the optimization stage, the feasibility of the generated configurations is analyzed through an iterative process.
The evaluation of the optimal configuration is accomplished by optimizing the objective function, which is the net present cost (NPC) of the system. The decision variables are specified based on the available resources of the location and the amount of energy consumption; in this research, the number of wind turbines, the sizes of the PV array and converter, and the number of batteries are the decision variables. The objective function is minimized subject to the technical constraints of the components and a load balance constraint, which guarantees minimal deficiency of the power supply. Finally, if feasible results exist, the optimizer lists the different configurations in order of increasing NPC. In the sensitivity analysis, different types of solar trackers are investigated to improve the performance of the system and reduce its cost, and the resulting solar fraction is evaluated. In addition, further sensitivity analyses address the land requirement and the price of PV.

Economic Factors
The net present cost (NPC) is used as the main economic metric, or objective, to rank different configurations. NPC is the present value of all the costs incurred by the system over its lifetime minus the present value of all the earnings it gains over its lifetime:

NPC = \frac{C_{ta}}{CRF(i, n)}  (1)

where C_{ta} is the total annual cash flow of the system, i is the interest rate, and n is the project lifetime. CRF(i, n) is a capital recovery factor that gives the present value as a function of the annuity and can be expressed as Equation (2):

CRF(i, n) = \frac{i(1 + i)^n}{(1 + i)^n - 1}  (2)

The other economic factor used for comparing different configurations is the levelized cost of energy (LCOE), which expresses the cost of electricity generated by each IRES over its lifetime per kWh of electrical energy:

LCOE = \frac{C_{ta}}{EL_p + EL_{gs}}  (3)

where EL_p is the primary electricity load used as input data and EL_{gs} is the total electricity sold to the grid utility.
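A minimal Python sketch of Equations (1)-(3), embedded in a toy version of the iterative ranking-by-NPC described above, is shown below. All numeric inputs are illustrative placeholders, not values from the paper, and the cost model inside the loop is a stand-in for the real component costing.

```python
from itertools import product

def crf(i: float, n: int) -> float:
    """Capital recovery factor, Equation (2)."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def npc(c_ta: float, i: float, n: int) -> float:
    """Net present cost, Equation (1): total annualized cash flow / CRF."""
    return c_ta / crf(i, n)

def lcoe(c_ta: float, el_p: float, el_gs: float) -> float:
    """Levelized cost of energy, Equation (3), in USD/kWh."""
    return c_ta / (el_p + el_gs)

# Toy search over configurations, ranked by NPC (illustrative numbers only)
i, n = 0.06, 25          # assumed interest rate and project lifetime
el_p = 20e6              # assumed primary load served per year (kWh)
candidates = []
for pv_kw, n_wt in product([5000, 10000, 20000], [0, 3, 10]):
    c_ta = 60 * pv_kw + 2500 * n_wt + 300_000   # placeholder annualized cost model
    el_gs = 0.1 * pv_kw * 1200                  # placeholder annual grid sales (kWh)
    candidates.append((npc(c_ta, i, n), lcoe(c_ta, el_p, el_gs), pv_kw, n_wt))

for rank, (c_npc, c_lcoe, pv_kw, n_wt) in enumerate(sorted(candidates), 1):
    print(f"{rank}: PV={pv_kw} kW, WT={n_wt}, NPC={c_npc:,.0f} USD, LCOE={c_lcoe:.4f}")
```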
Components
Due to land requirement limitations in urban areas, there are a limited number of options for component configurations for on-site renewable generation. In this research, solar PV, wind turbines, and a converter were considered as the main technologies, coupled with batteries. Figure 2 shows the schematic of the assumed components in both scenarios.

PV Panels
The electricity generated by the PV panels is supplied to the DC line and converted to AC power by an inverter. The total installed cost of PV panels depends on the type of project (small, medium, or large scale), the PV type (polycrystalline, monocrystalline, thin-film), the manufacturer, etc. Based on a recent study of eight large-scale projects [33], the total installed cost of PV panels ranges from 750 to 1320 USD/kW (excluding the inverter price). On that basis, in this paper, 750 USD/kW is considered as a lower estimate for the installed PV panel cost, and 14 USD per year [34] is set for the maintenance of each kW of installed PV.
To provide a sufficient plan for decision-makers, the sensitivity of the installed PV cost at its average and maximum limits is also evaluated in the results section. To account for the power output reduction caused by dust, shading, temperature fluctuation, and other losses, a derating factor of 80% is assumed. The power output of the PV array is expressed by the following equation [35][36][37]:

P_{PV} = P_{rated} \, f_d \, \frac{I_T(t)}{I_{T,St}(t)} \left[ 1 + \mu_P (T_C - T_{C,St}) \right]  (4)

where P_{rated} is the rated capacity of the PV array (kW), f_d is the derating factor, I_T(t) is the real solar radiation (kW/m²), I_{T,St}(t) is the solar radiation under standard conditions (kW/m²), \mu_P is the power temperature coefficient, and T_C and T_{C,St} are the PV temperatures under real and standard conditions, respectively.

Wind Turbines
Several factors should be considered to correctly choose a wind turbine for a given location [38]. Studying the power curves of turbines is needed to evaluate the power output range for various wind speeds and hub heights (rotor height above the ground). For the urban area of the case study with low wind speeds, a 10 kW Eocycle EO10 wind turbine with 2.75 m/s cut-in and 20 m/s cut-out wind speeds and a 16 m hub height was chosen. The purchase and replacement cost of each 10 kW wind turbine was assumed to be 30,000 USD, with 600 USD per year for operation and maintenance [34]. The power output of the wind turbine was evaluated according to the following equation [34]:

P_{WT} = \frac{1}{2} \rho A v_{hub}^3 \eta_g \eta_b

where \rho is the air density (kg/m³), A is the rotor cross-sectional area (m²), \eta_g is the generator efficiency, \eta_b is the gearbox efficiency, and v_{hub} is the wind speed at the hub height, calculated according to Equation (5) [37]:

v_{hub} = v_{an} \, \frac{\ln(h_{hub}/h_0)}{\ln(h_{an}/h_0)}  (5)

where v_{an} is the wind speed at anemometer height (m/s), h_{hub} is the hub height of the turbine, h_{an} is the anemometer height, and h_0 is the surface roughness length.

Converter
A converter is needed to convert the power supplied to the DC line to AC. The maximum input voltage of the power converter should match the maximum output voltage of the components. In this study, a power converter with an initial and replacement cost of 90 USD per kW, 95 percent efficiency, and a 15-year lifetime is considered [26].

Battery
In the isolated microgrid scenario, a battery is required due to the intermittency of a 100 percent renewable energy system. A generic 1 kWh lithium-ion battery with a voltage of 6 V and 90 percent efficiency is considered in the case study. The initial and replacement costs were both set to 156 USD per kWh [39], and operation and maintenance is assumed to be 10 USD per year for each kWh [40]. The battery lifetime is taken as 15 years, while the minimum state of charge over the lifetime is specified as 20 percent. The battery capacity is calculated using the following equation:

C_B = \frac{L \times AD}{DOD \times \eta_b \times \eta_{con}}

where DOD is the depth of discharge, i.e., the proportion of the battery's discharge to its total capacity, \eta_b and \eta_{con} are the battery and converter efficiencies, respectively, L is the overall load demand (kWh/day), and AD is the number of autonomy days, indicating the period of time the battery must last during a power outage.
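To make the component models concrete, the short Python sketch below evaluates the PV output, hub-height wind speed, and battery capacity relations above. Values the paper states (derating factor, anemometer height, hub height, efficiencies, 20% minimum state of charge) are used directly; the remaining inputs, such as the temperature coefficient and the surface roughness length, are illustrative assumptions.

```python
import math

def pv_output(p_rated, f_d, i_t, i_t_st=1.0, mu_p=-0.004, t_c=25.0, t_c_st=25.0):
    """PV array output in kW, Equation (4). mu_p of -0.4%/degC is an assumption."""
    return p_rated * f_d * (i_t / i_t_st) * (1 + mu_p * (t_c - t_c_st))

def hub_wind_speed(v_an, h_hub, h_an, h_0):
    """Log-law scaling of the anemometer wind speed to hub height, Equation (5)."""
    return v_an * math.log(h_hub / h_0) / math.log(h_an / h_0)

def wind_power_kw(rho, area, v_hub, eta_g, eta_b):
    """Wind turbine output: 0.5*rho*A*v^3 scaled by generator and gearbox efficiency."""
    return 0.5 * rho * area * v_hub ** 3 * eta_g * eta_b / 1000.0

def battery_capacity(load_kwh_day, autonomy_days, dod, eta_b, eta_con):
    """Required battery capacity (kWh) for the given number of autonomy days."""
    return load_kwh_day * autonomy_days / (dod * eta_b * eta_con)

# 100 kW array at 0.6 kW/m2 irradiance, with the paper's 80% derating factor
print(pv_output(p_rated=100, f_d=0.80, i_t=0.6))
# Montreal's 4.29 m/s at the 50 m anemometer, scaled to the 16 m hub (h_0 assumed)
print(hub_wind_speed(v_an=4.29, h_hub=16, h_an=50, h_0=0.5))
# DOD of 0.8 follows from the 20% minimum state of charge; the load is illustrative
print(battery_capacity(load_kwh_day=50_000, autonomy_days=1,
                       dod=0.8, eta_b=0.90, eta_con=0.95))
```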
Solar Tracker
With the fast progress of renewable energy technologies, solar trackers are often used to increase the energy yield of solar PV [41]; a tracker can improve the energy output of a PV array by up to 40 percent. Of the many trackers available in the market, the following were considered in this paper [42]:
- Horizontal axis solar tracker with continuous adjustment (HAST): horizontal east-west axis rotation with continuous adjustment of the elevation angle;
- Vertical axis solar tracker with continuous adjustment (VAST): the elevation angle is fixed, and the panel rotates continuously around the vertical axis;
- Two-axis solar tracker (TAST): panels rotate around both the horizontal and vertical axes.
The prices of the horizontal, vertical, and two-axis trackers were set to 870, 255, and 1000 USD, respectively [28].

Case Study
One of the largest buildings at Concordia University, the so-called EV building, located on the Sir George Williams Campus in downtown Montreal (Quebec), is considered as the case study. Although there have been remarkable efforts to uphold the slogan of Quebec's most energy-efficient major university for over twenty consecutive years [43], the amount of electricity consumption is still very high, and the contribution of local renewable energy relative to the total consumed energy is negligible. There are three metering systems in the EV building that record the electricity consumption of the building every 15 min, and the past six years of historical electricity load records were available for this study.

Solar Radiation and Wind Speed
Several valid sources could be used to obtain climatic information such as solar radiation and wind speed. In this study, monthly averaged values of global horizontal irradiance (GHI) were downloaded from NASA's surface meteorology and solar energy database, which is based on records from the last 22 years [44]. As shown in Figure 3, the annual average GHI is 3.52 kWh/m²/day, and from March to September the irradiance is above this average. Solar radiation in summer is high, reaching 5.61 kWh/m²/day in June, whereas in winter it drops below 2 kWh/m²/day in December. With respect to the wind speed profile, the monthly average meteorological wind data were estimated at an anemometer height of 50 m above the surface of the earth. As Barrington-Leigh and Ouliaris pointed out in their study [45], although some parts of Quebec province have good potential for onshore wind power development, Montreal, with an annual average wind speed of 4.29 m/s, does not present ideal weather conditions for generating electricity from wind. Nevertheless, there are still good chances of covering the irradiance deficiency in cold seasons, since the lowest wind speeds occur in summer and the highest in winter, while from May to September, when the GHI is relatively high, the wind speed is below average. Figure 3 displays how the two renewable resources can operate complementarily.

Load Data
Although a complete set of load data was provided by the facility management of Concordia University for a whole year, some data preparation was required before the raw data could be used.

Data Preparation
Twelve datasets, one for each month, were received as load data. Each dataset includes 15 features: the first two attributes indicate the date and time step (every 15 min), and the other 13 attributes refer to the electricity consumed on different floors or sections of the building. For example, one of the features gives the electricity consumption of the floor where all the Heating, Ventilation and Air Conditioning (HVAC) systems are installed, and the major part of the electricity consumption belongs to this floor.
Dealing with more than 45,000 data points in each dataset can be considered a big-data task. The sizing optimization model needs the total hourly load demand of the building for a whole year, using just two attributes: time step (h) and load demand (kW). In addition, missing data and outliers had to be identified before using the data. Data preparation and preprocessing techniques, such as outlier detection and removal of missing values, were performed in the Python programming language.

Load Profile
After preprocessing, the final electricity load curve of the building was obtained, as shown in Figure 4a. The load profile follows a similar trend throughout the year, with some outliers in November, December, and January related to holiday periods with lower electricity consumption. This can be explained by the fact that Concordia University mainly uses natural gas to generate heat during cold seasons, while an HVAC system consumes electricity during summer.

Grid Schedule and Feed-In Tariff
One of the two scenarios analyzed was a grid-connected system. In this system, no storage was considered, and electricity can always be supplied by the grid utility if needed. Since the EV building of Concordia University is located in the middle of downtown and is already connected to the grid, there is no cost for grid extension or interconnection charges. Also, the capacity of grid sales and purchases was assumed to be infinite. The grid rate schedule was planned at two different rates. The time interval of the highest electrical power demand from the grid, known as peak demand, follows Hydro Quebec's definition of peak demand events and is from 6 a.m. to 9 a.m. and 4 p.m. to 8 p.m. [46]. The other time spans of the day were considered off-peak.
Under tariff DP, the average price of electricity in Montreal is 0.0894 USD per kWh in off-peak hours, while the cost of purchasing power was assumed to be 0.12 USD per kWh in peak hours. In addition, the sell-back price of excess electricity to the grid was set at 0.045 and 0.06 USD per kWh for off-peak and peak hours, respectively [46].
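A small Python sketch of this two-rate schedule, using the peak windows and prices stated above, is given below; treating the hour boundaries as inclusive at the start and exclusive at the end is an assumption, since the paper does not specify this detail.

```python
PEAK_WINDOWS = [(6, 9), (16, 20)]  # Hydro Quebec peak events: 6-9 a.m., 4-8 p.m. [46]

def is_peak(hour: int) -> bool:
    """True if the hour of day falls within a peak-demand window."""
    return any(start <= hour < end for start, end in PEAK_WINDOWS)

def purchase_price(hour: int) -> float:
    """Grid purchase price (USD/kWh) under tariff DP."""
    return 0.12 if is_peak(hour) else 0.0894

def sellback_price(hour: int) -> float:
    """Feed-in price (USD/kWh) for excess electricity sold to the grid."""
    return 0.06 if is_peak(hour) else 0.045

# Example: grid cost of a constant 1 kW load over one day
daily_cost = sum(purchase_price(h) for h in range(24))
print(f"Cost of a constant 1 kW load for one day: {daily_cost:.2f} USD")
```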
Results and Discussion
The results of comparing grid-connected and isolated microgrids, as well as an analysis of the effect of using solar trackers, are discussed in the following sections.

Comparing Grid-Connected and Isolated Scenarios
The three best integrated renewable systems proposed by the optimization engine are shown in Table 1. For the grid-connected scenario, the best configuration uses only PV and no wind turbine. As mentioned in the grid feed-in tariff section, the cost of electricity that Concordia University pays is 0.0894 USD per kWh; with the best grid-connected configuration, this cost can be decreased to 0.0472 USD per kWh. This means that by investing 15.7 million USD as the initial capital cost for purchasing and installing PV panels and converters and connecting the whole system to the grid, the cost of energy could be about 50 percent less than the grid electricity cost the university currently pays. Adding three 10 kW wind turbines in the second proposed configuration makes the cost of energy even slightly lower than in the first configuration; however, since the net present cost of the system increases slightly, this configuration is ranked second. In the last proposed configuration, the wind turbine is selected as the only component, with no converter, since the wind turbine's electrical output is transferred directly to the AC line. Although the third proposed system has the highest cost of energy among the three configurations, its cost is still close to the current grid price.

The proportion of electricity generated by the PV panels in the G1 configuration of the grid-connected scenario varies across the months of the year, depending on solar radiation. Figure 5, which illustrates the solar fraction in the monthly average electricity production, shows that the solar fraction is always higher than 50%, even in winter when the solar irradiance is meager. The PV panels in the system produce electricity during the daylight period, and this interval varies over the year (see Figure 6).

On the other hand, for the standalone scenario with 100% renewables and no grid backup, the situation is different, and the cost of energy for the best-proposed configuration (I1) is more than three times the price that Concordia University now pays for grid electricity. Furthermore, the net present cost of the system is extremely high, which is not appealing for investors. The best configuration of the isolated scenario might even be economically feasible in locations where the price of electricity exceeds 0.3 USD per kWh, but, as Quebec has the lowest electricity cost of all Canadian provinces [47], this system is not economically feasible here. Figure 7 presents a comparison between the two scenarios in terms of the net present cost of each component. The main reason for the high investment required for an isolated microgrid is the cost of battery storage. Although battery technology has progressed very quickly over the last decade and the price of batteries has diminished tremendously, storage cost is still one of the main challenges of isolated microgrids.
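For intuition, the roughly 50 percent LCOE reduction can be turned into a back-of-the-envelope payback estimate, as sketched below. The annual load is an assumed placeholder; the paper derives its payback figures from the full optimization results, not from this simple arithmetic.

```python
investment = 15.7e6                         # initial capital cost (USD, from the paper)
grid_price, system_lcoe = 0.0894, 0.0472    # USD/kWh (from the paper)
annual_load_kwh = 30e6                      # assumed placeholder, not from the paper

annual_savings = annual_load_kwh * (grid_price - system_lcoe)
print(f"Simple payback: {investment / annual_savings:.1f} years")
```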
On the other hand, for the standalone scenario with 100% renewables and no grid backup, the situation is different: the cost of energy for the best proposed configuration (I1) is more than three times the price that Concordia University currently pays for grid electricity. Furthermore, the net present cost of the system is extremely high, which is not appealing to investors. The best configuration of the isolated scenario might even be economically feasible in locations where the price of electricity exceeds 0.3 USD per kWh; but, as Quebec has the lowest electricity cost among the Canadian provinces [47], this system is not economically feasible here. Figure 7 presents a comparison between the two scenarios in terms of the net present cost of each component. The main reason for the high investment required by an isolated microgrid is the cost of battery storage. Although battery technology has progressed very rapidly over the last decade and battery prices have fallen tremendously, storage cost is still one of the main challenges of isolated microgrids.

Solar Tracker

After opting for the first configuration of the grid-connected scenario as the most economically feasible option, there were still components that could be added to the system to improve its operation and efficiency. The effect of using three different kinds of solar trackers on the economics of the system is summarized in Table 2, from which the following points can be drawn:
- Horizontal and two-axis solar trackers were not economically feasible and raised both the cost of energy and the net present cost of the system. Using vertical trackers, by contrast, not only reduces the LCOE and NPC by about 10 cents and by more than one million USD, respectively, but also improves the overall performance of the system by increasing the renewable fraction of the IRES from 68% to 73.5%. It also makes the system much more appealing to investors by lowering the payback period from 12 years to 7.4 years.
- For the horizontal and two-axis trackers, the excess electricity and the payback period of the system decreased; as shown in Figure 8, with these trackers the optimal configurations shift to grid-centered systems with a low renewable fraction. For the vertical trackers, on the other hand, their good economics lead the optimizer to select more kW of PV panels, so the PV output increases, as seen in Figure 9.
Land Requirement

Evaluating the space required for local renewables is one of the essential tasks in designing microgrids for urban areas. Typically, the roof area of a building is the main zone used for installing the components. Concordia University's available roof area for its main downtown-campus buildings was extracted from the CityGML 3D data model of Montreal, as listed in Table 3. As reported in Table 1, 19,248 kW of PV is needed for the best configuration of the grid-connected scenario. Assuming 1 kW/m² irradiance and a panel efficiency of 16.4%, about 6 m² of area is needed to install 1 kW of PV panels [48]; hence, the land requirement is about 115,488 m². As presented in Table 3, the total available roof area of all campus buildings is only 24,487 m². Land requirement is therefore one of the substantial challenges that must be addressed. Reducing the size of the local PV generation leads to lower solar fractions and less excess electricity. A sensitivity analysis was performed for different amounts of excess electricity and the resulting sales capacity to the grid, as summarized in Figure 10.
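The land-requirement arithmetic above can be reproduced directly. The short check below is ours; the variable names and rounding are not from the paper.

```python
# Back-of-the-envelope check of the land-requirement figure quoted in the text.
PV_CAPACITY_KW = 19_248      # PV size of the best grid-connected configuration (Table 1)
IRRADIANCE_KW_M2 = 1.0       # standard test-condition irradiance
EFFICIENCY = 0.164           # assumed module efficiency (16.4%)

# Area needed per kW of installed PV: 1 / (irradiance * efficiency) ~= 6.1 m²/kW,
# which the paper rounds to about 6 m²/kW.
area_per_kw = 1.0 / (IRRADIANCE_KW_M2 * EFFICIENCY)
total_area = PV_CAPACITY_KW * 6          # using the paper's rounded 6 m²/kW
print(f"{area_per_kw:.2f} m²/kW -> total ≈ {total_area:,} m²")  # ≈ 115,488 m²
```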
Although the LCOE of the system nearly doubles as sales to the grid decrease, the required land could be reduced from 192,480 m² to about 30,000 m². Another option for reducing land requirements is the use of solar trackers; a sensitivity analysis was therefore also performed to capture the impact of vertical solar trackers. Figure 11, which plots the land requirement and LCOE of the system with a vertical solar tracker at different sales capacities, shows that the tracker reduces both the required land (to about 24,000 m²) and the LCOE compared with not using a tracker. The outcome of the land-requirement sensitivity analysis shows that, in the case of land limitations, which usually occur in dense urban areas, reducing sales to the grid shrinks the system size and the required space accordingly. In addition, the results show that, even with zero sales of electricity to the grid, the designed system is still economically feasible and competitive with the current means of supplying electricity to the university buildings. Furthermore, a vertical solar tracker could make the designed system even more viable and practical, as the required land then falls below the available roof area of the university.

The Installed Cost of PV

As mentioned in prior sections, the cost of PV depends on different factors and spans a wide range. Since the impact of the PV cost on the final cost of energy and on the configuration of the system is considerable, evaluating this effect is essential.
In a recent study from 2018 [33], the total installed cost of PV panels ranges from 750 to 1320 USD/kW; in this research, 750 USD/kW was taken as the price of installed PV panels. The effect of changing this price to the average or maximum value of the above-mentioned range on the LCOE is evaluated and presented in Figure 12. Using the maximum value of the range, the cost of energy doubles; however, the final LCOE still remains competitive with the grid electricity price.
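To see why the installed PV cost moves the LCOE so strongly, a simplified annualized-cost sketch is given below. This is our own toy model, not the optimization engine used in the study: the discount rate, project lifetime, and annual energy served are hypothetical placeholders.

```python
# Toy LCOE sensitivity to installed PV cost (hypothetical financial parameters).
def crf(i, n):
    """Capital recovery factor: converts a present cost into an equal annual payment."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def simple_lcoe(capex_usd, other_annual_usd, annual_kwh, i=0.06, lifetime=25):
    """Annualized capital plus other annual cash flows, per kWh of energy served."""
    return (capex_usd * crf(i, lifetime) + other_annual_usd) / annual_kwh

PV_KW = 19_248                 # from Table 1
ANNUAL_KWH = 30_000_000        # hypothetical energy served per year
for cost in (750, 1035, 1320): # min, mid, max installed PV cost, USD/kW [33]
    print(cost, "USD/kW ->", round(simple_lcoe(cost * PV_KW, 0.0, ANNUAL_KWH), 4), "USD/kWh")
```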
Conclusions

This paper analyzed the economic optimization of the renewable energy supply for a university campus building in Montréal, Canada, for microgrids with variable renewable fractions. Different sizing and configuration scenarios of renewable energy systems were evaluated, showing that a grid-connected scenario with a high renewable fraction and no battery storage is the best option under the economic objective of this research. The grid-backup scenario could reduce the cost of electricity by about 50 percent compared with the grid electricity price in Quebec. The impact of solar trackers was also analyzed, considering that the land requirements for high renewable fractions mostly exceed the available roof space. While horizontal and two-axis solar trackers were not economically feasible, vertical trackers improved the system performance in both technical and economic respects and also raised the fraction of renewable generation. The impact of reducing excess electricity sales to the grid, and of using solar trackers, on the area needed for PV panels was also discussed. The results show that decreasing the land required for the installation of PV panels is possible and, although it raises the final LCOE of the system, the system still remains economically feasible. This study can be extended in a number of directions. The land requirement could be treated as a constraint in the optimization model, and its sensitivity and impact on the final sizing and configuration of the system could be investigated further. Moreover, the feasibility of including other components, such as fuel cells or a small-scale biomass gasifier plant, in the microgrids could be explored. Also, the model is based on electricity supply and pricing in Quebec, with the presence of a centralized (and monopoly) firm, Hydro Quebec; as such, the configuration-sizing model could be extended to account for dynamic, oligopolistic, or decentralized pricing schemes. Finally, a multi-criteria decision-making approach could be adopted as an extension of the proposed model, collectively incorporating technical aspects such as capacity and area requirements, flexibility, and reliability; economic aspects such as initial and operating costs; and environmental aspects such as the reduction of carbon emissions, to design a robust building-integrated green energy system in urban areas [49].

Author Contributions: Modeling with software, data collection and preprocessing, and preparation and writing of the original draft were done by N.S.; review and editing, supervision, and project administration were done by F.N. and U.E. All authors were involved in the visualization, methodology, and conceptualization of this paper. All authors have read and agreed to the published version of the manuscript.
Enhancing the Solubility and Dissolution Performance of Safinamide Using Salts

Safinamide (SAF) is an anti-Parkinson's disease (PD) drug with selective monoamine oxidase type-B (MAO-B) inhibition activity. In 2017, SAF was approved by the U.S. Food and Drug Administration (FDA) as safinamide mesylate (SAF-MS, marketed as Xadago). Owing to its poor solubility in water, SAF is a Biopharmaceutics Classification System (BCS) Class II compound. In this study, four salts of safinamide (with hydrochloric acid (HCl), hydrobromic acid (HBr), and maleic acid (MA), the last in both anhydrous and monohydrate forms) were obtained and characterized using single crystal X-ray diffraction (SCXRD), powder X-ray diffraction (PXRD), differential scanning calorimetry (DSC), and thermogravimetry (TG). The solubility and dissolution rates of all salts were systematically studied in water and phosphate buffer (pH 6.86) solutions. The accelerated stability tests indicated that all salts, except SAF-MA, had good stability under high-humidity conditions.

Safinamide (SAF, Figure 1) is an anti-Parkinson's disease drug with selective monoamine oxidase type-B inhibition activity [22-26]. Safinamide is a Biopharmaceutics Classification System (BCS) Class II drug with low solubility and high permeability; it has been commercialized in its salt form, safinamide mesylate (marketed as Xadago), which was approved in 2017 by the U.S. Food and Drug Administration. To date, there have been no studies of the crystal structures and physicochemical properties of cocrystals and salts of safinamide. In the present study, a series of coformers comprising organic and inorganic acids were screened against safinamide (Supplementary Materials Table S1), and four novel salts of safinamide with hydrochloric acid (HCl), hydrobromic acid (HBr), and maleic acid (MA) were prepared and characterized, and their solubilities, IDRs, and stability were evaluated. The results indicated that all novel salts had improved solubilities and IDRs.

Instrumentation and Materials

Safinamide (SAF) was purchased from Shanghai BioChemPartner Co. Ltd (Shanghai, China). The coformers HCl, HBr, and MA were purchased from Aladdin Biochemical Technology Co. Ltd (Shanghai, China). All solvents used were purchased from Ghtech Co. Ltd (Guangzhou, China). Powder X-ray diffraction data were measured on a D8 ADVANCE diffractometer (Karlsruhe, Germany) with Cu Kα radiation (λ = 1.5418 Å) at 40 mA and 40 kV, with data collected over the range 3-60°. Differential scanning calorimetry (DSC) was performed on a Mettler-Toledo instrument at a heating rate of 10 °C/min over the range 25-250 °C under a nitrogen flow of 20 mL/min. Thermogravimetric (TG) analysis was performed on a Perkin-Elmer TGA 4000 (Shanghai, China) instrument at a heating rate of 10 °C/min over the range 25-500 °C under a nitrogen flow of 20 mL/min. All single crystal X-ray diffraction (SCXRD) data were collected on a Bruker Apex II CCD diffractometer (Karlsruhe, Germany) with Mo Kα radiation (λ = 0.71073 Å) at 293 K. The X-ray generator was operated at 50 kV and 30 mA. All structures were solved by direct methods and refined using SHELX-97 [27,28]. All atoms were placed at geometrically calculated positions, except for the O-H and N-H hydrogen atoms. The crystallographic parameters of all the salts are summarized in Table 1 and the hydrogen bonds are listed in Table 2.
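As a small worked example of how PXRD peak positions relate to lattice spacings under the Cu Kα radiation described above, the sketch below applies Bragg's law. The code is ours, not part of the paper; the peak list is borrowed from the SAF-MA-H2O marker peaks mentioned later in the text.

```python
# Sketch: convert PXRD peak positions (2-theta, Cu K-alpha) to d-spacings via
# Bragg's law, n*lambda = 2*d*sin(theta).
import math

WAVELENGTH = 1.5418  # Angstrom, Cu K-alpha as used on the D8 ADVANCE

def d_spacing(two_theta_deg, n=1):
    theta = math.radians(two_theta_deg / 2)
    return n * WAVELENGTH / (2 * math.sin(theta))

for peak in (7.96, 11.16, 15.76, 16.22):  # SAF-MA-H2O marker peaks from the text
    print(f"2theta = {peak:5.2f} deg -> d = {d_spacing(peak):.3f} A")
```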
SAF-HCl (1:1) salt: In a 25 mL round-bottom flask, SAF (40 mg) and HCl (30 μL) were dissolved in 4 mL of a water-saturated ethyl acetate solution and stirred constantly for 1 h. The flask was then kept at room temperature for slow evaporation, and needle-shaped single crystals were obtained after 7-10 days.

SAF-MA (1:1) salt: In a 25 mL round-bottom flask, SAF (20 mg) and MA (7.7 mg) were dissolved in 4 mL of methanol and stirred constantly for 1 h. The flask was then kept at room temperature for slow evaporation, and needle-shaped single crystals were obtained after 4-5 days.

Solubility and Intrinsic Dissolution Rate (IDR) Studies

Previous studies on the solubility of SAF and related salts have used the excess-powder dissolution method [29]. SAF absorbance was determined at a wavelength λmax = 225 nm in water and phosphate buffer (pH 6.86) solutions, and a calibration curve of absorbance intensity versus concentration was plotted for SAF and the related salts. For the solubility experiments, excess safinamide and its salts were placed in water and pH 6.86 phosphate buffer medium, respectively. In all solubility experiments, the solutions were stirred at 500 rpm with a magnetic stirrer at 37 °C, and stirring lasted for 24 h. During the process, subsamples (5 mL) were removed and filtered through membrane filters (0.22 μm), and the absorbance was then determined by UV spectroscopy. The undissolved residues were vacuum dried and further characterized by powder X-ray diffraction (PXRD). All tests were repeated six times.

The IDRs of SAF and its related salts were determined using a PJ-3 LAB Tablet Four-Usage Tester (Tianjin Guoming Medical Equipment Co. Ltd, Tianjin, China). A 150 mg sample (SAF or one of its salts) was compressed into a smooth-surfaced pellet of 1.3 cm² by applying a pressure of 1.8 MPa for 5 min. The pellets were immersed in 500 mL of dissolution medium, i.e., water or pH 6.86 phosphate buffer. The temperature was set at 37 °C and the medium was stirred with a rotating paddle at 100 rpm. Aliquots of the dissolution medium (5 mL) were withdrawn at 5, 10, 15, 20, 30, 45, 60, 90, and 120 min and replaced each time with the same volume of fresh medium. Samples were filtered through 0.22 μm membrane filters, and the absorbance was then determined by UV spectroscopy. All tests were performed in triplicate.

Stability Studies

The stability studies of SAF and its salts were performed under different humidity environments at 25 °C. The relative humidity conditions were 75% (saturated sodium chloride solution), 85% (saturated potassium chloride solution), and 97% (saturated potassium sulfate solution). The samples were kept in an incubator and assessed by PXRD after 14 days.

Powder X-ray Diffraction (PXRD) Analyses

The PXRD diffractograms of SAF, SAF-HCl, SAF-HBr, SAF-MA, and SAF-MA-H2O are shown in Figure 6. The experimental patterns were consistent with the patterns simulated by Mercury from the refined single crystal X-ray data.

Thermal Analyses

Differential scanning calorimetry and thermogravimetry are commonly employed to assess the thermodynamic stability of solid forms. The DSC and TG curves of SAF and all salts are shown in Figures 7 and S1.
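The concentration read-out step of the solubility and IDR experiments rests on a linear absorbance-concentration calibration at λmax = 225 nm. The sketch below reconstructs that step with invented calibration values; the actual standards and slopes are not reported here.

```python
# Hypothetical reconstruction of the UV calibration step (Beer-Lambert regime):
# fit absorbance vs. concentration, then read back subsample concentrations.
import numpy as np

cal_conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # standards, ug/mL (invented)
cal_abs = np.array([0.11, 0.21, 0.43, 0.86, 1.71])         # measured absorbance (invented)

slope, intercept = np.polyfit(cal_conc, cal_abs, 1)        # A = slope*c + intercept

def conc_from_abs(a):
    """Concentration of a filtered subsample from its absorbance at 225 nm."""
    return (a - intercept) / slope

print(f"{conc_from_abs(0.52):.1f} ug/mL")                  # one 5 mL subsample
```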
The DSC curves of SAF, SAF-HCl, SAF-HBr, and SAF-MA showed endothermic peaks at 136 °C, 231 °C, 228 °C, and 182 °C, respectively, attributed to the melting points of the solid forms. The DSC curve of SAF-MA-H2O exhibited a broad peak in the range of 50-110 °C, which may be attributed to the release of water from the crystal lattice; another endothermic peak at 182 °C was assigned to the melting point of SAF-MA-H2O. The TG curves showed that SAF, SAF-HCl, SAF-HBr, and SAF-MA began to decompose, with the formation of volatile compound(s), at 200 °C, 224 °C, 222 °C, and 177 °C, respectively. The TG curve of SAF-MA-H2O showed a weight loss of 4.38% in the range of 50-110 °C, attributed to water loss (calculated 4.13%). SAF-MA-H2O began to decompose at 177 °C, indicating that the observed melting was in fact melting with degradation, since the baseline was not recovered after the phase transition; continued TGA experiments confirmed the decomposition.

Solubility and Dissolution Studies

Solubility is an intrinsic property of a drug and affects its absorption in the body, whereas the dissolution rate is a dynamic parameter measured over a specific time. The solubility and IDR values at 37 °C for SAF and its salts are listed in Table 3. PXRD analyses of the undissolved residues at the end of the solubility experiments indicated that SAF, SAF-HCl, SAF-HBr, and SAF-MA-H2O were stable in water and phosphate buffer (pH 6.86) solution (Figures S2-S5), whereas the SAF-MA salt was completely transformed to SAF-MA-H2O (Figure S6). The solubility profiles for SAF and its salts are shown in Figure 8; the SAF-HCl, SAF-HBr, and SAF-MA-H2O salts exhibited a significant solubility advantage over SAF in both water and phosphate buffer (pH 6.86). The solubility values followed the order SAF-HCl > SAF-HBr > SAF-MA-H2O > SAF in both media. Interestingly, the IDR values followed the order SAF-HBr > SAF-HCl > SAF-MA > SAF-MA-H2O > SAF in water and phosphate buffer (pH 6.86) (Figures S7 and S8), indicating that SAF-HBr showed better dissolution dynamics than SAF-HCl, even though SAF-HCl had the higher solubility. The intrinsic dissolution profiles in water and phosphate buffer (pH 6.86) are shown in Figures 9 and 10. The results confirmed that the dissolution rates of the four salts produced in the present study were superior to that of SAF, with the SAF-HBr salt exhibiting the strongest dissolution enhancement in both media. However, the SAF-HCl salt had a better dissolution rate than SAF-HBr from 90 min to 120 min, which may be attributed to the solubility advantage of SAF-HCl.

Stability Studies

Drug stability is an important index for evaluating drug shelf life. Accelerated stability tests of SAF and all four salts were run for 14 days under three humidity conditions (25 °C/75%, 25 °C/85%, and 25 °C/97% RH). The PXRD analysis indicated that all salts except SAF-MA had good stability under the three humidity conditions (Figures S9-S12). The SAF-MA salt, however, was not stable at high humidity (Figure S13), and some of it transformed to SAF-MA-H2O, as evidenced by the characteristic peaks at 7.96, 11.16, 15.76, and 16.22°, which are attributed to the SAF-MA-H2O salt.
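The assignment of the 4.38% TG weight loss to lattice water can be checked against the 1:1:1 stoichiometry. In the sketch below, the molecular weights are standard literature values that we supply ourselves; they reproduce the paper's calculated figure of 4.13%.

```python
# Hydrate-stoichiometry check behind the TG assignment: theoretical water
# content of a 1:1:1 SAF-MA-H2O salt. Molecular weights are literature values.
MW_SAF = 302.35   # safinamide, C17H19FN2O2 (g/mol)
MW_MA = 116.07    # maleic acid, C4H4O4 (g/mol)
MW_H2O = 18.02    # water (g/mol)

water_fraction = MW_H2O / (MW_SAF + MW_MA + MW_H2O)
print(f"calculated water loss: {100 * water_fraction:.2f}%")  # ~4.13%, vs 4.38% observed
```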
Conclusions

In summary, four pharmaceutical salts of the anti-Parkinson's disease drug safinamide were successfully synthesized by slow solvent evaporation and characterized. The crystal structures and physicochemical properties of these salts were investigated. The solubility studies revealed that the SAF-HCl, SAF-HBr, and SAF-MA-H2O salts improved the solubility of this API in water and phosphate buffer (pH 6.86). All salt forms exhibited better dissolution rates than SAF. Furthermore, the SAF-MA salt completely transformed to SAF-MA-H2O in water and phosphate buffer (pH 6.86). The accelerated stability tests indicated that all salts, except SAF-MA, had good stability. Considering their solubility, dissolution rate, and stability advantages, the SAF-HCl, SAF-HBr, and SAF-MA-H2O salts have the potential to be developed into effective oral preparations.

Supplementary Materials: The following are available online at www.mdpi.com/2073-4352/10/11/989/s1, Figure S1: Thermogravimetry (TG) curves of SAF, SAF-HCl, SAF-HBr, SAF-MA, and SAF-MA-H2O salts, Figure S2: PXRD analysis of the residual materials of SAF after 24 h solubility in aqueous and pH 6.86 phosphate buffer medium, Figure S3: PXRD analysis of the residual materials of SAF-HCl after 24 h solubility in aqueous and pH 6.86 phosphate buffer medium, Figure S4: PXRD analysis of the residual materials of SAF-HBr after 24 h solubility in aqueous and pH 6.86 phosphate buffer medium, Figure S5: PXRD analysis of the residual materials of SAF-MA-H2O after 24 h solubility in aqueous and pH 6.86 phosphate buffer medium, Figure S6: PXRD analysis of the residual materials of SAF-MA after 24 h solubility in aqueous and pH 6.86 phosphate buffer medium, Figure S7: IDR profiles for SAF, SAF-HCl (20 min shown), SAF-HBr (20 min shown), SAF-MA and SAF-MA-H2O salts in water at 37 °C, Figure S8: IDR profiles for SAF, SAF-HCl (30 min shown), SAF-HBr (30 min shown), SAF-MA, and SAF-MA-H2O salts in phosphate buffer pH 6.86 at 37 °C, Figure S9: PXRD profiles for SAF after storage at 25 °C under various RH conditions for 14 days, Figure S10: PXRD profiles for SAF-HCl salt after storage at 25 °C under various RH conditions for 14 days, Figure S11: PXRD profiles for SAF-HBr salt after storage at 25 °C under various RH conditions for 14 days, Figure
Dental Implants with Different Neck Design: A Prospective Clinical Comparative Study with 2-Year Follow-Up

The present study was conducted to investigate whether a different implant neck design could affect survival rate and peri-implant tissue health in a cohort of disease-free partially edentulous patients in the molar-premolar region. The investigation was conducted on 122 dental implants inserted in 97 patients divided into two groups: Group A (rough wide-neck implants) vs. Group B (rough reduced-neck implants). All patients were monitored through clinical and radiological checkups. Survival rate, probing depth, and marginal bone loss were assessed at the 12- and 24-month follow-ups. Patients assigned to Group A received 59 implants, while patients assigned to Group B received 63. Dental implants were placed following a delayed loading protocol, and cemented metal-ceramic crowns were delivered to the patients. The survival rates of both groups were acceptable and similar at the two-year follow-up (96.61% vs. 96.82%). Probing depth and marginal bone loss tended to increase over time (follow-up: t1 = 12 vs. t2 = 24 months) in both groups of patients. Probing depth (p = 0.015) and bone loss (p = 0.001) were significantly lower in Group A (3.01 vs. 3.23 mm and 0.92 vs. 1.06 mm; Group A vs. Group B). Within the limitations of the present study, patients with rough wide-neck implants showed less marginal bone loss and lower probing depth than those with rough reduced-neck implants placed in the molar-premolar region. These results should be replicated in longer-term trials, as well as in comparisons of more collar configurations (e.g., straight vs. reduced vs. wide collars).

Introduction

The scientific debate on dental implant macro-design is a well-known topic in the field of implant dentistry. The ideal fixture design should bring together the most suitable and distinctive characteristics for implant osseointegration, such as the type of material (zirconium or titanium), body shape (cylindrical or conical), neck geometry (straight, reduced, or wide), thread depth, width, and pitch, as well as a tapered or non-tapered apical portion, body length, and diameter. Although there is no perfect implant design [1,2], nor a best surface treatment [3], scientific evidence has consistently demonstrated that different dental implant macro-designs affect long-term implant success [4,5] and can also accelerate the healing process, allowing implant therapy in populations of patients who are more prone to failure [6,7]. The implant collar, being the portion of the implant that connects the fixture with the oral cavity through a prosthetic device, is a very important feature related to the health of the peri-implant tissues. Several studies on implant neck design and marginal bone loss can be found in the literature, but the results are controversial. In vivo animal studies have reported greater crestal bone height and thickness of the tissue surrounding dental implants with triangular neck designs [8]; smaller crestal bone loss but similar peri-implant tissue thickness for extra-short implants with a narrow neck ring [9]; and greater bone loss for dental implants with micro-rings on the neck, as compared with open-thread implant collars [10].
Human model studies have reported improved biomechanical behavior in the stress/strain distribution pattern of dental implants with a divergent collar design [11], and no additional bone loss in non-submerged dental implants with a short smooth collar compared with a similar but longer collar design [12]. Other clinical findings suggest that specific implant neck designs might be suitable in anterior areas, where bone loss, even if acceptable, can lead to adverse aesthetic results [13,14]. The purpose of the present study was to compare peri-implant hard- and soft-tissue health conditions in partially edentulous patients who received the same dental implants but with two different implant neck designs, at a two-year follow-up. The null hypothesis was that there would be no differences in survival rate, probing depth, or marginal bone loss among patients who received dental implants with wide or reduced collar morphology.

Patients

Study participants were selected from patients who attended the Dental Department of IRCCS San Raffaele Hospital, Milan, Italy, asking for partial fixed implant-prosthetic rehabilitation. Recruitment occurred from February 2016 to November 2017, and the investigation was conducted following all the ethical regulations of the institution. Patients had to meet the following inclusion criteria: (1) hopeless teeth extracted at least four months prior to surgery in the molar/premolar region; (2) no previous dental implants already in place adjacent to the surgical site; (3) natural antagonistic teeth (composite resin restorations allowed); (4) absence of diabetes, periodontitis, bruxism, and smoking; (5) no chemotherapy or radiation therapy of the head and neck district, and no anti-resorptive drug therapy (i.e., bisphosphonates); and (6) neither mucosal lesions (lichen planus, epulis fissuratum) nor bone lesions (e.g., simple bone cysts or odontomas). Eligible edentulous areas of the maxilla or mandible were selected to receive 1 to a maximum of 3 dental implants. Participants were verbally informed about the purpose of the study but not told their group assignment, as they were randomly chosen to receive either a wide-neck implant (Group A) or a reduced-neck implant (Group B). Patients were assigned to conditions according to a computer-generated random list prescribing the use of the reduced vs. wide implant. Clinical measures (i.e., survival rate, peri-implant probing depth, and mean marginal bone loss) were taken at 12 and 24 months. Thus, the design amounted to a 2 (implant: wide vs. reduced) × 2 (time: 12- vs. 24-month follow-up) mixed factorial design, following the Consolidated Standards of Reporting Trials (CONSORT) guidelines, available as supplementary material to this manuscript and at http://www.consort-statement.org/. Written informed consent was signed before the start of the study; patients were allowed to leave the study at any time, without any consequence. The implant macrogeometry of the two collar designs used in the present study is shown in Figure 1.

Implant Surgery

The study was based on a single-blind design, with patients unaware of which type of implant neck design (wide or reduced) was used for their therapy. Local anesthesia was induced by local infiltration of lidocaine 20 mg/mL with 1:50,000 adrenaline (Ecocain, Molteni Dental, Firenze, Italy). A crestal horizontal incision was made, with buccal relieving incisions in the medial and distal portions of the main incision.
A full-thickness flap was raised, and dental implants were placed in the edentulous sites 0.5 mm subcrestally, with a minimum insertion torque of 35 Ncm. The cover screw was positioned, and a periosteal incision was performed to allow flap passivation, seeking primary intention healing of the wound. A vertical mattress suturing technique was used with a 4-0 coated braided absorbable suture (Vicryl, ETHICON, Johnson & Johnson, New Brunswick, NJ, USA). Sterile dry gauze compression was applied to the wound to control post-operative bleeding. Ice packages were delivered to the patients immediately after surgery, with instructions to apply cold to the surgical area for the following 24 h. A semi-liquid cold diet was recommended for the first 48 h. The at-home pharmacological therapy prescribed was amoxicillin 1 g every 12 h for six days and the non-steroidal anti-inflammatory drug ibuprofen 400 mg every 12 h for four days, post-operatively. All implants were loaded after a 4-month healing period, through a delayed loading protocol, with a composite resin temporary restoration followed by metal-ceramic cemented crowns. The definitive abutments used for both Groups A and B were the same and had a conical connection with Double Action Tight (DAT), a system that presents a conical interface between the abutment and the implant, plus one more conical interface between the screw and the abutment. Clinically, abutment screws were tightened to 25 Ncm using a dental torque wrench.

Parameters

Dental implant survival was defined as the fixture being osseointegrated, remaining in situ, and being capable of guaranteeing stable prosthetic support throughout the 2-year observation period following surgical placement. Peri-implant probing depth was estimated with a CP12 University of North Carolina color-coded periodontal probe (Hu Friedy, Chicago, IL, USA) at the mesial, distal, buccal, and lingual/palatal surfaces of the fixture. The distance in mm between the mucosal margin and the tip of the probe was taken as the pocket depth. To assess the marginal bone level, a line was traced parallel to the long axis of the implant, and the distance in mm between the crestal bone level at the margin of the implant neck and the top of the apical portion of the implant was measured.
Statistical Analysis

All analyses were run at the implant level. Peri-implant probing depth and marginal bone loss were submitted to separate 2 (follow-up: t1 = 12 vs. t2 = 24 months) × 2 (neck design: reduced vs. wide) multivariate analyses of variance (MANOVAs), in order to distinguish the effects of follow-up time and implant neck design and to assess any interactive effect(s) of the two factors. Mean values were complemented by standard errors of the mean (Se) and 95% confidence intervals (CI).

Results

A total of 97 patients (56 men and 41 women) aged between 33 and 75 years (mean 58.2 ± 6.22 years) were selected for the present study. None of them withdrew from the research, and 122 fixtures were placed in the molar/premolar region. The fixtures, made of titanium grade 4, had a standard length (≥10 mm) and a diameter of 3.8 and 4.2 mm for wide-neck implants and 4.2 and 5.0 mm for reduced-neck ones. All dental implants received the same subtraction surface procedure, the Zir-Ti full-surface treatment (zirconium oxide sand-blasted and acid-etched titanium). The apical portion was tapered, with 50° accentuated triangular threads and four longitudinal incisions to increase penetration ability and anti-rotation features. Fifty patients formed Group A (rough wide-neck design) and received 59 implants; Group B (rough reduced-neck design) comprised forty-eight patients, who received 63 implants. The two groups were compared at the one-year and two-year follow-ups. Survival rate, probing depth, and marginal bone loss were recorded through clinical and radiological checkups. Radiological records of different dental implants placed in Group A and B patients are shown in Figures 2 and 3.
The overall survival rate of the CSR dental implants at the two-year follow-up was 96.72% (four implant failures out of 122 implants placed). Both groups showed similar outcomes: at 12 months, the survival rate was 98.30% for Group A and 98.41% for Group B, while it decreased to 96.61% for Group A and 96.82% for Group B at the 24-month follow-up.
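The descriptive statistics reported above (survival rates from failure counts; means with Se and 95% CI) can be reproduced with a few lines of code. The sketch below is ours, with simulated probing-depth data standing in for the clinical measurements; it illustrates the summary-statistics step, not the MANOVA itself.

```python
# Sketch (ours, with simulated data) of the descriptive statistics used above.
import numpy as np
from scipy import stats

def survival_rate(n_implants, n_failures):
    """Survival as the percentage of implants still in situ."""
    return 100.0 * (n_implants - n_failures) / n_implants

# Two failures per group (implied by the 12- and 24-month rates) and four overall
# reproduce the reported percentages, up to rounding.
print(survival_rate(59, 2), survival_rate(63, 2), survival_rate(122, 4))
# -> ~96.61, ~96.83, ~96.72 (%)

def mean_se_ci(x, conf=0.95):
    """Mean, standard error of the mean, and t-based confidence interval."""
    x = np.asarray(x, dtype=float)
    m, se = x.mean(), stats.sem(x)
    half = se * stats.t.ppf((1 + conf) / 2, len(x) - 1)
    return m, se, (m - half, m + half)

# Simulated probing depths for the surviving Group A implants (illustrative only).
rng = np.random.default_rng(0)
print(mean_se_ci(rng.normal(3.01, 0.4, 57)))
```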
Discussion

Our study focused on dental implant macro-design, particularly on the clinical performance of the same type of fixture with two different rough collar designs in partially edentulous patients, using a delayed loading protocol. The examined parameters were peri-implant probing depth, marginal bone loss, and survival rate at the two-year follow-up. Both groups of patients showed an acceptable and almost identical implant survival rate. However, patients who received implants with a wide-neck design presented lower probing depth and less marginal bone loss than those with the reduced neck; thus, the null hypothesis of no differences between dental implants with different neck designs was partially rejected. From a clinical point of view, the differences in probing depth and marginal bone loss between Groups A and B were not relevant at the two-year follow-up. Since there were no signs of soft-tissue inflammation and no additional bone loss beyond the initial healing, according to the peri-implant health definition of Renvert et al. [15], it can be affirmed that both groups of patients showed healthy peri-implant tissue conditions. Implant therapy is a very helpful discipline when it comes to rehabilitating dental patients. Even if bone loss around oral implants is described as an unavoidable, physiological foreign-body reaction of bone against titanium [16-18], the key to success resides in the neutralization of risk factors at multiple levels: the patient level, the implant level, and the prosthetic level. Risk factors such as diabetes, periodontitis, bruxism, smoking, antidepressant intake, bone augmentation procedures, and head and neck radiotherapy [19-22] play a principal role in long-term implant outcomes. These factors are found at the patient level, meaning that they are poorly controllable over time, as they can worsen along with local or systemic health conditions; here, we must recall that the patients included in the present study were disease-free individuals. Other factors set at the prosthesis level also interfere with the success of implant therapy and should not be underestimated. According to Vazquez-Alvarez et al. [23], the distance between the implant platform and the horizontal component of the prosthesis has a significant influence on peri-implant bone loss and, to be adequate, should range from 3.3 to 6 mm. According to Lemos et al. [24], the retention system of implant-supported prostheses may lead to a different bone-loss pattern, as cement-retained restorations showed less marginal bone loss than screw-retained restorations, and the implant survival rate also favored cement-retained prostheses.
The restorations in the present study were cemented crowns for which a minimum distance of 3.5 mm was kept between the implant-abutment junction and the horizontal prosthetic component, and extreme attention was paid to removing any cement excess that could be found underneath them. The accuracy of the dental impression, whether taken traditionally or digitally, may lead to differences in the fit of the definitive restoration [25]; in our case, the prosthetic rehabilitations were performed using light- and putty-consistency polyvinylsiloxane materials. The type of prosthetic material itself has been described as capable of affecting the peri-implant tissues [26]. In this study, the decision for metal-ceramic crowns was supported by their appropriate biomechanical properties, as demonstrated in the literature [27-29]. Occlusal forces were exerted against natural antagonistic teeth in the molar/premolar region, to standardize the procedure and avoid contact with previously installed dental restorations of unknown or undefined material properties (e.g., a preexisting zirconium-based bridge in the antagonistic region). Finally, implant therapy risk factors are also found at the implant level, since the fixture macro-design can affect the osseointegration process, as reported by several authors [4,5,30-33]. Fixture micro- and macro-designs can be adequately selected before treatment, and with an ideal design concept, implant success would be more predictable. Starting from the type of material from which implants are manufactured, different osseointegration processes (amount of bone attachment to the surface and strength of the bone-surface interaction) may occur at the bone level. As recently reported by Taek-Ka et al. [34], qualitatively different osseointegration, with a stronger bone-surface interaction, was found for commercially pure titanium grade 2 implants compared with grade 4. Apart from titanium, zirconia has also been proposed as an alternative material for oral fixtures; at the moment, despite its optimal biocompatibility, no definitive conclusion is available on the clinical performance of such implants [35,36]. Returning to the implant collar, the way it is configured appears to be of particular interest: the maximum loading stress in bone is localized at the neck of the implant, as described by Anitua et al. [37] and Huang et al. [38]. Several studies are available in the literature, but no consensus has been reached on which collar design is more suitable for osseointegration. Our study suggests that rough wide-neck implants reduce bone loss over time, while acknowledging that a longer follow-up period is necessary to confirm these findings. This may be related to the platform-switching concept, which has been described as beneficial for osseointegration [39-43]. In fact, even when a platform-matched abutment is used in such implants, a minimal platform-switching effect still exists, because the neck of the implant is wider in diameter than the main body; reduced-neck implants, by contrast, are less likely to benefit from the platform-switching effect because of their narrower platform. According to Eshkol-Yogev et al. [44], round-neck implants may significantly increase primary stability compared with a triangular neck design. In a paper by Mendoca et al.
[45], bone remodeling was shown to benefit from a rough collar design in the mandible, but not in the maxilla, when compared with machined-collar-surface implants. In a review by Koodaryan et al. [46], rough-surfaced micro-threaded neck implants appeared to lose less bone than polished and rough-surfaced neck implants. The CSR implants placed in this study had roughened-surface collars with no microthreads at the cervical bone region. The presence or absence of microthreads, as well as the amount of surface roughness, may have an effect on bone preservation. Although an implant collar with microthreads can help maintain peri-implant bone against prosthetic loading [47], this study focused on conventional rough-surface dental implants, so as not to add confounding factors related to the numerous available surface topographies (e.g., smooth, polished neck vs. machined surface vs. microthread design). Furthermore, the CSR implants had a moderate degree of roughness, as no beneficial effect seems to be associated with increased surface roughness; indeed, a 20-year follow-up clinical trial by Donati et al. [48] reported no peri-implant bone preservation for implants with an increased surface roughness. Another relevant issue to consider is the implant-abutment connection system. The implants in the present study were provided with the DAT connection, consisting of a double conical interface and an internal hexagon for prosthetic repositioning; this type of connection is in line with recent findings in the literature. According to Caricasulo et al. [49], internal connections, particularly conical interfaces, seem to better maintain the crestal bone level around dental implants. As stated by Kim et al. [50], the transmission of the occlusal load from the restoration to the implant, and from the implant to the surrounding bone, is essential to stimulate osteoblast activity. In other words, to counteract the small but regular and continuous bone resorption, described as being around 1 mm in the first year and 0.2 mm per year thereafter [51], bone deposition must be encouraged. The concept of biocompatibility in implant-prosthetic rehabilitation can be considered the ultimate key to success: a proper fixture design, together with a correctly functioning implant-abutment connection and an optimally adapted prosthetic restoration, generates a self-protective mechanism that supports long-term survival. Considering the multiple and confounding aspects that affect implant failure, with risk factors set at the patient, implant, and prosthetic levels, it is important to affirm that bone loss is not solely determined by collar morphology. Further studies should be conducted on multiple, heterogeneous implant collar designs in different populations (e.g., diabetic vs. non-diabetic) and with different prosthetic restorations (e.g., screwed vs. cemented). Longer follow-up periods could highlight the enhancement of the clinical performance of dental implants with specific neck configurations.

Conclusions

Within the limitations of the present prospective clinical comparative study, peri-implant probing depth and marginal bone level around dental implants placed in edentulous sites in the molar/premolar region were affected by the different neck designs. Patients who received implants with a rough wide-neck design presented lower probing depth and less marginal bone loss than patients with rough reduced-neck implants.
Reduced-neck implants showed a tendency to lose more bone over time compared with wide-neck implants. However, the dental implants' survival rate was acceptable and satisfactory for both groups of patients and showed no differences at the two-year follow-up. Conflicts of Interest: The authors declare no conflicts of interest.
Classifying Single Trial Electroencephalogram Using Gaussian Smoothened Fast Hartley Transform for Brain Computer Interface during Motor Imagery : Problem statement: The Brain-Computer Interface (BCI) is an emerging research area which translates the brain signals for motor-related actions into computer-understandable signals by capturing the signal, processing the signal and classifying the motor imagery. This area of work finds various applications in neuroprosthetics. Mental activity leads to changes of electrophysiological signals like the Electroencephalogram (EEG) or Electrocorticogram (ECoG). Approach: The BCI system detects such changes and transforms them into a control signal which can, for example, be used to control an electric wheelchair. In this study the BCI paradigm is tested by our proposed Gaussian Smoothened Fast Hartley Transform (GS-FHT), which is used to compute the energies of the different motor imageries the subject imagines, after selecting the required frequencies using a band-pass filter. Results: We apply this procedure to BCI Competition dataset IVa, a publicly available EEG repository. Conclusion: The evaluation of the preprocessed signals showed that the extracted features were interpretable and can lead to high classification accuracy by various mining algorithms. INTRODUCTION An emerging technology is the Brain-Computer Interface (BCI), which enables paralyzed people to communicate with the external world. The changes in the brain signals are translated into operative control signals using an Electroencephalogram (EEG)-based BCI. In analyzing the brain signals, Motor Imagery (MI) is the state during which the depiction of a particular motor action is internally reactivated within the working memory without any overt motor output. This is governed by the principles of motor control (Sharma et al., 2006). Motor Imagery (MI) produces measurable potential changes in the EEG signals termed Event-Related Desynchronization/Synchronization (ERD/ERS) patterns. The time, frequency and spatial non-stationarity of these patterns result in high inter-subject and intra-subject variability in MI-based BCIs (MI-BCIs). One of the most effective algorithms for MI-BCI is based on the Common Spatial Pattern (CSP) technique (Guger et al., 2000). The success of CSP in BCI applications greatly depends on the proper selection of subject-specific frequency bands. In the literature, the common sparse spectral spatial pattern (CSSSP) (Dornhege et al., 2006), sub-band CSP (SBCSP) (Novi et al., 2007), filter bank CSP (FBCSP) (Ang et al., 2008) and adaptive FBCSP (Thomas et al., 2008) have been proposed for choosing the optimal frequency band automatically. The FBCSP (Ang et al., 2008) uses CSP features from a set of fixed band-pass filters and a feature selection algorithm based on mutual information to effectively choose the subject-specific features. This selection process selects features from the relevant frequency components. As the subject-specific frequency components carry distinct features, the proposed method uses a subject-specific FB selection before feature extraction to enhance the accuracy of the FBCSP framework. Classification algorithms are the core of a BCI, in which the EEG signals are mapped into the space of epochs. They are then classified using decision functions learned on the training set composed of labeled signals. The classification performance depends on the choice of the preprocessing techniques (Vautrin et al., 2009).
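For reference, the CSP family of methods cited above reduces to a generalized eigenvalue problem on class-wise covariance matrices. The present paper ultimately uses GS-FHT energies rather than CSP, so the following Python sketch is included only to make the baseline concrete; the function name, array shapes, and the choice of three filter pairs are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=3):
    """Common Spatial Pattern filters from two classes of EEG trials.

    X1, X2: arrays of shape (trials, channels, samples).
    Returns a (2*n_pairs, channels) spatial filter matrix.
    """
    def avg_cov(X):
        # Normalized per-trial spatial covariances, averaged over trials
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Generalized eigenvalue problem: C1 w = lambda (C1 + C2) w
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)
    # Keep the filters with the most discriminative variance ratios
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T
```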
A large training session is beneficial to lay down the decision rules that allow the classification of the user's intention (Birbaumer et al., 2008). The energy distribution over uniform frequency sub-bands given by the Fourier transform is an example of an a priori choice of signal features. In previous studies (Do Nascimento and Farina, 2008; Farina et al., 2007), the marginal of the Discrete Wavelet Transform (DWT) was proposed for feature extraction and the feature space was selected by optimizing the mother wavelet of the decomposition. The DWT (Birgale and Kokare, 2010) marginal reflects the average signal intensity over dyadic sub-bands. The dyadic decomposition is well suited to describe and discriminate signals whose discriminative information is mainly at low frequencies, since the frequency resolution is higher for low frequencies than for high frequencies. In this study we propose to measure the energy of specific motor imageries in the brain signal using our proposed Gaussian Smoothened Fast Hartley Transform (GS-FHT) along with the Chebyshev filter and data resampling. The resultant data was classified using IB1 and an Alternating Decision tree. This study is organized as follows. Section 2 describes the features of the data set used in this study. Sections 3 and 4 describe the preprocessing techniques and the classification algorithms analyzed in this study, respectively. Section 5 analyzes our results. MATERIALS AND METHODS Data set: We used the IVa dataset from the brain computer interface competition, provided by the Intelligent Data Analysis Group. This data set consists of recordings from five healthy subjects who sat in a chair with arms resting on armrests. Visual cues indicated for 3.5 s which of the following 3 motor imageries the subject should perform: (L) left hand, (R) right hand, (F) right foot. The presentation of target cues was intermitted by periods of random length, 1.75-2.25 s, in which the subject could relax. Given are continuous signals of 118 EEG channels and markers that indicate the time points of 280 cues for each of the 5 subjects (aa, al, av, aw, ay). Subject aa was used in our study. Preprocessing of EEG signals: The regular Hartley transform's kernel is based on the cosine-and-sine function, defined as: cas(νt) = cos(νt) + sin(νt). The Hartley transform, unlike the Fourier transform, is a real function. The Hartley transform pair can be defined as follows (Eqn. 1 and 2): H(ν) = (1/√(2π)) ∫ f(t) cas(νt) dt and f(t) = (1/√(2π)) ∫ H(ν) cas(νt) dν. A very important property of the Hartley transform is its symmetry (Eqn. 3): applying the transform twice returns the original function, H{H{f(t)}} = f(t). This has the advantage of using the same operation for computing the transform and its inverse. Another important feature is that the transform pairs are both real, which provides good computational advantages for the Hartley Transform (HT) over the Fourier Transform (FT). Many of the familiar complex relations in the Fourier domain have very similar counterparts in the Hartley domain. Let F(ω) and H(ν) be the FT and HT of a function f(t); then it is easy to verify the following (Eqn. 4 and 5): H(ν) = ℜ[F(ν)] − ℑ[F(ν)] and F(ν) = ε[H(ν)] − iO[H(ν)], where ℜ, ℑ, ε, O denote the real, imaginary, even and odd parts. Other properties hold in the Hartley domain as well (Eqn. 6). The discrete formulation of the HT, the DHT, is given by H(ν) = Σ_{n=0}^{N−1} x(n) cas(2πνn/N), which is applied to the discrete-time function x(n) with period N. The properties of the DHT are similar to those of the Discrete Fourier Transform (DFT), and the Fast Hartley Transform (FHT) (Bracewell, 1984) plays a role analogous to that of the familiar Fast Fourier Transform (FFT).
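To make the cas-kernel transform concrete, a small NumPy sketch of the DHT above follows; the direct O(N²) evaluation is used for clarity rather than a fast O(N log N) FHT, and the function name is illustrative:

```python
import numpy as np

def dht(x):
    """Direct DHT: H(v) = sum_{n=0}^{N-1} x(n) * cas(2*pi*v*n/N)."""
    N = len(x)
    n = np.arange(N)
    arg = 2.0 * np.pi * np.outer(n, n) / N   # phase for every (v, n) pair
    return (np.cos(arg) + np.sin(arg)) @ x   # cas kernel applied to the signal

# Self-inverse property (Eqn. 3): transforming twice recovers x up to a 1/N factor
x = np.random.randn(8)
assert np.allclose(dht(dht(x)) / len(x), x)

# Relation to the FT (Eqn. 4): H equals Re(F) - Im(F) for a real signal
F = np.fft.fft(x)
assert np.allclose(dht(x), F.real - F.imag)
```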
Some further properties of the DHT parallel those of the DFT. Obtaining energy values using the regular Fast Hartley Transform introduces artifacts associated with EEG signal measurement. To reduce the artifacts we propose a normalization of the obtained energy using Gaussian methods on the Fast Hartley Transform. The normalization benefits system performance by desensitizing the system to signal amplitude variability. The proposed model is defined in Eqn. 7, in which the Hartley spectrum over x = 0, ..., N − 1 is weighted by a Gaussian kernel before the energy is computed. Chebyshev filters are used to separate one band of frequencies from another. The EEG energy was computed in the 5-15 Hz region to primarily capture the beta waves in the EEG signal, which are closely linked to motor behavior and are generally attenuated during active movements. The Chebyshev filter was primarily used for its speed. Chebyshev filters are fast because they are carried out by recursion rather than convolution. The design of these filters is based on the z-transform. Classification algorithms: Data mining (Poovammal and Ponnavaikko, 2009) involves the extraction of non-trivial information from potentially large databases. A primary function of data mining is classification, with popular classification algorithms based on decision trees (Syurahbil et al., 2009), neural networks and support vector machines. Clustering can also be effectively used for unsupervised learning problems (Alfred et al., 2010). An Alternating Decision Tree (AD Tree) (Pfahringer et al., 2001) is a machine learning rule for classification and is a generalization of decision trees that has connections to boosting. It consists of decision nodes and prediction nodes. Decision nodes specify a predicate condition and prediction nodes contain a single number. AD trees always have prediction nodes as both root and leaves. An epoch is classified through an AD Tree by following all paths for which all decision nodes are true and summing any prediction nodes that are traversed. This is different from binary classification trees such as Classification and Regression Tree (CART) or C4.5, in which an instance follows only one path through the tree. The AD Tree algorithm's fundamental element is the rule, which consists of a precondition, a condition and two scores. A condition is a predicate of the form "attribute comparison value". The tree structure can be derived from a set of rules by noting the precondition that is used in each successive rule. The IB1 classifier is a simple instance-based learner that uses the class of the nearest k training instances for the class of the test instances. IB1 uses a weighted overlap of the feature values of a test instance and a memorized example. The metric combines a per-feature value distance metric with global feature weights that account for relative differences in the discriminative power of the features. RESULTS The results obtained are tabulated in Table 1 as the percentage of correctly classified instances. DISCUSSION An application was built using LabVIEW, and GS-FHT with the Chebyshev filter was implemented. Each epoch, spanning a time period of 3.5 s at a sampling rate of 100 Hz, was input to the application. The maximum and average energy were computed. Screenshots of the output are shown in Fig. 2 and 3. The energies were computed for 59 EEG electrodes for 280 instances of motor imagery cues of right hand and right foot. The energy values from each electrode were used as attributes for predicting the class label. A ten-fold cross-validation was used to train the algorithms.
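A Python sketch of the preprocessing pipeline described above (band-pass filtering followed by a Gaussian-weighted Hartley energy) is given below. The exact form of the Gaussian weighting in Eqn. 7 is an assumption on our part, and the filter order, ripple, and σ are illustrative parameters:

```python
import numpy as np
from scipy.signal import cheby1, filtfilt

FS = 100.0  # sampling rate (Hz) of the competition data

def band_energy(epoch, lo=5.0, hi=15.0, order=4, ripple=0.5, sigma=10.0):
    """Band-limit one EEG epoch and return a Gaussian-weighted Hartley energy.

    The Gaussian weighting of the spectrum is an assumed reading of the
    GS-FHT normalization, not the paper's exact Eqn. 7.
    """
    # Chebyshev type-I band-pass filter (recursive, hence fast), 5-15 Hz
    b, a = cheby1(order, ripple, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    x = filtfilt(b, a, epoch)

    # Hartley spectrum via the FFT identity H = Re(F) - Im(F)
    F = np.fft.fft(x)
    H = F.real - F.imag

    # Gaussian weighting of the squared spectrum (assumed form)
    k = np.arange(len(H))
    w = np.exp(-(k ** 2) / (2.0 * sigma ** 2))
    e = w * H ** 2
    return e.max(), e.mean()  # maximum and average energy, as in the paper
```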
CONCLUSION The energy of the preprocessed EEG epochs was extracted using a combination of the Fast Hartley transform and a Chebyshev filter. The data was resampled and classified using AD tree and IB1. The classification results so obtained were tabulated in the previous section. The proposed GS-FHT algorithm was implemented under the same setup and the result obtained is promising, keeping in mind the goal of reducing the preprocessing time. Further work needs to be done to improve the classification accuracy to bridge the man-machine gap by understanding the human semantic factor. Fuzzy logic could be an area of work to identify relevant feedback automatically and provide the necessary feedback to the classifier to improve classification.
Applications of physics-informed scientific machine learning in subsurface science: A survey Geosystems are geological formations altered by human activities such as fossil energy exploration, waste disposal, geologic carbon sequestration, and renewable energy generation. Geosystems also represent a critical link in the global water-energy nexus, providing both the source and buffering mechanisms for enabling societal adaptation to climate variability and change. The responsible use and exploration of geosystems are thus critical to geosystem governance, which in turn depends on efficient monitoring, risk assessment, and decision support tools for practical implementation. Fast advances in machine learning (ML) algorithms and novel sensing technologies in recent years have presented new opportunities for the subsurface research community to improve the efficacy and transparency of geosystem governance. Although recent studies have shown the great promise of scientific ML (SciML) models, questions remain on how to best leverage ML in the management of geosystems, which are typified by multiscale behavior, high dimensionality, and data resolution inhomogeneity. This survey provides a systematic review of the recent development and applications of domain-aware SciML in geosystem research, with an emphasis on how the accuracy, interpretability, scalability, defensibility, and generalization skill of ML approaches can be improved to better serve the geoscientific community. Introduction Compartments of the subsurface domain (e.g., vadose zone, aquifer, and oil and gas reservoirs) have provided essential services throughout human history. The increased exploration and utilization of the subsurface in recent decades, exemplified by the shale gas revolution, geological carbon sequestration, and enhanced geothermal energy recovery, have put the concerns over geosystem integrity and sustainability under unprecedented public scrutiny [Elsworth et al., 2016]. In this chapter, geosystems are defined broadly as the parts of the lithosphere that are modified directly or indirectly by human activities, including mining, waste disposal, groundwater pumping and energy production [National Research Council, 2013]. Anthropogenic-induced changes may be irreversible and have cascading social and environmental impacts (e.g., overpumping of aquifers may cause groundwater depletion, leading to land subsidence and water quality deterioration, and increasing the cost of food production). Thus, the sustainable management of geosystems calls for integrated site characterization, risk assessment, and monitoring data analytics that can lead to better understanding while promoting inclusive and equitable policy making. Importantly, system operators need to be able to explore and incorporate past experience and knowledge, gained either from the same site or other similar sites, to quickly identify optimal management actions, detect abnormal system signals, and prevent catastrophic events (e.g., induced seismicity and leakage) from happening.
Figure 1. The conventional subsurface modeling workflow consists of (from left to right) data collection and interpretation, geological modeling, fluid modeling, and system modeling. Machine learning has potential to automate all of these steps.
Fast advances in machine learning (ML) technologies have revolutionized predictive and prescriptive analytics in recent years.
Significant interest exists in harnessing this new generation of ML tools for Earth system studies [Reichstein et al., 2019;Sun and Scanlon, 2019;Bergen et al., 2019]. Unlike many other sectors, however, subsurface formations are often poorly characterized and scarcely monitored, thus relying extensively upon geological and geofluid modeling to generate spatially and temporally continuous "images" of the subsurface. A conventional workflow may consist of (a) geologic modeling, which seeks to provide a 3D representation of the geosystem under study by fusing qualitative interpretation of the geological structure, stratigraphy, and sedimentological facies, as well as quantitative data on geologic properties; and (b) fluid and geomechanical modeling, which describes the fluid flow, mass transport, and formation deformations through physics-based governing equations and the accompanying initial/boundary/forcing conditions. Once established, the workflow is used to generate 3D "images" of the subsurface processes for inference and/or prediction (Figure 1). We argue that subsurface modeling is inherently a semisupervised generative modeling process [Chapelle et al., 2009], in which joint data distributions are learned via limited observations. A main difference is that in field-scale subsurface modeling, observations are always sparse and only indirectly related to the quantities of interest. For example, the observed quantities may be well logs, but the data of interest are fluid saturation and pore pressure (state variables); in this case, well logs are first analyzed to infer stratigraphy and parameter distributions (e.g., porosity and permeability), which are then mapped to predictions of the state variables. Physics-based modeling serves two purposes throughout this process, namely, mapping the parameter space to the state space, and providing a spatiotemporal interpolation/extrapolation mechanism that is guided by first principles and prior knowledge. A main issue of this traditional workflow is that a significant amount of human processing time and computational time is involved, limiting its efficiency and potentially introducing significant latency and subjectivity during the process. On the other hand, machines are good at automating processes and learning proxy models after being trained. Tremendous opportunities now exist to integrate physics-based modeling and data-driven ML to improve the accuracy and efficacy of the geoanalytics workflow. To help the geoscientific community better embrace and incorporate various ML methods, this chapter provides a survey of recent physics-based ML developments in geosciences, with a focus on the following aspects: a taxonomy of the ML methods that have been used (Section 2), a brief introduction to some of the commonly used ML methods (Section 3), the types of use cases that are amenable to physics-based ML treatment (Section 4 and Table 2), and finally challenges and future directions of ML applications in geosystem modeling (Section 5). 2 Taxonomy of GeoML Methods Table 1 lists the main notations and symbols used in this survey. Physics-based ML in the geoscientific domain (hereafter GeoML) is a type of scientific machine learning (SciML) which, in turn, may be considered a special branch of AI/ML that develops applied ML algorithms for scientific discovery, with special emphases on domain knowledge integration, data interpretation, cross-domain learning, and process automation [Baker et al., 2019].
A main thrust behind the current SciML effort is to combine the strengths of physics-based models with data-driven ML methods for better transparency, interpretability, and explainability [Roscher et al., 2020]. Unless otherwise specified, we shall use the terms physics-based, process-based, and mechanistic models interchangeably in this survey. We provide three taxonomies of GeoML methods based on their design and use. First, existing GeoML methods may be classified according to the widely used ML taxonomy into unsupervised, supervised, and reinforcement learning methods. In Figure 2, this taxonomy is used to group the existing GeoML applications, an exposition of which will be deferred to Section 4. Another commonly used taxonomy is generative models vs. discriminative models. Generative models seek to learn the probability distributions/patterns of individual classes in a dataset, while discriminative models try to predict the boundary between different classes in a dataset [Goodfellow et al., 2016]. Thus, in a supervised learning setting and for given training samples of input variables X and label y, {(x_i, y_i)}, a generative model learns the joint distribution p(X, y) so that new realizations can be generated, while a discriminative model learns to predict the conditional distribution p(y | X) directly. Generative models can be used to learn from both labeled and unlabeled data in supervised, unsupervised, or semi-supervised tasks, while discriminative models cannot learn from unlabeled data, but tend to outperform their generative counterparts in supervised tasks [Chapelle et al., 2009]. In GeoML, generative models are particularly appealing because of the strong need for understanding causal relationships, and because the same underlying Bayesian frameworks are also employed in many physics-based frameworks. In the classic Bayesian inversion framework, for example, the parameter inference problem may be cast as p(θ | d_obs, I) ∝ p(d_obs | θ) p(θ | I) [Sun and Sun, 2015], where the posterior distribution of model parameters θ is inferred from the state observations d_obs and prior knowledge I. Many physics-based ML applications exploit the use of ML models for estimating the same distributions, but by fusing domain knowledge to form priors and constraints. On the basis of how physical laws and domain knowledge are incorporated, existing GeoML methods fall into pre-training, physics-informed training, residual modeling, and hybrid learning methods. In pre-training methods, which are widely used in ML-based surrogate modeling, prior knowledge and process-based models are mainly used to generate training samples for ML from limited real information. The physics is implicitly embedded in the training samples. After the samples are generated, an ML method is then used to learn the unknown mappings between parameters and model states through solving a regression problem. In physics-informed training, physical laws and constraints are utilized explicitly to formulate the learning problem, such that ML models can reach a fidelity on par with PDE-based solvers. Residual modeling methods use ML as a fine-tuning, post-processing step, under the assumption that the process-based models reasonably capture the large-scale "picture" but have certain missing processes, due to either conceptual errors or unresolved/unmodeled processes (e.g., subgrid processes).
ML models are then trained to learn the mapping between model inputs and error residuals (e.g., between model outputs and observations), which are used to correct the effect of missing processes on model outputs [Sun et al., 2019a;Reichstein et al., 2019]. A main caveat of the existing ML paradigm is that models are trained offline using historical data, and then deployed in operations, in the hope that the future environment stays more or less under the same conditions. This is referred to as the closed-world assumption, namely, the classes of all new test instances have already been seen during training [Chen and Liu, 2018]. In some situations, new classes of data may appear or the environment itself may drift over time; in other situations, it is desirable to adapt a model trained on one task to other similar tasks without training separate models. Hybrid learning methods focus on continual or lifelong learning, in which ML models and process-based models co-evolve to reflect new information as it becomes available. The past knowledge is accumulated and then used to help future learning [Chen and Liu, 2018;Parisi et al., 2019]. Hybrid learning methods thus have elements of multitask learning, transfer learning, and reinforcement learning from the ML side, and data assimilation from the process modeling side. Understandably, hybrid learning models are more difficult to formulate and train, but they represent important steps toward "real" AI, in which agents learn to act reasonably well not in a single domain but in many domains [Bostrom and Yudkowsky, 2014]. Commonly Used GeoML Algorithms For completeness, we briefly review the common algorithms and application frameworks behind the GeoML use cases to be covered in Section 4 (see also Table 2). Most of the categories mentioned herein are not exclusive. Autoencoders, generative adversarial networks, and graph neural networks are high-level ML algorithmic categories that include many variants, while spatial-temporal methods and physics-informed methods are application frameworks that may be implemented using any of the ML methods. Autoencoders A main premise of modern AI/ML is representation learning, which seeks to extract the low-dimensional features or to disentangle the underlying factors of variation from learning subjects that can support generic and effective learning [Bengio et al., 2013]. An important class of methods for representation learning is autoencoders, which are unsupervised learning methods that encode unlabeled input data into low-dimensional embeddings (latent space variables) and then reconstruct the original data from the encoded information. For input data x, the encoder maps it to a latent space vector, z = f(x; W_e), while the decoder reconstructs the input data, x̂ = g(z; W_d), where W_e and W_d are weight matrices of the encoder and decoder, respectively. The standard autoencoder is trained by minimizing the reconstruction error, which implies that a good representation should preserve the information of the input data well. Once trained, the autoencoder may serve as a generative model (prior) for generating new samples, for clustering, or for dimension reduction. Variants of autoencoders include the variational autoencoder (VAE) and the restricted Boltzmann machine (RBM) [Goodfellow et al., 2016;Doersch, 2016].
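A minimal dense autoencoder sketch in PyTorch follows; the layer sizes and the flattened 64 × 64 input are illustrative assumptions (geoscience applications would typically use convolutional encoders/decoders instead):

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Minimal dense autoencoder: z = f(x; W_e), x_hat = g(z; W_d)."""
    def __init__(self, n_in, n_latent):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_in))

    def forward(self, x):
        z = self.encoder(x)          # low-dimensional embedding
        return self.decoder(z), z    # reconstruction and latent vector

model = AutoEncoder(n_in=64 * 64, n_latent=32)   # e.g., a flattened 64x64 property field
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(16, 64 * 64)                      # stand-in batch of (synthetic) fields
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)          # reconstruction error
opt.zero_grad(); loss.backward(); opt.step()
```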
It is worth pointing out that the notion of representation learning has long been investigated in the context of parameterization and inversion of physics-based models, although the primary goal there is to make the inversion process less ill-posed by reducing the number of unknowns. In geosciences, autoencoders are closely related to stochastic geological modeling, which is the main subject of study in geostatistics [Journel and Huijbregts, 1978]. In stochastic geological modeling, the real geological formation is considered one realization of a generative stochastic process that can only be "anchored" through a limited set of measurements. The classic principal component analysis (PCA) may be used to encode statistically stationary random processes, while other algorithms, such as the multipoint statistics simulators, have been commonly used to simulate more complex depositional environments [Caers and Zhang, 2004;Mariethoz et al., 2010]. Autoencoders, when implemented using deep convolutional neural nets (CNNs) [Chan and Elsheikh, 2017;Yoon et al., 2019], provide a more flexible tool for parameterizing complex geological processes and for generating (synthetic) training samples for downstream tasks, such as surrogate modeling. In this sense, autoencoders fall in the category of pre-training methods. Generative adversarial networks The generative adversarial networks (GANs), introduced originally in [Goodfellow et al., 2014], have spurred strong interest in geosciences. The vanilla GAN [Goodfellow et al., 2014] trains a generative model (or generator) and a discriminator model (discriminator) in a game-theoretic setting. The generator x̂ = G(z; W_g) learns the data distribution and generates fake samples, while the discriminator D(x; W_d) predicts the probability of fake samples being from the true data distribution, where W_g and W_d are trainable weight matrices. A minimax optimization problem is formulated, min_G max_D E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))], in which the generator is trained to minimize the reconstruction loss to generate more genuine samples, while the discriminator is trained to maximize its probability of distinguishing true samples from the fake samples, where p_data(·) and p_z(·) are the data and latent variable distributions. In practice, the generator and discriminator are trained in alternating loops; the weights of one model are frozen while the weights of the other are updated. It has been shown that if the discriminator is trained to optimality before each generator update, minimizing the loss function is equivalent to minimizing the Jensen-Shannon divergence between the data distribution p_data(·) and the generator distribution p_x̂(·) [Goodfellow et al., 2016]. Many variants of the vanilla GAN have been proposed, such as the deep convolutional GAN (DCGAN) [Radford et al., 2015], the super-resolution GAN (SRGAN) [Ledig et al., 2017], CycleGAN [Zhu et al., 2017], StarGAN [Choi et al., 2018], and the missing data imputation GAN (GAIN) [Yoon et al., 2018]. Recent surveys of GANs are provided in [Creswell et al., 2018;Pan et al., 2019]. So far, GANs have demonstrated superb performance in generating photo-realistic images and learning cross-domain mappings.
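The alternating minimax training described above can be sketched as follows; the toy two-dimensional "data" distribution, the network sizes, and the use of the common non-saturating generator loss are illustrative choices:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # generator G(z)
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # discriminator logits
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    x = torch.randn(64, 2) * 0.5 + 1.0        # stand-in "real" samples from p_data
    z = torch.randn(64, 16)                   # latent draws from p_z

    # Discriminator update: maximize log D(x) + log(1 - D(G(z)))
    opt_d.zero_grad()
    loss_d = bce(D(x), torch.ones(64, 1)) + bce(D(G(z).detach()), torch.zeros(64, 1))
    loss_d.backward(); opt_d.step()

    # Generator update (D's weights frozen): the non-saturating form
    # maximizes log D(G(z)) instead of minimizing log(1 - D(G(z)))
    opt_g.zero_grad()
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    loss_g.backward(); opt_g.step()
```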
Training of GANs, however, is known to be challenging due to (a) larger-size networks, especially those involving a long chain of CNN blocks and multiple pairs of generators/discriminators, (b) the nonconvex cost functions used in GAN formulations, (c) the diminished gradient issue, namely, the discriminator is trained so well early on in training that the generator's gradient vanishes and learns nothing, and (d) the "mode collapse" problem, namely, the generator only returns samples from a small number of modes of a multimodal distribution [Goodfellow et al., 2016]. In the literature, different strategies have been proposed to alleviate some of the aforementioned issues. For example, to adopt and modify deep CNNs for improving training stability, the DCGAN architecture [Radford et al., 2015] was proposed by including strided convolutions and ReLU/LeakyReLU activation functions in the convolution layers. To ameliorate stability issues with the GAN loss function, the Wasserstein distance was introduced in the Wasserstein GAN (WGAN) [Gulrajani et al., 2017] to measure the distance between generated and real data samples, which was then used as the training criterion in a critic model. To remedy the mode collapse problem, the multimodal GAN [Huang et al., 2018] was introduced, in which the latent space is assumed to consist of domain-invariant (called content code) and domain-specific (called style code) parts; the former is shared by all domains, while the latter is specific to one domain. The multimodal GAN is trained by minimizing the image space reconstruction loss and the latent space reconstruction loss. In the context of continual learning, the memory replay GAN was proposed to learn from a sequence of disjoint tasks. Like the autoencoders, the GAN represents a general formulation for supervised and semi-supervised learning; thus its implementation is not restricted to certain types of network models. Graph neural networks ML methods originating from computer vision typically assume the data has a Euclidean structure (i.e., grid-like) or can be reasonably made so through resampling. In many geoscience applications, data naturally exhibit a non-Euclidean structure, such as the data related to natural fracture networks and environmental sensor networks, or the point cloud data obtained by lidar. These unstructured data types are naturally represented using graphs. A graph G consists of a set of nodes V and edges E, G = (V, E). Each node v_i ∈ V is characterized by its attributes and has a varying number of neighbors, while each edge e_ij ∈ E denotes a link from node v_j to v_i. The binary adjacency matrix A is used to define graph connections, with elements a_ij = 1 if there is an edge between i and j and a_ij = 0 otherwise. Various graph neural networks (GNNs) have been introduced in recent years to perform ML tasks on graphs, a problem known as "geometric learning" [Bronstein et al., 2017]. The success (e.g., efficiency on deep learning problems) of the CNN is owed to several nice properties in its design, such as shift-invariance and local connectivity, which lead to shared parameters and scalable networks [Goodfellow et al., 2016]. A significant endeavor in GNN development has been related to extending these CNN properties to graphs using various clever tricks. Graph convolutional neural networks (GCNNs) extend CNN operations to non-Euclidean domains and consist of two main classes of methods, the spectral-based methods and the spatial-based methods.
In spectral-based methods, the convolution operation is defined in the spectral domain through the normalized graph Laplacian, L, defined as L = I − D^{−1/2} A D^{−1/2} = U Λ U^T, where A is the adjacency matrix, I is the identity matrix, D is the node degree matrix (i.e., d_ii = Σ_j a_ij), and U and Λ are the eigenvector matrix and diagonal eigenvalue matrix of the normalized Laplacian. Utilizing U and Λ, the spectral graph convolution on input x is defined by a graph filter g_θ [Bruna et al., 2013] as g_θ * x = U g_θ(Λ) U^T x (Eqn. 4), where * denotes the graph convolution operator and the graph filter g_θ(Λ) is parameterized by the learnable parameters θ_ij. The main limitations of the original graph filter given in Eqn. 4 are that it is non-local, only applicable to a single domain (i.e., fixed graph topology), and involves the computationally expensive eigendecomposition (O(N³) time complexity) [Bronstein et al., 2017;Wu et al., 2020]. Later works proposed to make the graph filter less computationally demanding by approximating g_θ(Λ) using the Chebyshev polynomials of Λ, which led to ChebNet [Defferrard et al., 2016] and the Graph Convolutional Net (GCN) [Kipf and Welling, 2016]. It can be shown that these newer constructs lead to spatially localized filters, such that the number of learnable parameters per layer does not depend upon the size of the input [Bronstein et al., 2017]. In the case of GCN, for example, the following graph convolution operator was proposed [Kipf and Welling, 2016]: g_θ * x = θ (I + D^{−1/2} A D^{−1/2}) x = θ D̃^{−1/2} Ã D̃^{−1/2} x (Eqn. 5), where θ is a set of filter parameters, and a renormalization trick was applied in the second equality in Eqn. 5 to improve the numerical stability, with Ã = A + I and d̃_ii = Σ_j ã_ij. The above graph convolutional operation can be generalized to multichannel inputs X ∈ R^{N×C}, such that the output is given by D̃^{−1/2} Ã D̃^{−1/2} X Θ, where Θ is a matrix of filter parameters. In spatial-based methods, graph convolution is defined directly over a node's local neighborhood, instead of via the eigendecomposition of the Laplacian. In the diffusion CNN (DCNN), information propagation on a graph is modeled as a diffusion process that goes from one node to its neighboring node according to a transition probability [Atwood and Towsley, 2016]. The graph convolution in DCNN is defined as H_k = f(W_k ⊙ P^k X), where X is the input matrix, P = D^{−1} A is the transition probability matrix, k defines the power of P, K is the total number of power terms used (i.e., the number of hops or diffusion steps) in the hidden state extraction, and W and H are the weight and hidden state matrices, respectively. The final output is obtained by concatenating the hidden state matrices and then passing them to an output layer. In GraphSAGE [Hamilton et al., 2017] and the message passing neural network (MPNN) [Gilmer et al., 2017], a set of aggregator functions are trained to learn to aggregate feature information from a node's local neighborhood. In general, these networks consist of three stages: message passing, node update, and readout. That is, for each node v and at the k-th iteration, the aggregation function f_k combines the node's hidden representation with those from its local neighbors N(v), h_{N(v)}^k = f_k({h_u^{k−1} : u ∈ N(v) ∪ {v}}), which is then passed to update functions to generate the hidden states for the next iteration, h_v^k = update(h_v^{k−1}, h_{N(v)}^k), where h denotes a hidden-state vector. Finally, in the readout stage, a fixed-length feature vector is computed by a readout function and then passed to a fully connected layer to generate the outputs.
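A NumPy sketch of one multichannel GCN propagation step (Eqn. 5 with the renormalization trick) follows; the ReLU nonlinearity and the tiny chain graph are illustrative:

```python
import numpy as np

def gcn_layer(A, X, Theta):
    """One GCN step: ReLU( D^{-1/2} (A + I) D^{-1/2} X Theta ) (Kipf & Welling)."""
    A_tilde = A + np.eye(A.shape[0])            # renormalization trick: add self-loops
    d = A_tilde.sum(axis=1)                     # node degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ X @ Theta, 0.0)

# Tiny 4-node chain graph with 3 input channels and 2 output channels
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 3)
Theta = np.random.rand(3, 2)
H = gcn_layer(A, X, Theta)   # (4, 2) hidden node features
```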
In general, spatial-based methods are more scalable and efficient than the spectral methods because they do not involve the expensive matrix factorization, the computation can be performed in mini-batches and, more importantly, their local nature indicates that the weights can be shared across nodes and structures. Counterpart implementations of all well-established ML architectures (e.g., GAN, autoencoder, and RNN) can now be found in GNNs. Recent reviews on GNNs can be found in [Zhou et al., 2018;Wu et al., 2020]. For subsurface applications, a main challenge is related to graph formulation, namely, given a set of spatially discrete data, how to connect the nodes. Common measures calculate certain pairwise distances (e.g., correlation, Euclidean, city block), while other methods incorporate the underlying physics (e.g., discrete fracture networks [Hyman et al., 2018]) to identify the graphs. Spatiotemporal ML methods In this and the next subsection, we review two methodology categories that use one or more of the aforementioned methods as building blocks. Spatiotemporal processes are omnipresent in geosystems and represent an important area of study [Kyriakidis and Journel, 1999]. For gridded, image-like data, the problem bears similarity to the video processing problem in computer vision. In general, two classes of ML methods have been applied, those involving only CNN blocks and those combining CNNs with recurrent neural nets (RNNs). CNN-only models can be used to model temporal dependencies by stacking the most recent sequence of images/frames in a video stream. In the simplest case, the channel dimension of the input tensor is used to hold the sequence of images and CNN kernels are used to extract features as in a typical CNN-based model (i.e., 2D kernels for a stack of 2D images). In other methods, for example, C3D [Tran et al., 2015] and the temporal shift module (TSM), an extra dimension is added to the tensor variable to help extract temporal patterns. C3D uses 3D CNN operators, which generally leads to much larger networks. TSM was designed to shift part of the channels along the temporal dimension, thus facilitating information extraction from neighboring frames while adding almost no extra computational cost compared to the 2D CNN methods. The hybrid methods use a combination of RNNs with CNNs, using the former to learn long-range temporal dependencies and the latter to extract hierarchical features from each image. The convolutional long short-term memory (ConvLSTM) network [Shi et al., 2015] represents one of the most well-known methods under this category. In ConvLSTM, features from convolution operations are embedded in the LSTM cells, as described by the following series of operations [Shi et al., 2015]: i_t = σ(W_xi * X_t + W_hi * H_{t−1} + W_ci ∘ C_{t−1} + b_i), f_t = σ(W_xf * X_t + W_hf * H_{t−1} + W_cf ∘ C_{t−1} + b_f), C_t = f_t ∘ C_{t−1} + i_t ∘ tanh(W_xc * X_t + W_hc * H_{t−1} + b_c), o_t = σ(W_xo * X_t + W_ho * H_{t−1} + W_co ∘ C_t + b_o), and H_t = o_t ∘ tanh(C_t), where X, H, and C are the input, hidden, and cell output matrices, W and b represent learnable weights and biases, σ and tanh denote activation functions, and i, f, and o are the input, forget, and output gates. The symbols * and ∘ denote the convolution operator and the Hadamard (element-wise) product, respectively. Because of their complexity and size, ConvLSTM networks may be more difficult to train than the CNN-only methods. Geological processes are known to exhibit certain correlations in space and time. The convolution operations act like a local filter and are not good at catching large-scale features, which is especially the case for relatively shallow CNN-based models.
In recent years, attention mechanisms have been introduced to better capture the long-range dependencies in space and time, and to give higher weight to the most relevant information [Vaswani et al., 2017]. In the location-based attention mechanism, for example, input feature maps are transformed and used to calculate a location-dependent attention map [Zhang et al., 2019], α_ji = exp(F_i · G_j) / Σ_{i=1}^{N} exp(F_i · G_j), where X ∈ R^{C×H×W} are the input feature maps, C, H, and W are the channel and spatial dimensions of the input feature map, N = HW is the total number of spatial locations in the feature map, F and G are two transformed feature maps obtained by passing the inputs to separate 1 × 1 convolutional layers, and the attention weights α_ji measure the influence of remote location i on region j. The resulting attention map is then concatenated with the input feature maps to give the final outputs from the attention block. A similar attention mechanism may be defined for the temporal dimension to capture the temporal correlation [Zhu et al., 2018]. The attention-based ML models thus offer attractive alternatives to many parametric geostatistical methods for 4D geoprocess modeling. For unstructured data, GNNs can be used to learn spatial and temporal relationships. For example, the spatiotemporal graph convolution network (ST-GCN) and the spatiotemporal multi-graph convolution network [Geng et al., 2019] were used for skeleton-based action recognition and for ride-share forecasting, respectively. The spatial-based GNNs may also be suitable for the missing data problem, where the neighborhood information can be used to estimate missing nodal values. For problems that can be treated using GNNs, the resulting learnable parameter sizes are generally much smaller. Physics-informed methods As mentioned in the last subsection, all GeoML applications that incorporate certain domain knowledge or use process-based models in the workflow may be considered physics-informed. Recently, a number of SciML frameworks have been developed to incorporate the governing equations in a more principled way. In general, these methods may be divided into finite-dimensional mapping methods, neural solver methods, and neural integral operator methods [Li et al., 2020]. All these methods seek to either parameterize the solution of a PDE, u = M(a), where M is the model operator, u ∈ U is the solution and a ∈ A are the parameters, or to approximate the model operator itself. Finite-dimensional methods learn mappings between finite-dimensional Euclidean spaces (e.g., the discretized parameter space and solution space), which is similar to many use cases in computer vision. The main difference is that additional loss terms related to the PDE being solved are incorporated. For example, Zhu et al. [2019] considered the steady-state flow problem in porous media (an elliptic PDE) and used the variational form of the PDE residual as a loss term, in addition to the data mismatch term. Many of the existing methods (e.g., U-Net) from computer vision can be directly applied in these methods. A main limitation of the finite-dimensional methods is that they are grid-specific (without resampling) and problem-specific.
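One way to make the PDE constraint concrete is to penalize the equation residual at collocation points, with derivatives evaluated via automatic differentiation (the neural solver approach discussed next). A minimal PyTorch sketch for a 1-D Poisson problem −u''(x) = π² sin(πx) with u(0) = u(1) = 0 follows; this toy problem is a simplified stand-in for steady-state Darcy flow, and the network size and sampling are illustrative:

```python
import math
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x):
    """Residual of -u''(x) = pi^2 sin(pi x), via automatic differentiation."""
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return -d2u - math.pi ** 2 * torch.sin(math.pi * x)

xb = torch.tensor([[0.0], [1.0]])                  # boundary points, u(0) = u(1) = 0
for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)     # collocation points in (0, 1)
    loss = pde_residual(x).pow(2).mean() + net(xb).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# The exact solution is u(x) = sin(pi x), against which net(x) can be checked.
```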
In neural solver methods, such as physics-informed neural networks (PINNs) [Raissi et al., 2019;Zhu et al., 2019;Lu et al., 2019], the universal differential equation (UDE) [Rackauckas et al., 2020], and PDE-Net [Long et al., 2019], neural networks (differentiable functions by design) are used to approximate the solution, and the PDE residual is derived for the given PDE by leveraging automatic differentiation and neural symbolic computing. In general, these approaches assume the PDE forms/classes are known a priori, although some approaches (e.g., PDE-Net) can help to identify whether certain terms are present in a PDE or not under relatively simple settings. The neural integral operator methods [Fan et al., 2019;Winovich et al., 2019;Li et al., 2020] parameterize the differential operators (e.g., Green's function, Fourier transform) resulting from the solution of certain types of PDEs. These methods are mesh-independent and learn mappings between infinite-dimensional spaces. In other words, a trained model has the "super-resolution" capability to map from a low-dimensional grid to a high-dimensional grid. All the physics-informed methods may provide accurate proxy models. The advantages of these differential-equation-oriented methods are (a) smooth solutions: by enforcing derivatives as constraints, they effectively impose smoothness on the solution; (b) extrapolation: by forcing the NN to replicate the underlying differential equations, these methods also inherit the extrapolation capability of physics-based models, which is lacking in purely data-driven methods; (c) closure approximations: by parameterizing the closure terms using hidden neurons, they allow the unresolved processes to be represented and "discovered" in the solution process; and (d) lower data requirements, which come as a result of the extensive constraints used in those frameworks. On the other hand, the starting point of many methods is differential equations, which means extensive knowledge and analysis are still required to select and formulate the equations, a process that is well known for its equifinality issue [Beven and Freer, 2001]. Future work is still required to make the physics-informed methods less PDE-class-specific and able to handle flexible initial/boundary conditions and forcing terms. Applications The number of GeoML publications has grown exponentially in recent years. Here we review a selected set of recent GeoML applications according to the taxonomy discussed in Section 2, as plotted in Fig. 2. The list of publications is also summarized in Table 2, according to their ML model class, model type, use case, and the way physics was incorporated. In making the list, we mainly focused on reservoir-scale studies. Reviews of porous flow ML applications in other disciplines (e.g., material science and chemistry) can be found in [Alber et al., 2019;Brunton et al., 2020]. Early works adopting deep learning methods explored their strong generative modeling capability for geologic simulation. In [Chan and Elsheikh, 2017], WGAN was used to generate binary facies realizations (bimodal). Training samples were generated using a training image, which has long been used in multipoint geostatistics as a geology-informed guide for constraining image styles [Strebelle, 2002]. WGAN was trained to learn the latent space encoding of the bimodal facies field.
The authors showed that WGAN achieved much better performance than PCA, which is a linear feature extractor that works best on single-modal, Gaussian-like distributions. In a related study, a convolutional encoder was trained to reconstruct a complex geologic model from its PCA parameterization. A key idea there was to learn the mismatch between the naïve PCA representations and the original high-fidelity counterparts such that new high-resolution realizations can be generated using latent variables obtained from PCA. Recently, DCGANs have been applied to generate drainage networks by transforming the training network images into directional information of flow on each node of the network [Kim et al., 2020]. The generated networks have been dramatically improved by optimal decomposition of the drainage connectivity information into multiple binary layers with the connectivity constraints preserved. A large number of GeoML applications fall under surrogate modeling, which is not surprising given that the geocommunity has long been utilizing surrogate models in model-based optimization, sensitivity analysis, and uncertainty quantification [Forrester and Keane, 2009;Razavi et al., 2012]. Because reservoir models are 2D or 3D distributed models, many ML studies entailed some type of end-to-end, cross-domain learning architecture, which translates an image of input parameters to state variable maps. In general, these methods utilize physics-based porous flow models to generate training samples. In [Mo et al., 2019b], a convolutional autoencoder was trained to learn the cross-domain mapping between permeability maps and reservoir states (pressure and saturation distributions) at different times. In [Zhong et al., 2019, 2020a], a U-Net based convolutional GAN was trained to solve a similar problem. Both studies also demonstrated the strong skill of ML-based models in uncertainty quantification. In Tang et al. [2020], a hybrid U-Net and ConvLSTM model was trained to learn the dynamic mappings in multiphase reservoir simulations. In [Mo et al., 2020], multi-level residual learning blocks were used to implement a GAN model for surrogate modeling. ML techniques have also been combined with model reduction techniques (e.g., proper orthogonal decomposition or POD) to first reduce the dimension of model states before applying ML [Jin et al., 2020]. The Darcy flow problem has also been used as a classic test case in many physics-informed studies, but generally under relatively simple settings [Zhu et al., 2019;Winovich et al., 2019;Li et al., 2020]. Model calibration and parameter estimation represent an integral component of the closed-loop geologic modeling workflow. A general strategy has been using autoencoders to parameterize the model parameters as random fields; the resulting latent variables are then "calibrated" using observation data in an outer-loop optimization, such as Markov chain Monte Carlo [Laloy et al., 2018] and ensemble smoothers [Canchumuni et al., 2019;Liu et al., 2019;Mo et al., 2020;Liu and Grana, 2020]. In Bayesian terms, this workflow yields the so-called conditional realizations of the uncertain parameters, which are simply samples of a posterior distribution informed by observations and priors (see Eqn. 1). Other studies approached the inversion problem directly using cross-domain mapping. For example, DiscoGAN and CycleGAN were used to learn bidirectional [Sun, 2018;Wu and Lin, 2019] and tri-directional mappings [Zhong et al., 2020b].
Many process-based models are high-dimensional and expensive to run, prohibiting the direct use of cross-domain surrogate modeling. ML-based multifidelity modeling offers an intermediate step. The general idea is to reduce the requirements on high-fidelity model runs by utilizing cheaper-to-run, lower-fidelity models. Toward this goal, in [Perdikaris et al., 2017], a recursive Gaussian process was trained sequentially using data from multiple model fidelity levels, reducing the number of high-fidelity model runs. In a related work, a multifidelity PINN was introduced to learn mappings (cross-correlations) between low- and high-fidelity models, by assuming the mapping function can be decomposed into a linear and a nonlinear part. Their method was expanded using Bayesian neural networks to not only learn the correlation between low- and high-fidelity data, but also provide uncertainty quantification. Fractures and faults are extensively studied in geosystem modeling for risk assessment and production planning. In [Schwarzer et al., 2019], a recurrent GNN was used to predict fracture propagation, using simulation samples from high-fidelity discrete fracture network models. In [Sidorov and Yngve Hardeberg, 2019], a GNN was used to extract crack patterns directly from high-resolution images, which may have significant implications for a wide range of geological applications. Ultimately, the goal of geosystem modeling is to train ML agents to quickly identify optimal solutions and/or policies, which is a challenging problem that requires integrating many pieces of the current ML ecosystems. The recently advanced deep reinforcement learning algorithms offer a new paradigm for exploiting past experiences while exploring new solutions [Mnih et al., 2013]. In general, model-based deep reinforcement learning frameworks solve a sequential decision making problem. At any time, the agent chooses a trajectory that maximizes the future rewards. Doing so would require traversing many states in the system space, which is challenging for high-dimensional systems. In [Sun, 2020], the deep Q-learning (DQL) algorithm was used to identify the optimal injection schedule in geologic carbon sequestration planning. A deep autoregressive surrogate model was trained using U-Net to facilitate state transition prediction. A discrete planning horizon was assumed to reduce the total computational cost. In [Ma et al., 2019], a set of deep reinforcement learning algorithms was applied to maximize the net present value of water flooding rates in oil reservoirs. It remains a challenge to generate surrogate models for arbitrary state transitions, such as those encountered in well placement problems where both the number and locations of new wells need to be optimized. Challenges and Future Directions Our survey shows that GeoML has opened a new window for tackling longstanding problems in geological modeling and geosystem management. Nevertheless, a number of challenges remain, as described below. Training data availability GeoML tasks require datasets for training, validation, and testing. In the subsurface domain, data acquisition can be costly. For example, to acquire 3D seismic data, an operator may spend at least $1M before seeing results. The costs of drilling new exploration wells are of the same order of magnitude. Thus, data augmentation using synthetic datasets will play an important role in improving the current generalization capability of ML models.
A main challenge is related to generating realistic datasets that also match unseen field conditions. In addition, generating simulation data for subsurface applications can be time consuming if the parameter search space is large, and it requires substantial computational resources. Efforts from the public and private sectors have started to make data available. Government agencies encourage or require oil and gas operators to regularly report well information (e.g., drilling, completion, plugging, production, etc.) and make the data available to the public. Building on this foundation, companies integrate the data and add proprietary assessments for commercial licenses. The U.S. Energy Information Administration (EIA) implements multiple approaches to facilitate data access (https://www.eia.gov/opendata). Over the last decade, the National Energy Technology Laboratory (NETL) has developed a data repository and laboratory, called the Energy Data eXchange (EDX), to curate and preserve data for reuse and collaboration that supports the entire life cycle of data (https://edx.netl.doe.gov/). Open Energy Information (https://openei.org) represents an example of a community-driven platform for sharing energy data. However, challenges related to data Findability, Accessibility, Interoperability, and Reuse (FAIR) remain to be solved. For example, government agencies or data publishers may have different definitions and data capturing processes. Comparing data on the same basis requires additional processing and deciphering [Lackey et al., 2021]. Because of the high cost of acquiring data, proprietary concerns, and other reasons, companies are often hesitant to share their data to form a unified or centralized dataset for ML training. A new approach, federated learning, is emerging to address the data privacy issue [Konečnỳ et al., 2016]. The approach trains an algorithm across decentralized models with their local datasets. Instead of sending the data to form a unified dataset, federated learning exchanges parameters among local models to generate a global model. This approach shows one way to solve the data issues in the subsurface fields and to promote collaboration. Model scalability Geosystems are a type of high-dimensional dynamic system. Image-based deep learning algorithms originating from computer vision were developed for fixed, small-sized training images. Large-scale models (e.g., hyperresolution groundwater models) are thus too big to use without resampling, a procedure that inevitably loses fine details. There is a strong need for developing multi-resolution, multi-fidelity ML models that are suitable for uncovering multiscale geological patterns. We are beginning to see new developments in this direction from applied mathematics [Park et al., 2020;Li et al., 2020;Fan et al., 2019]. However, the feasibility of these approaches on field-scale problems in geosciences needs to be tested. On the cyber-infrastructure side, next-generation AI/ML acceleration hardware continues to evolve to tackle the scalability issue. For example, a recent pilot study in computational fluid dynamics showed that such hardware could be more than 200 times faster than the same workload on an optimized number of cores on NETL's supercomputer JOULE 2.0 [Rocki et al., 2020]. Similar scaling performance has been reported on other exascale computing clusters involving hundreds of GPUs [Byna et al., 2020].
Domain transferability Even though geologic properties are largely static, the boundary and forcing conditions of geosystems are dynamic. A significant challenge is related to adapting ML models trained for one set of conditions or a single site (single domain) to other conditions (multiple domains), with potentially different geometries and boundary/forcing conditions. This problem has been tackled under lifelong learning (see Section 2). In recent years, few-shot meta-learning algorithms [Finn et al., 2017;Sun et al., 2019b] have been developed to enable domain transferability. The goal of meta-learning is to train a model on a variety of learning tasks such that the trained model can discover the common structure among tasks (i.e., learning to learn), which is then used to solve new learning tasks using only a small number of training samples [Finn et al., 2017]. Future GeoML research needs to adapt these new developments to enhance transfer learning across geoscience domains. Conclusions Geosystems play an important role in the current societal adaptation to climate change. Tremendous opportunities exist in applying AI/ML to manage geosystems in transparent, fair, and sustainable ways. This chapter provided a review of the current applications and practices of ML in geosystem management. Significant progress has been made in recent years to incorporate deep learning algorithms and physics-based learning. Nevertheless, many of the current approaches/models are limited in their generalization capability because of data limitations, domain specificity, and/or resolution limitations. In addition, many of the current models were demonstrated on simplistic toy problems. Future efforts should focus on mitigating these aspects to make GeoML models more generalizable and trustworthy.
Implementing SBIRT (Screening, Brief Intervention and Referral to Treatment) in primary care: lessons learned from a multi-practice evaluation portfolio Background Screening, Brief Intervention and Referral to Treatment (SBIRT) is a public health framework approach used to identify and deliver services to those at risk for substance-use disorders, depression, and other mental health conditions. Primary care is the first entry to the healthcare system for many patients, and SBIRT offers potential to identify these patients early and assist in their treatment. There is a need for pragmatic “best practices” for implementing SBIRT in primary care offices geared toward frontline providers and office staff. Methods Ten primary care practices were awarded small community grants to implement an SBIRT program in their location. Each practice chose the conditions for which they would screen, the screening tools, and how they would provide brief intervention and referral to treatment within their setting. An evaluation team communicated with each practice throughout the process, collecting quantitative and qualitative data regarding facilitators and barriers to SBIRT success. Using the editing method, the qualitative data were analyzed and key strategies for success are detailed for implementing SBIRT in primary care. Results The SBIRT program practices included primary care offices, federally qualified health centers, school-based health centers, and a safety-net emergency department. Conditions screened for included alcohol abuse, drug abuse, depression, anxiety, child safety, and tobacco use. Across practices, 49,964 patients were eligible for screening and 36,394 pre-screens and 21,635 full screens were completed. From the qualitative data, eight best practices for primary care SBIRT are described: Have a practice champion; Utilize an interprofessional team; Define and communicate the details of each SBIRT step; Develop relationships with referral partners; Institute ongoing SBIRT training; Align SBIRT with the primary care office flow; Consider using a pre-screening instrument, when available; and Integrate SBIRT into the electronic health record. Conclusions and implications SBIRT is an effective tool that can empower primary care providers to identify and treat patients with substance use and mental health problems before costly symptoms emerge. Using the pragmatic best practices we describe, primary care providers may improve their ability to successfully create, implement, and sustain SBIRT in their practices. Keywords: SBIRT (Screening, Brief Intervention, Referral to Treatment), Primary care, Substance abuse, Alcohol abuse Background Substance use and mental health disorders are major global health issues. Worldwide, an estimated 240 million adults suffer from alcohol use disorder. Almost a quarter of adults use tobacco, which is responsible for approximately 11% of deaths in men and 6% of deaths in women [1]. The USA is currently experiencing an opioid epidemic, with catastrophic public health consequences. In 2015, the number of US drug overdose deaths rose to over 52,000 with 63% of these involving an opioid [2]. Meanwhile, depression presents one of the highest disease burdens worldwide [3]. Altogether, the disability-adjusted life-years due to mental and substance use disorders have increased by 15% from 2005 to 2015 [3].
These emerging data stress the need for sustainable, evidence-based public health initiatives that can reduce the impact of these conditions. Screening, Brief Intervention and Referral to Treatment (SBIRT) is a public health framework approach initially used to identify and deliver services to those at risk for the adverse consequences of alcohol abuse [4,5], but which has been expanded to a number of substance-use disorders, depression, and other mental health conditions [4,6]. Primary healthcare is key to preventing and finding disease early. However, in the USA, it has long been documented that there is insufficient time for all the preventive care needed [7]. SBIRT began in the 1960s as a screening and brief intervention tool to quickly identify those with risky alcohol use, saving time for providers by focusing on the highest need patients [5,8]. In the last several decades, research and demonstration projects (funded largely by the US Substance Abuse and Mental Health Services Administration (SAMHSA)) have confirmed that implementing SBIRT can positively impact patients and their communities [4, 9–12]. While not all research has yielded positive effects [3], the US Preventive Services Task Force (USPSTF) felt the evidence was strong enough to begin recommending screening and brief behavioral interventions for alcohol in 2004, and reaffirmed the recommendation in 2013 [13]. These demonstration projects have also recently begun assessing barriers and facilitators to successful SBIRT implementation [14,15], the possibility of financial sustainability from clinical revenue [16,17], and the effectiveness of various team members delivering SBIRT services [18]. Despite all this research, there is limited evidence for transferring this success from funded demonstration projects to day-to-day primary care office practice, or for beginning SBIRT screening in practices without significant external funding. Bernstein et al. describe lessons following a well-funded emergency department (ED) program, including external funding for start-up, local ED staff champions, sustainability planning from the beginning, and creation and maintenance of a robust referral network [15]. Singh et al. interviewed administrators and evaluators from six SAMHSA SBIRT grantee programs and found sustainability after the grant funding ended was related to securing new funding, having champions, adapting and making system changes, and managing program staffing challenges [17]. Muench and Holland performed focus groups of team members and physicians in Oregon and Pennsylvania, respectively, during state-funded alcohol SBIRT projects [19,20]. Both sets of researchers noted similar barriers, including time constraints, limited access to treatment, ongoing funding and reimbursement concerns, and limited knowledge and self-efficacy. While these studies provide a framework for primary care practices, they all come from large, well-funded projects where previously developed SBIRT was implemented in practices. While Dwinnels describes successful outcomes of a small SBIRT program in a regional community health center, he does not describe its sustainability, nor the factors associated with success [6]. Too many people today are not receiving the treatment they need for substance use and other mental health problems [21], and the growing opioid epidemic is a public health emergency [22]. Primary care is the entry to the healthcare system for the majority of patients across the globe.
SBIRT offers great potential for primary care physicians and their staff to identify patients with risky substance use and early symptoms of mental illness and assist in their treatment. However, there is a need for pragmatic "best practices" for implementing SBIRT in primary care offices geared toward frontline providers and office staff. In 2014, the University of Cincinnati Department of Family and Community Medicine partnered with Interact for Health, a greater Cincinnati-based independent foundation, in evaluating SBIRT programs in 10 primary healthcare locations. From this work, we developed practical guidance for primary care practices to assist with developing and implementing SBIRT programs to help them address important public health issues in their communities. Basics of SBIRT In the last 30 years, the SBIRT model has grown in function and utility. SAMHSA describes the three components of SBIRT as follows: Screening quickly assesses the severity of substance use and identifies the appropriate level of treatment. Brief intervention focuses on increasing insight and awareness regarding substance use and motivation toward behavioral change. Referral to treatment provides those identified as needing more extensive treatment with access to specialty care [23]. The SBIRT model has continued to grow due to its ability to be built on any of a number of validated screening instruments for substance and mental health problems, be implemented in a variety of healthcare settings, be performed by a myriad of care team members, and be adapted for a number of culturally diverse populations [18,24,25]. For several conditions, "pre-screens" have been validated that allow for rapid, universal screening, followed by more focused full screens [26,27]. This has decreased the amount of time needed for screenings in primary care and other general populations. Because of the variety of conditions screened for, and the many settings where SBIRT can occur, there are no good population rates for its actual use, although a 2011 SAMHSA white paper did review the growing evidence for SBIRT's effectiveness [25]. Screening in primary care project Between 2014 and 2016, Interact for Health awarded small grants (all US$60,000 or less) for the implementation of 10 SBIRT programs throughout the greater Cincinnati and Northern Kentucky region in an effort to reduce the number of people with risky substance use, anxiety, and depression. Unlike many previous SBIRT studies [19,28,29], each practice chose the condition or conditions for which they would screen, the screening tools, and how they would provide brief intervention and referral to treatment within their setting. An evaluation team from the University of Cincinnati's Department of Family and Community Medicine (UC DFCM) communicated with each practice in an iterative process throughout the grant period and collected quantitative and qualitative data regarding facilitators and barriers to the SBIRT process. SBIRT practice descriptions The SBIRT practices included primary care practices (family medicine and general internal medicine), federally qualified health centers (FQHCs), school-based health centers (SBHCs), and a safety-net emergency department (Table 1). Six of the practices screened for a single condition, while four practices screened for two to four conditions. Program evaluation methods Individual SBIRT programs varied in length from 9 to 18 months.
The UC DFCM evaluation team met with each practice prior to the start of their program to help them develop process flowcharts that captured the corresponding action and personnel for each stage of SBIRT (Fig. 1). They then collected quarterly data via an online reporting system. Data collected Quantitative data were collated and summarized. Qualitative data, including open-ended question responses, practice visit notes, and interview notes, were collated and coded using the editing method [30,31]. In this method, while acknowledging the existing literature about SBIRT in primary care [6, 12-14, 18, 19, 28, 32, 33], we sorted the interview data into coding categories derived from the data themselves, explicitly checking them against other categories and the original data, and then searched for patterns and themes. We then returned to the existing literature, and framed our findings as pragmatic best practices for successful implementation of SBIRT in primary care offices. Quantitative results Across all ten program practices, an estimated 49,964 patients were eligible for screening. For all conditions, 36,394 pre-screens and 21,635 full screens were completed (19,687 adults and 1984 youth); 6203 scored positive on the full screens with 3108 brief interventions performed. Practices reported that 1302 referrals to treatment were made, but all practices reported an inability to confidently track confirmation of patients receiving treatment. Alcohol (7361) and substance use (7303) together comprised more than two-thirds of the total full screens completed. Details of the SBIRT rates by conditions screened are found in Table 2. Best practices for SBIRT implementation in primary care Have a practice champion This role is responsible for logistical coordination and problem-solving as well as provider accountability. The practice champion does not necessarily need to be the medical director of the practice, but should be someone who is respected by their coworkers. Several studies have cited the need for a champion to encourage staff buy-in and engagement and to identify and manage ongoing barriers to program success [20,34]. This was consistent with our findings, where a program leader who could act as cheerleader, door opener, and bridge between all team members was key to a successfully integrated program. Practice champions who are not in leadership positions need the support and backing of leadership. When the program leader was not a clinical leader, practices that included the medical or nursing director in planning meetings and decision making were more likely to have earlier success. With increasing and competing demands in healthcare settings nationwide, it is necessary to have a point person capable of securing buy-in from the necessary care team members, obtaining initial resources, and ensuring judicious use of resources as the program continues. Utilize an interprofessional team Incorporating physicians, medical assistants, information technology staff, front desk staff, and other essential staff can aid in identifying challenges and optimizing the process for maximum patient impact. Physicians often mention their lack of time as a major barrier to SBIRT [14,19]. Involving an interprofessional team can shift the physician's role toward shared responsibility among all participants in the SBIRT continuum of care [18,20,28]. These interprofessional team members need to be involved from the planning stage.
Several of our practices failed to include all team members in the planning process. This resulted in disjointed program rollouts at these practices, with wasted resources and the need for additional time and energy to make major midcourse corrections. Coordination and communication across disciplines and between diverse skill sets are necessary for seamless and complete delivery of all SBIRT stages. Define and communicate within the team details of each SBIRT step Each SBIRT component should be determined based on the needs and availability at the primary care practice as well as provider interest and experience [14,18,20,35]. Early identification of the conditions to be screened and selection of appropriate and validated tools is the first, essential step, as it will focus and guide the rest of the process. However, our participants found that creating, implementing, and documenting the brief intervention was actually one of the most difficult parts of implementing SBIRT. Practices that created detailed brief intervention expectations (who, when, where, how long, and how often) had more successful outcomes. We also found that practices screening for multiple conditions were unable to offer brief interventions or referrals for multiple positive screens due to time and staff availability. These practices created algorithms that prioritized one positive screen (e.g., drug abuse) over another (e.g., depression) for brief intervention. A limitation of these algorithms is that they were operationalized by screening staff who had limited clinical training and thus were not always patient-centered. Primary care practices should consider patient survey fatigue, as well as their own capacity to offer interventions and referrals in a timely manner, should they decide to screen for multiple conditions. Develop relationships with referral partners All practices failed to implement a referral to treatment that included a communication feedback loop to primary care. Adequate referral partner relationships are necessary for high-risk patients. To better link patients with treatment options after a positive screen and brief intervention, referral partners should be brought to the table during the planning phase of SBIRT. Additionally, other options such as telephonic or telehealth treatment should be explored to increase access to treatment as part of SBIRT [34]. In our region, the lack of referral resources, especially those that can accept a variety of healthcare insurance, was noted as a significant weakness of the implemented SBIRT programs. Additionally, lack of feedback from referral centers made tracking difficult. The confidentiality that is afforded to mental health and substance abuse records further complicated this process. An open line of communication between referring and referral partners and inclusion during SBIRT planning can help mitigate follow-up barriers, thereby ensuring timely and accurate feedback on treatment linkages. Integrated practices incorporating mental health and/or substance abuse care with primary care also show promise as a method for improving both care and communication [36]. Institute ongoing SBIRT training Because primary care SBIRT relies on an interprofessional team, training of all involved parties is integral to program success.
Staff turnover and insufficient training have been cited as barriers to SBIRT success [18,20,34], and full program implementation may require up to 12 months [18] with continued training and education. As with many primary care offices, our practices were vulnerable to staff turnover. Keeping this in mind, training protocols should be a part of the original planning and program design. SBIRT training should also be incorporated into the onboarding process to maximize success through any staff transition by building broad institutional memory. Align SBIRT within the primary care office flow As part of the planning phase, a graphical flow alignment diagram that follows the patient through the SBIRT process from beginning to end is useful in assuring that SBIRT fits within existing office flow, as outlined in Fig. 1. Specifically, flow diagrams that clearly define the pre-screen and screening instruments to be used and the scores that lead to brief intervention or directly to treatment, and that identify the staff responsible for each step, help create an SBIRT program that can more seamlessly integrate into the practice. A graphical flow diagram allows for process refinement prior to implementation. Data collection processes should be included in the operational plan, as feedback is necessary to assure that SBIRT outcomes are being met. Universally, our practices that created, communicated, adapted, and revised the flow diagrams during the planning phase had fewer problems as they rolled out their SBIRT programs. These formal visual maps minimized potential problems before they arose by defining the team and assigning ownership of various SBIRT components. Consider using a pre-screening instrument, when available A major concern of primary care staff who perform SBIRT screening is time [20]. Using brief, validated pre-screens can decrease the amount of time spent administering longer instruments, and increase the yield from the full screens. For example, two FQHC practices screened for alcohol abuse, one using the full Alcohol Use Disorder Identification Test (AUDIT) for everyone and one using the AUDIT-C pre-screen, followed by the full AUDIT for those with positive pre-screens. The center using only the full AUDIT had a 5% rate of positive screens, but everyone had to complete the full AUDIT. The center using the pre-screen had 30% of their patients pre-screen positive, so only this smaller number completed the full AUDIT and 74% of them had positive full screens. Incorporation of pre-screening into mature SBIRT programs has been utilized to address concerns regarding sustainability and ensure judicious use of staff time while increasing the number of patients served [34]. Whenever possible, validated pre-screening instruments should be utilized. Integrate SBIRT into the electronic health record (EHR) The ability to track patients through the SBIRT process via the EHR is necessary for documenting patient care, analysis of program impact, and assisting practices with population health by better defining and managing the patient population identified by SBIRT. Applicable coding ensures more accurate billing and allows for potential reimbursement for the screening and brief intervention, which has been noted as a necessity for program sustainability [15,16]. Additionally, other EHR tools such as automated reminders increased the number of patients screened at our program practices. The EHR needs to clearly flag or highlight positive screens to ensure that brief interventions are delivered [37].
Attention must be paid to the EHR integration during the planning phase, however, or lost revenue and poor outcome documentation can sink a program before it becomes established. Discussion As initial programmatic and research funding for SBIRT ended, significant questions still remained about how to create and maintain sustainable SBIRT programs in primary care settings. With the USPSTF supporting the regular use of SBIRT for alcohol abuse [13], and strong evidence for SBIRT growing for other conditions [6, 9–12], primary care offices need practical guidance for how best to create and implement SBIRT programs. Since the literature has done a better job of describing barriers to SBIRT than facilitators [5,10,14,16,17,19,20,24,34,38], we took the lessons learned from our qualitative evaluation of 10 diverse practices and created eight pragmatic best practices. Many of these are further evidence supporting existing recommendations. For example, the need for practice champions, creating a robust referral network, planning for sustainability, and using an interprofessional team have been described in the SBIRT literature [15,17,18,28]. We have added, however, specific details gleaned from working with practices that created SBIRT programs internally, with minimal external funding, in order to provide guidance for primary care physicians, staff, and administrators interested in implementing their own SBIRT program. There are limitations to our study. The 10 practices were selected through a competitive grant process by the community agency, Interact for Health, and therefore might differ from other practices in the community. The greater Cincinnati-Northern Kentucky region is a mid-sized metropolitan region in the midwestern USA, and its primary care and clinical practice likely differ from those in other locations in the country. And while the program was created for screening in primary care, the funder included practices such as a safety-net emergency department that many would not consider a primary care location. However, most of the practices were family medicine or general internal medicine offices, school-based clinics, or community health centers. The qualitative findings were consistent with findings from the medical literature [15,17,18,20], making it likely that the practical best practices from these practices will be of value to primary care practices seeking to implement SBIRT. Conclusion The sustainability of an SBIRT program in a primary care setting relies heavily on a well-defined and operationalized plan that fits within office flow. Having a practice champion as well as bringing key members of the team on board in the planning stages improves the chances of successful implementation and continued SBIRT delivery. With our current opioid epidemic, perhaps more than any other time in recent history, primary care must take action and fully participate in identifying patients at risk of substance use and mental health problems. In addition to current community-based prevention programs, public health models like SBIRT in primary care are needed to make a concerted effort against the downstream effects of substance use and mental illness. SBIRT has been shown to be an effective tool that can empower primary care providers to identify and treat this population before costly symptoms emerge. Using the pragmatic best practices we describe, primary care practices may improve their ability to successfully create, implement, and sustain SBIRT programs.
Funding Funding for this evaluation was provided by a grant from Interact for Health and the United Way of Cincinnati (through Interact for Health) to the University of Cincinnati Department of Family and Community Medicine (Nancy C. Elder, MD, MSPH). Staff from the funding agency participated by assisting the evaluation team in accessing data from the SBIRT practices, reviewing and commenting on the ongoing evaluation data and reports, and reviewing and commenting on the final manuscript. The data have been fully shared between the evaluation team and the funder, but the evaluation team (the grantee) created and controlled the evaluation, including the data collection forms, analysis, and writing of the manuscript. Availability of data and materials The datasets generated and analyzed during our current study are available from the corresponding author on reasonable request. Authors' contributions DH is the project manager who collected qualitative and quantitative data quarterly; assisted in preparing data reports, tables, and figures; assisted with data analysis; and reviewed and edited reports and manuscripts. CH is a co-investigator who managed the data collection and assisted with team management; assisted in preparing data reports, tables, and figures; assisted with data analysis; and reviewed and edited reports and manuscripts. MC assisted in preparing data reports, tables, and figures, assisted with data analysis, and assisted with writing, reviewing, and editing reports and manuscripts. RF assisted in preparing data reports, tables, and figures, assisted with data analysis, and assisted with writing, reviewing, and editing reports and manuscripts. MP assisted DH, NE, and CW with access to practice personnel for data collection, met regularly with the evaluation team, providing input into the evaluation process, reviewed ongoing reports, and reviewed and edited the final manuscript. AY assisted DH, NE, and CW with access to practice personnel for data collection, met regularly with the evaluation team, providing input into the evaluation process, reviewed ongoing reports, and reviewed and edited the final manuscript. NE is the principal investigator for the project, arranging funding and overseeing financial and data reports, and met regularly with the evaluation team and with the funding team. NE supervised all team members; assisted in preparing data reports, tables, and figures; assisted with data analysis; reviewed and edited reports, and was the lead author on the final manuscript. All authors read and approved the final manuscript. Ethics approval and consent to participate This evaluation of a community SBIRT program was considered to not be human subject research and was waived from review by the University of Cincinnati Institutional Review Board. Consent for publication Not applicable.
Clinician barriers, perceptions, and practices in treating patients with hepatitis C virus and substance use disorder in the United States The likelihood of clinicians prescribing direct-acting antiviral (DAA) therapy for patients with chronic hepatitis C virus (HCV) and substance use disorder (SUD) was assessed via a survey emailed throughout the United States to clinicians (physicians and advanced practice providers) in gastroenterology, hepatology, and infectious disease specialties. Clinicians’ perceived barriers and preparedness and actions associated with current and future DAA prescribing practices of HCV-infected patients with SUD were assessed. Of 846 clinicians presumably receiving the survey, 96 completed and returned it. Exploratory factor analyses of perceived barriers indicated a highly reliable (Cronbach alpha = 0.89) model with five factors: HCV stigma and knowledge, prior authorization requirements, and patient-, clinician-, and system-related barriers. In multivariable analyses, after controlling for covariates, patient-related barriers (P < 0.01) and prior authorization requirements (P < 0.01) were negatively associated with the likelihood of prescribing DAAs. Exploratory factor analyses of clinician preparedness and actions indicated a highly reliable (Cronbach alpha = 0.75) model with three factors: beliefs and comfort level; action; and perceived limitations. Clinician beliefs and comfort levels were negatively associated with the likelihood of prescribing DAAs (P = 0.01). Composite scores of barriers (P < 0.01) and clinician preparedness and actions (P < 0.05) were also negatively associated with the intent to prescribe DAAs. Conclusion These findings underscore the importance of addressing patient-related barriers and prior authorization requirements (significant problematic barriers) and improving clinicians' beliefs (e.g., medication-assisted therapy should be prescribed before DAAs) and comfort levels for treating patients with HCV and SUD to enhance treatment access for patients with both HCV and SUD. Introduction Hepatitis C virus (HCV) is the most frequently reported blood-borne infection in the United States (US) and a leading cause of liver-related morbidity, liver cancer, liver transplantation, and mortality (Ditah et al., 2014). Treatment of HCV has greatly improved since the introduction of direct-acting antivirals (DAAs), which show therapeutic efficacy in >95 % of patients across the four major HCV genotypes and have limited adverse effects (Falade-Nwulia et al., 2017;Webster et al., 2015). In 2013, when DAAs were introduced to treat and cure HCV, the World Health Organization responded by calling for the elimination of viral hepatitis by 2030 (World Health Organization WHO, 2017). However, in the US, HCV treatment rates have been declining since their peak in 2015 and we are only about halfway to the expected rate needed in 2020 to enable reaching the goal of the World Health Organization, leaving many patients still in need of treatment (Centers for Disease Control and Prevention, 2021). With less than a decade remaining to reach the goal, HCV remains one of the top causes of chronic liver disease worldwide (Paik et al., 2020).
The reasons for suboptimal treatment uptake are multifactorial and include an increase in HCV infections resulting from the opioid crisis, poor linkage to care for individuals diagnosed with HCV, insurance barriers (e.g., Medicaid prior authorization requirement), sobriety requirements, and the COVID-19 pandemic (Ko et al., 2019;Liang and Ward, 2018;Centers for Disease Control and Prevention, 2017). These impediments are especially prevalent among poor and underserved populations (Jain et al., 2019;Park et al., 2021). There are other multilevel issues related to clinicians, patients, and the health system structure that have led to low HCV treatment rates (Malespin et al., 2019;Grebely et al., 2017). In a previous study by our group, individuals with HCV and substance use disorder (SUD) were 47 % less likely to receive DAAs compared with individuals with HCV but without SUD among Florida Medicaid beneficiaries (Park et al., 2021), despite evidence from several clinical trials supporting treatment with DAAs for persons who inject drugs (PWID) among those receiving current opioid agonist therapy (Simoncini et al., 2021;Trooskin et al., 2020). Despite the importance of advancing understanding regarding the barriers to DAA treatment and the urgent need to implement innovative interventions to reach the global health goal of eliminating hepatitis by 2030, little is known about clinician experiences in treating HCV-infected patients with SUD in the DAA era. Thus, we developed a survey to assess clinician self-reported barriers, perceptions, and practices for treating HCV-infected patients with SUD and the associations of these barriers, perceptions, and practices with the willingness of clinicians to prescribe DAA treatment for patients with HCV and SUD. Sampling and recruitment strategies Using a modified Dillman approach (Dillman et al., 2014), we sampled US clinicians in gastroenterology, hepatology, and infectious disease specialties using two sampling strategies. First, we recruited a group composed primarily of physicians but that also included advanced practice providers (i.e., nurse practitioners and physician assistants), using a list created with a search of the publicly available websites of study sites participating in the HCV-TARGET (Hepatitis C Therapeutic Registry and Research Network) and PRIORITIZE (a pragmatic, randomized controlled trial of oral antivirals for the treatment of chronic hepatitis C) studies (National Library of Medicine U.S., 2020; National Library of Medicine U.S., 2021). Second, to recruit a nationally representative group of advanced practice providers, we used three nonoverlapping, proprietary lists owned by the Gastroenterology & Hepatology Advanced Practice Providers (GHAPP) organization drawn from their membership, subscribers, and hepatology specialists. GHAPP is a not-for-profit organization dedicated to educating and providing support resources for the professional advancement of advanced practice providers who treat patients with gastrointestinal disorders and chronic liver disease (Gastroenterology and Hepatology Advanced Practice Providers GHAPP, 2020). We sent recruitment emails on March 25, 2021, to persons participating in the HCV-TARGET and PRIORITIZE studies, informing them of the present study and providing them with a link to the online questionnaire using REDCap. The listserv manager of GHAPP sent recruitment emails via Mailchimp to the GHAPP sample on June 9, 2021.
For both recruitment samples, we contacted nonrespondents up to two additional times between April 13 and July 7, 2021, using REDCap and Mailchimp. HCV-TARGET and PRIORITIZE participants were offered a $25 electronic Amazon gift card on completion of the survey; GHAPP participants, a $10 electronic Amazon gift card. To ensure we had a sufficient sample size to conduct exploratory factor analysis, we set an a priori minimum of 95 respondents (5 respondents per item) for our model with the largest number of items. The University of Florida, Gainesville, Institutional Review Board approved this study (IRB201702880). After being informed of the purpose of the study, clinicians provided consent by completing and submitting the questionnaire. Questionnaire instrument Our team members developed the questionnaire instrument by identifying relevant items from the peer-reviewed literature. Our team reviewed and revised the original instrument containing 63 questions. We pilot tested the revised version containing 56 questions in a convenience sample of 20 hepatology clinicians. The final instrument, which included 2 adaptive response questions, 14 demographic characteristics questions, 43 assessment questions, and an option to leave an open-ended comment, is provided in the Appendix. Responses to the assessment questions were collected using three Likert-type scales with five anchors each: (1) extremely unlikely to extremely likely, (2) never to always, and (3) strongly disagree to strongly agree. The assessment questions included three concepts: (1) current (n = 5) and anticipated future (n = 5) experiences with prescribing DAAs for HCV-infected patients with SUD, (2) perceived barriers to providing HCV treatment (n = 21), and (3) prescribers' preparedness and actions in treating HCV-infected patients with SUD (n = 12). The questionnaire used an adaptive strategy to guide participants. The first question asked if the participant treated patients with HCV and SUD. A response of no resulted in the participant progressing to the demographic characteristics questions without answering any assessment questions. A response of yes resulted in a follow-up question asking if the participant prescribed DAAs for patients with HCV and SUD. If the participants responded yes to the follow-up question, they were given all assessment and demographic characteristics questions, whereas if the participants responded no, they were given all questions except those five questions regarding current experiences. Statistical analysis Statistical analyses were conducted using R, version 3.6.3, in RStudio, version 1.2.5033. We conducted two exploratory factor analyses using varimax rotation of the correlation matrices. The first factor analysis included 19 items assessing barriers to prescribing DAAs. Barrier questions (n = 2) about non-Medicaid insurance requirements and open-ended "other" barriers were not included in this model. The second factor analysis included 12 items assessing prescriber preparedness and actions. The data obtained from both the HCV-TARGET and PRIORITIZE sample and the GHAPP sample were analyzed together. To determine the number of eigenvalues >1 and to generate scree plots, we used analyses of the correlation matrices. We then forced the number of factors using the principal axis method. We determined which items loaded on which factors by identifying high loadings (>|0.3|) and logical groupings among the factor patterns (Pett et al., 2003).
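The eigenvalue screen described above can be illustrated with a brief sketch. The authors conducted their analyses in R; purely as a language-neutral illustration, the NumPy fragment below builds an item correlation matrix from simulated Likert-type responses and counts the eigenvalues exceeding 1 (the Kaiser criterion that, together with the scree plot, guided factor retention here). The response matrix and all names are hypothetical.

```python
# Illustrative eigenvalue screen for exploratory factor analysis (NumPy only).
import numpy as np

rng = np.random.default_rng(1)

# Simulated Likert-type responses: 96 respondents x 19 items, two latent factors
latent = rng.normal(size=(96, 2))
loadings = rng.uniform(0.4, 0.9, size=(2, 19))
items = latent @ loadings + 0.8 * rng.normal(size=(96, 19))

R = np.corrcoef(items, rowvar=False)         # 19 x 19 item correlation matrix
eigenvalues = np.linalg.eigvalsh(R)[::-1]    # sorted largest first

print("Leading eigenvalues:", np.round(eigenvalues[:6], 2))
print("Factors retained (eigenvalue > 1):", int((eigenvalues > 1).sum()))
# A scree plot of `eigenvalues` would be inspected alongside this count,
# before extracting and rotating (e.g., varimax) the chosen number of factors.
```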
Composite scores for each model and for each model's factors were created separately by averaging the relevant item scores. The internal consistencies of the composite and factor scores were measured using Cronbach α, with an acceptable internal consistency set a priori as α ≥ 0.6. We then used multivariable linear regression to examine the association of prescriber-reported likelihood of prescribing DAAs, measured on a 5-point Likert-type scale, with (1) the composite and factor scores of each scale (i.e., the barriers scale and the preparedness and actions scale) separately, and (2) the composite scores of the two scales (i.e., barriers together with preparedness and actions), controlling for statistically significant clinician characteristics in univariate analyses. Significance was set a priori as α ≤ 0.05. Participant characteristics Of 846 clinicians sent recruitment emails and presumably having received the survey invitation, 96 completed and submitted the survey (response rate of 11.3 %). The sample of clinicians comprised physicians (34.4 %), nurse practitioners (36.5 %), and physician assistants (28.1 %) (Table 1). Most clinicians self-reported being white (71.9 %) and female (70.8 %) and indicated that their specialty was hepatology (55.2 %). The clinicians were evenly distributed among the four regions of the country. The percentage of clinicians who treated 10-50 unique patients with HCV in the previous year was 46.9 %, and 40.7 % treated > 50 unique patients. The highest proportion of clinicians (55.2 %) worked in an academic setting, followed by clinicians who worked in a group setting (40.6 %). Likelihood level for prescribing DAAs to patients with HCV and SUD Overall, current DAA prescribing likelihood was high (mean likelihood score = 4.45, SD = 0.78; 1 = extremely unlikely to 5 = extremely likely) (Table 2). In their current practice, 56 % of clinicians were extremely likely to prescribe DAAs to patients with HCV and SUD, and 38 % were likely to prescribe DAAs to this population. In total, 64 % of clinicians reported that they were likely or extremely likely to prescribe DAAs for PWID with HCV and SUD, and 97 % were likely or extremely likely to prescribe DAAs to treat former PWID who had initiated medication-assisted treatment. In addition, 83 % of clinicians reported that they were likely or extremely likely to prescribe DAAs to treat patients with alcohol use disorder, and 96 % were likely or extremely likely to prescribe DAAs to treat patients with a history of alcohol use disorder who had been alcohol-free for 1 month. The overall mean prescribing likelihoods as well as the proportions for likelihoods of prescribing across the varied conditions in the ensuing 6 months were similar to those of current practice. Most clinicians reported that with regard to treating patients with HCV, COVID-19 had affected their practice from not at all to moderately (Table 1). Exploratory factor and regression analyses of barriers Factor analysis of the 19 barrier items resulted in six eigenvalues >1. The scree plot of the eigenvalues suggested one to five factors. We generated models with two to five factors based on the scree plot and eigenvalues. The five-factor model was deemed the most logical (Table 3): patient-related barriers (seven items), HCV stigma and knowledge (two items), prior authorization requirements (four items), clinician-related barriers (two items), and system-related barriers (four items). Item scores and factor loadings are presented in Table 3.
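The reliability statistic used throughout the following paragraphs, Cronbach's α, is computed from the per-item variances and the variance of the summed scale: α = k/(k - 1) × (1 - Σ item variances / total variance). A minimal sketch follows (again in Python rather than the R used for the actual analysis; the simulated items are hypothetical):

```python
# Cronbach's alpha: internal consistency of a set of Likert-type items.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(96, 1))              # one latent trait
items = latent @ rng.uniform(0.5, 0.9, size=(1, 7)) + 0.6 * rng.normal(size=(96, 7))
print(round(cronbach_alpha(items), 2))         # values >= 0.6 were deemed acceptable here
```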
Reliability estimates for the barriers to prescribing DAAs were high (composite Cronbach α = 0.89; factor Cronbach α range = 0.77-0.83), and the composite mean score was moderately low (mean = 2.40; 1 = never to 5 = always), indicating an overall moderately low perceived frequency of the barriers. Among the factors, patient-related barriers (mean = 2.62) were the most highly endorsed barriers, followed by HCV stigma and knowledge (mean = 2.56) and prior authorization requirements (mean = 2.32). Using multivariable regression, we examined the ability of the five factors from the barriers model to explain clinician likelihood for prescribing DAAs, after controlling for significant covariates (medical specialty, number of HCV patients in the previous 12 months). The regression model containing the barrier model factor scores explained 41.3 % of the variation in clinician-reported DAA prescribing likelihoods (Table 4). Patient-related barriers (P < 0.001) and prior authorization requirements (P = 0.001) were significant negative predictors of clinician likelihood to prescribe DAAs for patients with HCV and SUD, whereas HCV stigma and knowledge (P = 0.028) and system-related barriers (P = 0.011) were significant positive predictors. The model containing the composite barrier score explained 22.6 % of the variation in clinician-reported willingness to prescribe DAAs, with the composite score (P < 0.001) being a significant negative predictor of clinician likelihood to prescribe DAAs for patients with HCV and SUD. Exploratory factor and regression analyses of prescriber preparedness and actions Factor analysis using the varimax rotation of 12 prescriber preparedness and actions items resulted in four eigenvalues > 1. The scree plot of the eigenvalues suggested two to five factors. Based on the scree plot and eigenvalues, we generated models with two to four factors. The three-factor model was deemed the most logical (Table 5): clinician beliefs and comfort level (6 items), clinician referral action (2 items), and clinician perceived training limitations (4 items). Item scores and factor loadings are presented in Table 5.
The model containing the composite preparedness and actions score explained 22.9 % of the variation in clinician-reported DAA prescribing likelihood, with the composite score (P < 0.001) being a significant negative predictor of clinician willingness to prescribe DAAs for patients with HCV and SUD. Reliability estimates for the prescriber preparedness and actions items for prescribing DAAs were good (composite Cronbach α = 0.75; factor Cronbach α range = 0.61-0.81), and the composite mean score was 2.55 (1 = strongly disagree to 5 = strongly agree), indicating an overall moderately low level of perceived preparedness and actions. Among the factors, clinician referral action (mean = 3.33) was the most highly endorsed as affecting clinician treatment of HCV-infected patients with SUD, followed by clinician perceived training limitations (mean = 2.92) and beliefs and comfort level (mean = 2.05). Using multivariable regression, we examined the ability of the three factor scores from the model of the prescriber preparedness and actions items to explain clinician likelihood of DAA prescribing, after controlling for significant covariates. The regression model containing factor scores explained 23.3 % of the variation in clinician-reported likelihoods of prescribing DAAs (Table 6). Among the factors, the clinician beliefs and comfort level factor (P = 0.010) was a significant negative predictor of clinician likelihood to prescribe DAAs for patients with HCV and SUD. Multivariable regression analysis of barriers and prescriber preparedness and actions combined Using multivariable regression, we examined the ability of the composite barriers and composite preparedness and actions scores to predict clinician-reported likelihood of prescribing DAAs for patients with HCV and SUD, after controlling for significant covariates. The regression model containing the composite scores from the two models explained 29.1 % of the variation in clinician-reported likelihood of prescribing DAAs (Table 7). Both composite scores (P = 0.008 and P = 0.012) were significant negative predictors of prescribing DAAs for patients with HCV and SUD. Discussion Our findings suggested that the barriers that prevented, delayed, or interfered with DAA treatment for patients with HCV and SUD were multidimensional, with patient-related barriers (e.g., failure to keep appointments, patient continued to use substance) perceived by treating clinicians as the most problematic, followed by prior authorization requirements (e.g., laboratory results, requirement for patient to be drug- or alcohol-free), and system-related barriers (e.g., lack of insurance, prior authorization refusals). However, clinician beliefs and comfort levels (e.g., believing a patient with SUD should be receiving medication-assisted therapy before initiating HCV treatment; difficulty engaging with patients who continually fail drug tests) were associated with lower likelihoods of prescribing DAAs for patients with HCV and SUD. The findings in the present study regarding barriers are consistent with what has been described in the literature in recent years. Failure of patients with HCV infection to attend multiple appointments is a barrier to HCV care that has been previously reported from both patient and clinician perspectives (Heard et al., 2021;Paisi et al., 2022;Litwin et al., 2019;von Aesch et al., 2021). Social and health circumstances (e.g., need to secure food, lack of transportation, concomitant mental or physical condition, and SUD) are competing priorities that may prevent patients from complying with HCV care (Paisi et al., 2022; Litwin et al., 2019; Zhang et al., 2020; Amoako et al., 2021). Clinician concerns about treatment adherence and reinfection have been found in previous studies to be persistent barriers to treating patients with HCV and SUD (von Aesch et al., 2021;Asher et al., 2016;Marshall et al., 2020;Winnock et al., 2013), but the clinicians surveyed in the present study identified patient risk of reinfection as being a less problematic barrier than patient lack of motivation and adherence. Prior authorization required by payers to approve prescription of DAAs is a time-consuming process that continues to hamper access to medications by the marginalized subpopulation of patients infected with HCV (Duryea et al., 2020). Not surprisingly, the clinicians surveyed in this study deemed it the second most problematic barrier for prescribing DAAs to patients with SUD. Evidence suggests that US
physicians spend an average of 45-120 min per patient and an average of 14.9 h per week to complete the prior authorization process, which entails the submission of information that is not supported by scientific evidence (Duryea et al., 2020). Most of our surveyed clinicians were experienced HCV providers who had treated a fair number of patients with HCV in the past year. It is important to note that these experienced providers were presumably comfortable with HCV treatment and patients with HCV and SUD, but their practices were still limited by barriers such as prior authorization policy. Other countries, such as Australia and Canada, have experienced increasing trends in DAA initiation after the removal of restrictions based on the specialty of the clinician, fibrosis stage, and ongoing SUD (Simoncini et al., 2021;Marshall et al., 2020). In the US, as of June 2022, many states had removed fibrosis restrictions and specialist requirements, while some states continue to require that prescriptions be written in consultation with a specialist or allow providers to prescribe after completing training courses (Harvard Law School Center for Health Law and Policy Intervention, 2022). These changes in the requirements for HCV specialist care will accelerate expansion of HCV treatment access as primary care providers at the forefront of patient contact can become providers of HCV care (Wang et al., 2022). Similarly, many states dropped restrictions on sobriety after this study was conducted, while some states still require screening and counseling regarding substance use concurrent with HCV treatment (Harvard Law School Center for Health Law and Policy Intervention, 2022). However, there is also evidence suggesting that, in general, clinicians involved in HCV care have little experience working with people with SUD, which often leads to distorted beliefs about the adherence capacity of these patients (Trooskin et al., 2020;Jatt et al., 2021) and unfounded concerns of reinfection (Trooskin et al., 2020). These beliefs may be derived from stigmatization and discrimination (Simoncini et al., 2021;Trooskin et al., 2020;Jatt et al., 2021;Higashi et al., 2020). The lack of experience and the discomfort among clinicians were reflected in our findings. The clinician belief that a patient with SUD should be receiving medication-assisted therapy before initiating HCV treatment and the clinician concern that it is difficult to engage with patients who continually fail drug tests were associated with clinicians being less likely to prescribe DAAs for patients with HCV and SUD. Continuing education and training in SUDs are needed to improve clinician understanding, knowledge, and comfort level for treating patients with HCV and SUD. Although clinician perceived limitations were not associated with the reported likelihood of prescribing DAAs to patients with HCV and SUD in the present study, it is important to underscore that clinicians reported a lack of knowledge on the availability of drug treatment facilities or support services for people with SUD. This "silo effect" has been previously reported and refers to the lack of integration of services and interdisciplinary collaboration when caring for patients with HCV and SUD (Trooskin et al., 2020).
Considering the persistence of these barriers among clinicians, it may be time to consider moving our complex health care system toward a more holistic, community-based approach for treating patients with HCV infection and SUD (Trooskin et al., 2020;Paisi et al., 2022;Litwin et al., 2019;von Aesch et al., 2021;Marshall et al., 2020). Indeed, some countries have implemented this approach and show improved access to HCV treatment and improved overall outcomes (Trooskin et al., 2020;von Aesch et al., 2021;Marshall et al., 2020). To overcome these barriers, we suggest several strategies, including allowing non-specialist care providers the ability to prescribe treatment, co-locating screening, diagnosis, and treatment services for HCV at SUD treatment centers, and integrating HCV/SUD care delivered by a multidisciplinary team with case management services (Ho et al., 2015;Moussalli et al., 2010). Clinicians reported that when a patient felt stigmatized or showed a lack of knowledge about HCV, the clinician was more likely to prescribe DAAs. This finding is in contrast to the majority of results assessing infectious disease stigma that indicate that stigma may lead patients to fail to seek treatment (Dolezal and Lyons, 2017). The discrepancy in findings may be because the clinicians in the present study were considering patients who had already connected to care rather than patients who had failed to seek care. The implication, then, is that enabling and stewarding a connection with care for this population may lead to a high likelihood of treatment. Also, we think that clinicians who understand patients' stigma as a potential barrier to HCV treatment are more likely to help the patients under their care overcome stigma as a barrier. Despite the aforementioned concerns, a high proportion of the clinicians surveyed in the present study expressed willingness to treat patients experiencing SUD. Before DAAs became available, a Canadian study reported that <20 % of HCV clinicians were likely to provide treatment to current PWID using a needle exchange program, and 90 % of HCV clinicians were likely to provide treatment to former PWID (Myles et al., 2011). In the present study, 64 % of the surveyed clinicians were likely to provide DAA treatment to current PWID, and 97 % to former PWID. The increases in the willingness to treat PWID may be attributable to the higher sustained virologic response rates achieved with DAAs compared with interferon-containing treatments among PWID (Grebely et al., 2015;Dore et al., 2016;Lalezari et al., 2015;Grebely et al., 2018;Butner et al., 2017;Ottman et al., 2019). This finding is also consistent with that of studies conducted by Marshall et al. (Marshall et al., 2020;Higashi et al., 2020) in which surveyed clinicians expressed a moral responsibility to do "the right thing" when treating patients with HCV infection and SUD.
Table 6. Multivariable regression analysis of the likelihood of prescribing direct-acting antivirals to treat HCV-infected patients with substance use disorder, using three factors and a composite score of clinician preparedness and action items (n = 93).
Table 7. Multivariable regression analysis using composite scores of the barriers model and the clinician preparedness and actions model to predict the likelihood of prescribing direct-acting antivirals to treat HCV-infected patients with substance use disorder (n = 92).
Although most clinicians responded that the COVID-19 pandemic had affected their practices in treating patients with HCV moderately to not at all, several previous studies have reported that during the pandemic, HCV antibody testing volume decreased, ribonucleic acid-positive results fell, and prescriptions for HCV treatment were reduced compared with previous years (Kaufman et al., 2021). The present study has several strengths. We evaluated barriers and perceptions of a nationwide sample of gastroenterology and hepatology clinicians regarding DAA treatment for patients with both HCV and SUD. Our investigation included the development of conceptual frameworks of those barriers and perceptions that are useful in describing clinicians' DAA prescribing choices. Such an investigation is also part of the validation of the instrument used in this study, which showed good face validity, and the reliability of the factors was good. The present survey study contributes to the urgent need to advance the understanding of clinician perceptions and barriers to treating patients with both HCV and SUD. Findings from this study may be used as a basis for future planning and development of interventions to improve DAA treatment access among patients with HCV and SUD. For example, educational programs for patients and clinicians should focus more on patient-related barriers. Limitations First, the generalizability of our findings may be limited because a large proportion of the clinicians surveyed were affiliated with academic institutions. Second, our findings were representative of a convenience sample of US clinicians; it is possible that clinicians who were highly involved in HCV-related care selectively completed the survey, resulting in a nonresponse bias, as participation was voluntary. Response bias inherent to survey design also cannot be excluded. Third, the response rate was lower than desired, although it was not unusually low for this type of data collection in this population (Tinsley et al., 2013) and our a priori required sample size was successfully met. Fourth, we assessed clinician self-report of barriers, beliefs, and practice, which may or may not correlate with real-life practices. Conclusion Findings from this survey study indicated that clinicians engaging in HCV care perceived patient-related barriers (e.g., failure to keep appointments) and prior authorization requirements to be significant problematic barriers to treating patients with HCV and SUD. The findings also underscore the importance of clinicians' beliefs and comfort levels toward patients with HCV and SUD in the reported likelihood of prescribing DAAs. Despite such challenges, this study highlighted the willingness of HCV clinicians to provide DAA treatment to current or former PWID infected with HCV.
Multidimensional strategies, including regularly updated education regarding SUDs for health care professionals offering HCV care, adoption of multidisciplinary teams with case management services, and less restrictive prior authorization requirements, would enhance treatment access among PWID.

Funding

This work was funded by the National Institute on Drug Abuse (K01DA045618) and the National Institute on Alcohol Abuse and Alcoholism (U24AA022002). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
NF-κB fingerprinting reveals heterogeneous NF-κB composition in diffuse large B-cell lymphoma

Introduction

Improving treatments for Diffuse Large B-Cell Lymphoma (DLBCL) is challenged by the vast heterogeneity of the disease. Nuclear factor-κB (NF-κB) is frequently aberrantly activated in DLBCL. Transcriptionally active NF-κB is a dimer containing either RelA, RelB or cRel, but the variability in the composition of NF-κB between and within DLBCL cell populations is not known.

Results

Here we describe a new flow cytometry-based analysis technique termed "NF-κB fingerprinting" and demonstrate its applicability to DLBCL cell lines, DLBCL core-needle biopsy samples, and healthy donor blood samples. We find each of these cell populations has a unique NF-κB fingerprint and that widely used cell-of-origin classifications are inadequate to capture NF-κB heterogeneity in DLBCL. Computational modeling predicts that RelA is a key determinant of response to microenvironmental stimuli, and we experimentally identify substantial variability in RelA between and within ABC-DLBCL cell lines. We find that when we incorporate NF-κB fingerprints and mutational information into computational models we can predict how heterogeneous DLBCL cell populations respond to microenvironmental stimuli, and we validate these predictions experimentally.

Discussion

Our results show that the composition of NF-κB is highly heterogeneous in DLBCL and predictive of how DLBCL cells will respond to microenvironmental stimuli. We find that commonly occurring mutations in the NF-κB signaling pathway reduce DLBCL's response to microenvironmental stimuli. NF-κB fingerprinting is a widely applicable analysis technique to quantify NF-κB heterogeneity in B cell malignancies that reveals functionally significant differences in NF-κB composition within and between cell populations.

Figure S4. NF-κB activation state cannot be determined from NF-κB fingerprints. Computationally simulated NF-κB fingerprints in six cell population-specific computational simulations informed by experimental NF-κB fingerprinting (Figure 4B). Simulations are the same here as in Figure 4C, except for the U2932 cell line, where basal IKK activity was increased 100x and the expression of RelA and RelB was increased to recapitulate the experimental fingerprint (Figure 4B). 1,000 cells were simulated in each cell population (6,000 simulations in total), with cell-to-cell variability incorporated as described previously (2); cell density is indicated with a contour plot and each cell population is shown in a distinct color.

Figure S5. Computational modeling of DLBCL, including receptor-proximal signaling. A) Schematic of the computational model constructed by combining existing models of TLR signaling (3), BCR signaling (4), and NF-κB/IκB regulation (5). All models are run as published, with active IKK species summed from the BCR and TLR models to determine the active IKK input curve to the NF-κB model. Mutations present in DLBCL are indicated in purple. Details of the combinatorial complexity of NF-κB dimer and inhibitor interactions are omitted.

Supplemental Modeling Methodology

All modeling files are available at https://github.com/SiFTW/NFkBModel. This repository includes the Jupyter notebooks for running the models and producing the figures.

Software and versions

Computational simulations were performed using the free DifferentialEquations.jl package in the free Julia programming language (version 1.6.2).
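As an illustration of how such simulations can be set up with the package named above, the following minimal sketch runs a toy two-species system (not the published NF-κB model; all species names and parameter values are invented) through the two-phase protocol described in the next section: a long unstimulated simulation to reach steady state, whose endpoint then seeds a stimulated time course beginning at t = 0.

```julia
using DifferentialEquations

# Toy two-species module: active species x driven by an IKK-like input,
# replenished from an inactive pool y that x itself induces.
function rhs!(du, u, p, t)
    x, y = u
    ikk, ksyn, kdeg = p
    du[1] = ikk * y - kdeg * x      # activation of x at the expense of y
    du[2] = ksyn * x - ikk * y      # x induces synthesis of y
end

u0 = [0.1, 1.0]
basal = (0.01, 0.05, 0.05)

# Phase 1: long unstimulated run (cf. the 100,000-minute pre-simulation)
# to obtain a steady state.
ss = solve(ODEProblem(rhs!, u0, (0.0, 100_000.0), basal), Tsit5();
           abstol = 1e-8, reltol = 1e-6)

# Phase 2: time course from the steady-state endpoint, with stimulation
# at t = 0 modeled as elevated IKK-like activity.
stim = (1.0, 0.05, 0.05)
tc = solve(ODEProblem(rhs!, ss.u[end], (0.0, 360.0), stim), Tsit5())
```

In the full models the same pattern applies, with the stimulus entering as elevated NEMO-IKK activity at the start of the time-course phase.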
The version number for each package used in this study is provided below:

Generating models from reactions, parameters and rate laws

Computational models were encoded across three .csv files: reactions.csv, parameters.csv, and rateLaws.csv (available on GitHub: https://github.com/SiFTW/NFkBModel). All parameters in all simulations were kept consistent with published parameters (5), with the exception of the cell line-specific parameterizations defined below. Bespoke Python 3 code was used to assemble these files into a single Julia (.jl) file of equations, written into a function compatible with the DifferentialEquations.jl package. This Python code is available on GitHub (https://github.com/SiFTW/CSV2JuliaDiffEq). Two different collections of these CSV files were used in this manuscript: one defining the NF-κB model, and one defining the combined TLR, BCR and NF-κB signaling model. To assemble each of the two models, CSV2Julia was run with the appropriate model definition CSV files as arguments.

Solving models

Simulation length, initial conditions, and the algorithms and settings (such as tolerances) used in generating solutions are provided in the Jupyter notebooks (https://github.com/SiFTW/NFkBModel). The steady state of each simulation was obtained through an extended simulation from the initial conditions (100,000 minutes), and no species was found to be substantially changing by this time point. The endpoint of this steady-state simulation was used as the initial conditions for the time-course phase. For conditions where stimulation was required, this was added to the model at the start of the time course (t = 0).

Cell-to-cell variability

Cell-to-cell variability was approximated (assuming pre-existing differences in expression and degradation rates) as described previously (2). Table S1 shows the parameters that are distributed within the NF-κB model. To produce the simulations in Figure 1C-D, the rates of transcription for RelA, cRel, and RelB were multiplied by 10, producing the increased-RelA, increased-cRel, and increased-RelB models respectively. Each of these models was simulated to steady state, followed by a time-course response to increased canonical stimulation beginning at t = 0. Other than the canonical stimulation, simulated as an increase in NEMO-IKK, all parameters were kept consistent between the steady-state and time-course phases.

Models from gene expression data

In order to produce cell line-specific models from gene expression data, the parameters for the rates of transcription of RelA, cRel, RelB, p50, and p100 were adjusted. U2932- and RIVA-specific models were created by adjusting these parameters. Gene expression data was downloaded from the Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE103934) (6). The gene expression (in log10 counts per million reads) was standardized per gene, such that the expression of a particular gene across all cell lines has mean 0 and standard deviation 1 (a z-distribution). The default parameter for expression of each gene was multiplied by 10^z-score. As such, a gene with average expression would not have its expression scaled (10^0 = 1), while a gene with expression 1 standard deviation higher than the average would have 10-fold higher expression (10^1 = 10). The result was two models, adjusted from the published B-cell model, incorporating cell line-specific gene expression.
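As a sketch of the scaling rule just described, the hypothetical snippet below standardizes log10 counts-per-million values for one gene across a panel of cell lines and multiplies a default transcription-rate parameter by 10^z; the cell line values and the default rate are invented for illustration.

```julia
using Statistics

# log10 counts-per-million for one gene across a panel of cell lines
# (values and line names are invented for illustration).
log_cpm = Dict("U2932" => 3.2, "RIVA" => 2.1, "HBL1" => 2.6, "OCILY3" => 2.5)

vals = collect(values(log_cpm))
mu, sd = mean(vals), std(vals)

default_rate = 1e-3   # illustrative default transcription-rate parameter

# Scale the default rate by 10^z for each cell line: a gene at the panel
# mean keeps the default rate (10^0 = 1); one standard deviation above
# the mean gives a 10-fold higher rate (10^1 = 10).
scaled = Dict(line => default_rate * 10.0^((v - mu) / sd)
              for (line, v) in log_cpm)
```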
Models recapitulating NF-κB fingerprints

To fit the NF-κB model to flow cytometry data from the NF-κB fingerprints, RelA and RelB transcription rates were manually adjusted (Figure 4). In addition to the models in which only RelA and RelB were adjusted, cell line-specific models were created with elevated basal NEMO-IKK activity by multiplying the level of activity during the steady-state phase 100-fold. As this resulted in changes to the levels of RelA and RelB, the expression parameters for these subunits were adjusted to compensate for the increase in NEMO activity and recapitulate the NF-κB fingerprints (Figure S4).

Creating a model combining NF-κB, TLR and BCR signaling

CSV files encoding the reactions, rate laws, and parameters for the NF-κB model (above) were combined with files encoding published TLR and BCR models (3, 4). An additional linking module was defined as linking reactions, linking parameters, and linking rateLaws (see the moduleDefinitionFiles
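The linking idea described above, in which active IKK species from upstream modules are summed to form the input to the NF-κB model, can be sketched as follows; this toy system and its parameters are illustrative assumptions, not the published models.

```julia
using DifferentialEquations

# Three-species toy system: IKK pools activated by TLR-like and BCR-like
# inputs are summed to drive a downstream NF-kB-like readout.
function linked!(du, u, p, t)
    ikk_tlr, ikk_bcr, nfkb = u
    k_tlr, k_bcr, kdeact, kact, kexp = p
    du[1] = k_tlr - kdeact * ikk_tlr        # TLR-driven IKK activation
    du[2] = k_bcr - kdeact * ikk_bcr        # BCR-driven IKK activation
    total_ikk = ikk_tlr + ikk_bcr           # linking step: summed active IKK
    du[3] = kact * total_ikk - kexp * nfkb  # NF-kB driven by total IKK
end

prob = ODEProblem(linked!, [0.0, 0.0, 0.0], (0.0, 720.0),
                  (0.05, 0.02, 0.1, 0.5, 0.2))
sol = solve(prob, Tsit5())
```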
ROLE OF LEADERS' POSITIVE COMMUNICATION IN FACILITATING CHANGE

Coping with unexpected and unprecedented challenges, particularly in managing change, is part of a leader's function. Change often presents problems and tensions between the parties involved, which can derail the achievement of their objectives. For change to be successful, leaders need to build morale, unify individual and departmental aspirations, and positively influence such change. This study explores positive communication models that can facilitate leaders in managing change. By reviewing the literature on positive communication in the area of the positive organization, particularly the integrative approach and constructive interaction, this study identifies ways of communicating that can support effective change agents while reducing the resistance of individual change targets. This work reveals that the constructive and integrative dimensions of positive communication may facilitate the change agent in becoming more internally directed and purpose oriented. On the other hand, questioning and discovery emphasize the affective aspects, lessen resistance, and make change targets more open and eager to collaborate.

INTRODUCTION

Communication is the key to the functioning of leadership. In the positive leadership model, where the leader creates a positive work climate and extraordinary performance (Cameron, 2008), positive communication is considered to be an essential dimension. Here, positive communication is defined as interaction that is dominated by affirmative and supportive communication in the organization. Leaders or managers express more appreciation, support, approval, and compliments, while negative expressions, critique, disappointment, and dislike are minimal. This approach has been found to positively affect organizational performance (Losada & Heaphy, 2004). In a more detailed model, Browning, Morris, and Fee (2011) offer two main dimensions of positive communication, namely integrative and constructive. Both of these dimensions can improve how employees interpret situations and increase the effectiveness of working in teams. Furthermore, Mirivel (2014) offers six more detailed strategies: greeting, asking, complimenting, disclosing, encouraging, and deep listening.

Positive communication can also be related to interactions that emphasize the strengths and positive attributes of the other person. Roberts et al. (2005) found that a positive response makes a person feel more valued. Over time, this approach strengthens the relationship, creates more cohesiveness, and encourages the parties to support each other. The abovementioned approaches are likely to play a role in an organization's change initiatives. Communication from the leader is essential for leading employees to change (Boyatzis et al., 2019). In a relatively new approach, namely positive change, a leader is expected to inspire others to change according to their own wishes and also to create a high level of performance. Positive change is the leader's effort to inspire themselves and produce significant change and results (R. E. Quinn & Quinn, 2009). These positive communication practices have also been found to foster positive relationships and establish a positive climate (Cameron, 2013). Furthermore, positive communication may also help managers create the positive images they wish to project.
Amir and Wijaya (2022) recently offered a model in which positive communication is considered to influence a person's personal branding strategy. According to them, positive communication can help reinforce the personal branding strategy launched and create more opportunities for it to be accepted by the audience.

The change process needs a particular communication strategy. For example, Boyatzis et al. (2019) suggest positive communication in a coaching environment, where leaders pay attention to phrases that inspire people to change. However, studies that specifically address the role of positive communication in positive change have not been conducted, and leaders often do not know what to do to communicate more positively and make an impact in the long term. This study examines how positive communication can facilitate leaders in carrying out positive changes. It evaluates how the dimensions offered by the literature on positive communication can make it easier for leaders to carry out positive changes and engage individual targets of change in ways that are less prone to resistance. Leaders need a simple, empirical, comprehensive model that they can draw on as a compass for making communicative decisions and creating initiatives that will foster a positive organization.

The remainder of the paper is organized as follows. First, the authors review the model of positive communication in an organizational context. Then, the constructive and integrative elements in positive communication are detailed, and a demonstration of the orientations and practices ensues. The study and explanation of positive change follow. Thereafter appear the analysis and discussion of the possible relationship between elements and practices of positive communication and the dimensions of positive change.

POSITIVE COMMUNICATION IN ORGANIZATIONAL CONTEXT

As a general term in communication, "positive" can be said to describe interaction that has good features, such as being pleasant, polite, assertive, empathetic, or efficient (Browning et al., 2011). Cameron (2008) suggests that positive communication involves communicating in a way in which supportive language dominates negative and critical language. Positive expressions, such as support, approval, and compliments, outweigh terms of disapproval, cynicism, or disparagement. Cameron posits that this communication approach strengthens positive leadership elements, such as positive relationships, and is more conducive to strengthening a positive climate and facilitating the positive meaning of work and the organization (Cameron, 2008). In addition to Cameron's concept of positive communication as an element of positive leadership, the studies of Mirivel (2014) and of Browning, Morris, and Fee (2011) are often referred to when defining the concept of positive communication. The sections below detail Mirivel's model, which suggests six strategies, and the following section reviews Browning and colleagues' model.

SIX STRATEGIES TO INFLUENCE AND INSPIRE

Drawing from interpersonal communication and language and social interaction studies, Mirivel (2014) proposes that positive communication is any verbal and nonverbal behavior that functions positively in the course of human interaction. Mirivel proposes six strategies that leaders can use to influence as well as inspire their teams.
Leaders need to model appropriate communication in greeting others, initiate contact with various stakeholders within, across, and beyond the organizational hierarchy, and create human connection. The central behavior to master is inviting interaction and dialogue, all of which can begin with the simple act of greeting.

The second principle of positive communication is to use powerful questions to flip the script on human interaction and engage people in the process of discovery. Leaders can do so by using open-ended questions to engage stakeholders, disrupt dysfunctional and monotonous meetings, and place themselves in a constant state of discovery.

To create a positive climate, managers can use positive communication to affect the people around them in a positive way and help employees thrive and be creative. Emphasizing affection, the third strategy, can be conducted through complimenting and the act of building people up. Managers can capitalize on strengths and opportunities rather than weaknesses and deficiencies. In this process, leaders will learn to create a growth-mindset culture and inspire an increased sense of purpose.

The fourth strategy involves deepening the relationship between parties. When managers communicate with authenticity, genuineness, and a spirit of transparency, the conversation will be more open. Particularly when the situation is challenging, disclosing critical information and being transparent is critical. Managers can consistently and intentionally deepen connections with stakeholders, respond to challenges effectively, and foster a context of openness and transparency.

Communicating to encourage is the fifth strategy to inspire and influence. By doing so, managers can create extraordinary leadership moments that provide memorable, transformative, and meaningful experiences for their teams. Because people need direction and a sense of impact, the process of encouraging provides the opportunity for managers to leave their legacy.

The sixth strategy involves the manager's need to create an all-inclusive environment to become effective. By listening more deeply and transcending the perceived differences that exist between people and groups, managers can create dialogic moments that strengthen the quality of the everyday interactions between them and their team.

Integrative Elements

Browning, Morris, and Fee (2011) offer a different model, although the implications are similar, namely improving how employees interpret situations and increasing the effectiveness of teamwork. According to these researchers, there are two major elements involved, namely integrative and constructive. The integrative element is concerned with bringing together differences and creating a unified perspective. Within this principle are inclusiveness, respectfulness, and supportiveness. Inclusiveness suggests that the parties involved respect one another's aspirations while maintaining relevance and coherence. The parties also accommodate differences in point of view, culture, and experience. Gibbs (2009) suggests a similar concept, in which unity produces strength. The principle of respectfulness, in turn, is carried out by the parties under an assumption of trust and honesty. Browning et al. (2011) believe that this method can make people motivated to achieve their goals, effective in making decisions, and able to avoid mismanagement.
Supportiveness means wanting to facilitate others' success. Employees who work under pressure will feel that a supportive attitude is very important, because it can generate energy and reaffirm one's strengths. Grolleau et al. (2013), who study the factors that increase happiness at work, find that being supportive plays a large role. Supportiveness also keeps emotions steady and often makes other people comfortable in communicating. When the situation is comfortable, the parties tend to freely raise problems, which is useful for the organization. Inclusiveness, respectfulness, and supportiveness may help employees collaborate or work in a team that values and appreciates their opinions, including giving them the sense that help is available when required.

Constructive interaction

While the integrative element suggests unity, the constructive interaction element suggests the parties' desire to make things better. The orientation in communicating is contributing, which involves three mechanisms: solution-focused, future-oriented, and collaborative communication. Solution-focused communication talks about improvement and explores existing resources. It can make the parties more confident and able to see the available possibilities, and it can generate a sense of optimism. Future orientation allows parties to link their visions for the future. The term "shadow of the future" is relevant here, in that practicing employees relate their daily activities to the long-term goals of the organization. In an organizational context, this becomes important because togetherness is needed in carrying out the mission. Collaborative interaction emphasizes what is relevant, informative, truthful, and appropriate in communication. Collaboration means the parties contribute to understanding the conversation's purpose. Stewart (2009) coined the term "nexting" to characterize the ability to convey what is essential and helpful in a conversation. As the parties see the opportunity to improve the situation, a better direction emerges in which people display a supportive orientation rather than an abandoning one. Furthermore, constructive collaboration tolerates error as normal, with the parties involved attempting to make sense of it. Noticing mistakes and suggesting improvements will keep the organization on the right track. Therefore, participants may attempt to align their aspirations with the expectations of others by questioning or offering something to gain a better outcome.

THE CHALLENGE OF CHANGE

The challenge most often discussed in organizational change is reluctance or resistance to change, particularly when changes are considered only top-down initiatives. Regardless of the cause or reason for the change (shifts in the market, competition, or the economy; changes in regulations; low performance; or the arrival of new technology or machines), the process is typically the same. The organization starts with the manager and a plan, the relevant unit makes a formal announcement, and then comes a reaction that managers generally expect. The main reason employees are reluctant is most frequently that what the change plan involves is typically bad for them: distraction, discomfort, uncertainty, or loss of the value of the skills they have (Lewis, 2016).
Such a plan interferes with existing relationships, whether with clients or colleagues. Similarly, there is the loss of a privilege initially considered a basic need, such as an office space or a workplace. Because work also means learning, employees typically also factor in the loss of their learning curve.

POSITIVE CHANGE PERSPECTIVE

Positive change refers to the theoretical assumption in the field of positive organizational scholarship (POS) that all living systems have a tendency toward positive energy and away from negative energy (Cameron, 2012). Following this approach, the positive environment created by positive interaction and communication can engender positive energy and life-giving resourcefulness. When positive practices are institutionalized in organizations, including providing compassionate support for employees, forgiving mistakes and avoiding blame, fostering meaningfulness of work, expressing frequent gratitude, showing kindness, and caring for colleagues, organizations perform at significantly higher levels on desired outcomes (Bright et al., 2006). Positive practices produce significant organizational change in a positive direction. Drawing from Quinn and Well's (2012) work, this study suggests five dimensions in which change agents have important roles, compared with the conventional approach that appears most frequently in the literature.

Externally open

The positive perspective rests on the assumption that changes caused by oneself are far more important than those caused by others. Two people can engage in precisely the same behavior and receive highly different reactions. With a positive perspective, there is a shift from telling people what to do to showing people how to be, with a focus on moral power (Weick & Quinn, 1999). When the change agent becomes more purposive, authentic, empathetic, and open, that agent improves the conditions for being a positive influence (R. E. Quinn & Quinn, 2009).

Secondly, a purpose-centered approach attends to the ultimate goal that has been set. The eventual goal addressed here is something extraordinary because it gives aspiration to oneself and to others. The traditional approach, by contrast, tends to be comfort centered, namely doing what was most frequently done in the past. When faced with things that differ from routine, people are overwhelmed by the gap between expectations and reality; being or doing something that may be thought of as deviant is often considered a threat, and maintaining the status quo becomes the norm.

In the next dimension, positive change agents tolerate challenges and disruptions and continually clarify their higher purpose, helping others do the same. In this way, they are transformative, showing purpose and belief, articulating possibilities for a better future, and encouraging employees to think differently. By pursuing a higher moral purpose and transcending traditional norms, these individuals inspire creativity, innovation, and positive deviance.

Positive change agents do not assume that the environment is a barrier to almost all of their behavior. The assumption is that macro affects micro, just as social norms affect individuals. In behaving, positive change agents are more internally directed (i.e., self-regulating when moving towards goals), and their behavior is consistent with established values. These agents are the authors of a self in which emotions, values, and actions match (Rogers, 1975).
Internally directed behavior can produce an upward spiral in which self-concordant motivation increases, which increases the likelihood that individuals will achieve their personally valued goals and therefore promotes satisfaction and further internally directed behavior (Sheldon & Houser-Marko, 2001).

Being other-focused, instead of self-focused, is another characteristic that differentiates positive change agents from conventional ones. The positive change literature highlights exceptional individuals: change agents who transcend self-interest. They prioritize the collective good ahead of their own personal interests and are willing to sacrifice themselves to help the group accomplish its goals. Such agents also tend to adopt a more altruistic and integrated perspective. Furthermore, research on compassion emphasizes that the capacity for caring and helping is often driven by more than just self-interest.

Positive change agents are particularly open to input and views on change initiatives from others. People under conventional assumptions tend to cling to their current preferences and approaches and resist admitting that they are on the wrong path, because they assume that they are highly dependent on factors beyond their control (Dweck, 2008). These five perspectives can be influenced by the dimensions and practices of positive communication, as shown in the next sub-section.

The integrative dimension in Browning and co-workers' model is particularly helpful in producing the purpose-centered and other-focused dimensions, because the unifying element of the integrative perspective makes it easier for change agents to convince change targets that they are part of a wider effort. The practice of inclusive communication is among the most necessary, because change targets need to be convinced that they are part of an overall effort. This approach emphasizes that they are significant in the change efforts being carried out. On the other hand, they are also reminded that togetherness is important in change initiatives and that the success of change is not only a matter of personal interest or maintaining one's safe zone. Meanwhile, respectfulness will make the target of change feel that their feelings and worries are recognized. However, changing the situation is not always easy. The supportive aspect has a more pronounced role in facilitating change targets to be ready to anticipate the impact of change and to be skilled at doing new things. This way of communicating also reduces the change target's concerns about his or her competencies.

POSITIVE COMMUNICATION IN FACILITATING POSITIVE CHANGE

Meanwhile, the constructive dimension, particularly its solution-focused and future-oriented aspects, will also facilitate change targets' acceptance of change initiatives. Change has side effects that are often a problem. Being solution-focused, change agents build a sense of trust and confidence that possibilities are available, and they become optimistic. What is prepared in welcoming change becomes the capital for achieving something better in the future. Mirivel's model also explains that the questioning or discovery process helps the target of change find reasons to change that do not involve orders. In addition, the strategy of compliments and providing a sense of affection makes them feel that their actions are appropriate. This strategy also helps reduce feelings of isolation, worry, or anxiety.
Similarly, encouragement is necessary to ensure that the target of change stays focused in accordance with their own values, remaining internally directed.

CONCLUSION

Managing organizational change demands conducive interaction between leaders, agents of change, and the employees who are the targets of change. At the same time, change always brings challenges and difficulties for the parties involved, and a positive change approach can be an option to overcome them. The application of positive communication by leaders and change agents can facilitate the management of positive change. Integrative and constructive aspects can play a role in helping the target of change understand the importance of change arising from oneself, remain internally directed, and find the ultimate goal as a common goal. The questioning and discovery model and the emphasis
Fine structure of the eggs of blowflies Aldrichina grahami and Chrysomya pacifica (Diptera: Calliphoridae)

We report here the fine structure of the eggs of blowflies Aldrichina grahami (Aldrich) and Chrysomya pacifica Kurahashi. For A. grahami, the plastron is wide and extends to almost the entire length of the eggs. The plastron near the micropyle is truncated. The polygonal patterns of chorionic sculpture bear a distinct swollen boundary. Regarding C. pacifica, the plastron is narrow and extends to almost the entire length of the eggs. The plastron near the micropyle bifurcates to a Y-shape, but the arms of the 'Y' are short. The information presented herein provides some distinctive features to differentiate among eggs of blowfly species.

Corresponding author: Kom Sukontason, Department of Parasitology, Faculty of Medicine, Chiang Mai University, Chiang Mai 50200, Thailand. Telephone: (66-53) 945342, Fax: (66-53) 217144. E-mail: ksukonta@mail.med.cmu.ac.th

Received: January 16, 2004. In Revised Form: June 15, 2004. Accepted: June 23, 2004.

Blowflies are medically important, and their larvae have occasionally been reported as the cause of myiasis in humans and animals (Zumpt, 1965). Today, however, they are more important in the area of forensics. Blowfly specimens (eggs, larvae, or pupae) collected from human corpses are used as entomological evidence in forensic investigations, not only to estimate the postmortem interval but also to analyze any toxic substance involved in the cause of death (e.g., Smith, 1986; Lord, 1990; Goff, 2000; Greenberg and Kunich, 2002). Identification of fly specimens collected from a corpse is primarily needed for use in forensic investigations. The blowfly eggs of many genera are of forensic importance and have been studied using scanning electron microscopy (SEM) in many parts of the world (Kitching, 1976; Greenberg and Szyska, 1984; Erzinclioglu, 1989; Liu and Greenberg, 1989; Greenberg and Singh, 1995; Greenberg and Kunich, 2002). We describe herein the eggs of two species of blowflies [Aldrichina grahami (Aldrich) and Chrysomya pacifica Kurahashi] which had not been previously investigated, to provide a greater database for blowfly identification. In view of medical importance, A. grahami is a forensically relevant species, due to its presence in pig carcasses (Ma et al., 1997). Chrysomya pacifica was previously identified as Chrysomya megacephala (Fabricius) and was only recently separated as C. pacifica; therefore its biological information is very limited. However, other blowflies in the genus Chrysomya, particularly C. megacephala and Chrysomya rufifacies (Macquart), are well known for forensic importance. Chrysomya pacifica may be involved in forensic entomology in the future.

The preserved eggs of A. grahami (laboratory-bred Japanese strain) and C. pacifica (laboratory-bred strain from Papua New Guinea) were used in this study. For SEM, the eggs were initially washed several times using normal saline solution to remove any attached debris. The specimens were fixed with 2.5% glutaraldehyde in phosphate buffer solution (PBS) at a pH of 7.4 at 4 °C for 24 hr. They were then rinsed twice with PBS at 10-min intervals. The rinsed eggs were then treated with 1% osmium tetroxide at room temperature for 3 hr for post-fixation. This was followed by rinsing the eggs twice with PBS and dehydrating them with alcohol.
To replace the water in the eggs with alcohol, the eggs were subjected to increasing concentrations of alcohol as follows: 30%, 50%, 70%, 80% and 90%. The eggs remained in each concentration of alcohol for 12 hr during each step of the dehydration process. They were then placed in absolute alcohol for two 12-hr periods, followed by acetone for two 12-hr periods. Finally, the eggs were subjected to critical point drying to complete the dehydration process. In order to view the eggs, they were first attached with double-stick tape to aluminum stubs so they could be coated with gold in a sputter-coating apparatus before being viewed with a JEOL-JSM840A scanning electron microscope. Terminology of the fly eggshell followed Margaritis (1985).

Blowfly eggs are creamy-white, elongated, and taper at the ends. The eggs of A. grahami are 1.38 ± 0.07 mm in length and 0.36 ± 0.06 mm in width (n = 50). The plastron is wide (0.029 ± 0.011 mm in width) and upright (Fig. 1A, star in Fig. 1C) and extends to almost the entire length of the eggs (Fig. 1A). The plastron near the micropyle is truncated, but does not apparently bifurcate (arrow, Fig. 1B). The polygonal patterns of chorionic sculpture bear a distinct swollen boundary (arrow, Fig. 1C). Regarding C. pacifica, the eggs are 1.54 ± 0.14 mm in length and 0.37 ± 0.07 mm in width (n = 50). The plastron is narrow (0.009 ± 0.003 mm in width) and extends almost the entire length of the eggs (Fig. 2A). The plastron near the micropyle bifurcates to a Y-shape, but the arms of the 'Y' are short (arrow, Fig. 2B). The hatching line along the plastron area is smooth and swollen (star, Fig. 2C). The polygonal lines are faint (arrow, Fig. 2C).

Regarding the eggs of A. grahami, the wide and upright plastron is similar to that of blowfly species such as Phaenicia (=Lucilia) caeruleiviridis (Macquart), Phaenicia (=Lucilia) illustris (Meigen), and Lucilia cuprina (Wiedemann) (Greenberg and Kunich, 2002). The swelling at the polygonal pattern boundary of A. grahami resembles that of the blowfly egg of Calliphora alpina (Zetterstedt) (Erzinclioglu, 1989), but is markedly different from the faint boundary of L. cuprina (Sukontason, unpublished data). Thus, the information presented here provides some distinctive features to differentiate among eggs of calliphorine and luciliine blowfly species.

Medically, A. grahami has been claimed to be a necrophagous insect, since the larvae usually feed on putrid animal matter (Shinonaga, 1965). Furthermore, fly specimens of this species have also been found in pig carcasses, the animal model in forensic entomology experiments conducted in both indoor and outdoor areas in Hangzhou, Zhejiang, China (Ma et al., 1997). This could point to A. grahami as a forensically relevant fly species. Kurahashi and Chowanadisai (2001) noted that the adult fly is attracted to decaying matter in towns located >1,000 m above sea level in north Vietnam, but it is also very common in the lowlands of Japan (Kurahashi et al., 1984). This species is widely distributed, existing in Russia (East Siberia, Far East), Japan, Korea, China, Hong Kong, Indonesia, the Philippines, India, Taiwan, North Vietnam, Pakistan, and Australia, and it has been accidentally introduced into the USA (California and Hawaii) (Hall, 1948; Hardy, 1981).

Although C. pacifica is morphologically similar to the Oriental Latrine Fly C. megacephala, it is currently classified as a separate species by one of the authors (H. Kurahashi).
The fine egg structure of C. pacifica in this study provides more evidence to separate the two species. The Y-shape of the plastron near the micropyle of C. pacifica has short arms (arrow, Fig. 2B), while that of C. megacephala is relatively long (Kitching, 1976; Greenberg and Kunich, 2002). However, the narrow and extended plastron area of C. pacifica resembles that of many other species of blowflies (e.g. C. megacephala, C. rufifacies, C. albiceps (Wiedemann), C. chloropyga (Wiedemann), C. varipes (Macquart), C. saffranea (Bigot) and Cochliomyia macellaria (F.)) (Kitching, 1976; Greenberg and Kunich, 2002). Other biological information on C. pacifica is little known at present.

The present study adds to the database of eggs of blowfly species that could be of forensic importance in the future. More research on the bionomics of A. grahami and C. pacifica would then be needed.

Figure 1. Scanning electron micrographs of the eggshell of Aldrichina grahami (Aldrich). (A) Latero-dorsal view, anterior at left (arrow) and wide plastron area (p) extending to almost the entire length of the egg. Bar = 100 μm. (B) Plastron near the micropyle showing a truncated shape without bifurcations (arrow). Bar = 10 μm. (C) Chorionic sculpture showing polygonal patterns bearing a swelling of the boundary (arrow). Plastron (p) is wide and upright (star). Bar = 10 μm.

Figure 2. Scanning electron micrographs of the eggshell of Chrysomya pacifica Kurahashi. (A) Dorsal view, anterior at left (arrow) and narrow plastron area (p). Bar = 100 μm. (B) Plastron (p) near the micropyle showing bifurcation, but arms of the 'Y' are short (arrow). Bar = 10 μm. (C) Smooth and swollen (star) hatching line along the plastron area. Faint polygonal lines (arrow). Bar = 10 μm.
Career guidance and public mental health

Career guidance may have the potential to promote public health by contributing positively both to the prevention of mental health conditions and to population-level well-being. The policy implications of this possibility have received little attention. Career guidance agencies are well placed to reach key target groups. Producing persuasive evidence to support claims of health outcomes from guidance is problematic. Although rare, there are some studies of health impacts from employment-related interventions, suggesting it is not impossible to generate such evidence. There is a need to develop an evidence base addressing well-being outcomes, which may require adopting health-style research methods. There is also a need to open a dialogue with policy makers concerning the potential for career guidance to contribute to public mental health.

Keywords: Career guidance · Policy · Public health

Introduction: the neglect of health issues

Issues of mental well-being have attracted little attention in career guidance policy making. Summarising international reviews by the Organisation for Economic Co-operation and Development (OECD), the European Commission and the World Bank, Watts and Sultana (2004) found that policies across 37 nations had converged on policy goals related to lifelong learning, the labour market and social equity. More recent studies (e.g. CEDEFOP, 2011; European Lifelong Guidance Policy Network, 2010) and international handbooks for policy makers (Hanson, 2006; OECD, 2004) also feature these central objectives. Health outcomes are largely absent from this policy discourse.

This is remarkable in the light of three observations. Firstly, the burden of disease associated with mental health conditions is very great indeed. The World Health Organisation (WHO) suggests that one person in four will develop a mental health related condition in their lifetime, and that these disorders account for 13% of the global burden of disease, assessed by disability-adjusted life years (DALYs) lost across the planet (WHO, 2004a). Their projections suggest that depression may be the leading cause of disability by 2030 (WHO, 2011). Secondly, there is overwhelming evidence that unemployment is associated with psychological distress and increased incidence of mental health conditions (McKee-Ryan, Song, Wanberg, & Kinicki, 2005; Paul & Moser, 2009; Warr, 2007), and that more generally work and worklessness are deeply connected to health (Bambra, 2011). Thirdly, the global downturn in economic activity since 2008 continues to present a significant challenge to labour markets. The International Labour Organisation (ILO, 2012) produces headline figures suggesting that global unemployment remains around 6%, with a stock of 200 million people out of work. An additional 400 million will join them in the next decade unless enough new jobs are created, which seems unlikely with current economic growth rates. This is a cause for concern given the consensus in the literature that the health and well-being of the unemployed is worse than that of the employed (Waddell & Burton, 2006) and the evidence that some causes of mortality, notably suicide, increase during a recession (e.g. Stuckler, Basu, Suhrcke, Coutts, & McKee, 2009). If the career guidance community is to be concerned with issues of work and worklessness, it must concern itself with health also.
At this time it seems particularly unwise to ignore the powerful connections that link employment and education to health and well-being.

Well-being as a goal of public policy

To a great extent, the impetus to consider well-being as a goal of policy has come from positive psychology (e.g. Delle Fave & Massimini, 2005; Pavot & Diener, 2004; Veenhoven, 2004). Diener and Seligman (2004) are perhaps the most persuasive advocates of this position. They point out that in spite of a general trend towards rising prosperity in developed countries in the post-war period, measures of positive well-being over the same period have remained more or less constant (an observation first made by Easterlin, 1974). They also suggest that, notwithstanding the fluctuations associated with the economic cycle, there is a gradual trend towards greater reporting of mental health conditions, notably depression. This underpins the case that there has been an over-emphasis on economic growth and a false assumption that it equates with improvements in well-being. They argue for systematic and consistent use of measurements of well-being by governments. Such measures could steer policy to improve people's lives more accurately than the exclusive use of measures of economic growth, such as gross domestic product (GDP), which has drifted away from measuring societal success in developed nations.

This argument challenges fundamental assumptions in economics concerning the nature of utility and the use of monetary measures as a proxy for well-being. It has successfully influenced some economists (e.g. Layard, 2005; The Economist, 2006). That which is measured is important, and the UK has made steps towards exploring well-being as a goal of policy and finding ways of capturing it (Donovan & Halpern, 2002; Halpern, 2010). A review of methodologies by the Office for National Statistics (ONS, 2011) led to the selection of well-being measures which now form part of the Integrated Household Survey of the UK population. This can be seen in a wider context; most notably, Nicolas Sarkozy initiated the international Stiglitz Commission, exploring an expansion from GDP measurement into measures of quality of life (Stiglitz, Sen, & Fitoussi, 2009).

These developments are not uncontroversial. The increasing focus on well-being in policy has provoked opposition from a variety of perspectives. Critics from a neoliberal perspective are suspicious that it may be used as a justification for economic interventionism (e.g. Booth, 2012). Others are suspicious of emotional management practices, for example Ecclestone and Hayes (2008), who object to the encroachment of therapy into education. Critics also come from within psychotherapy and psychology, with some responding defensively to a new movement (e.g. Lazarus, 2003, who provides a thorough but unbalanced critique). Others are more constructive, such as the existential therapist van Deurzen (2009), who questions an excessive focus on happiness, arguing that experiencing and integrating negative emotions is a healthy process. Even those interested in well-being accept that it can be a politically malleable construct in policy discourse (e.g. Field, 2009) and that some caution is needed in questioning the evidence base before making policy recommendations (Bok, 2010).

Key issues in public mental health

International sources emphasize the importance of mental health promotion and illness prevention. The World Health Organisation (e.g.
WHO, 2002, 2004b; World Health Organisation/World Mental Health Consortium, 2004) and the European Union (Jané-Llopis & Anderson, 2005) make it clear that these should be priority areas for national health policy formation, and that an evidence-based approach should be adopted. However, mental health tends to be under-funded relative to the burden of disease it represents, and mental health promotion in particular is neglected (WHO, 2005).

Scholars such as Keyes (2002) and Huppert (2004, 2005) adopt an epidemiological perspective. They demonstrate that the distribution of illness is related to the prevalence of risk factors in the whole population, not just the sick. This applies also to mental health: it is distributed normally (not bi-modally) in the population. There is no clear boundary between the general population and people diagnosed with anxiety and depression, in terms of symptomatology or the presence of risk factors. The mean number of mental health symptoms in a population is related to the prevalence of clinical disorder: "this implies that explanations for the differing prevalence rates of psychiatric morbidity must be sought in the characteristics of their parent populations; and control measures are unlikely to succeed if they do not involve population-wide changes" (Huppert, 2005, p. 327). As a result, targeted intervention for people with a diagnosed condition is not the only effective approach; whole-population interventions may shift the distribution of risk factors and symptoms, with disproportionately large benefits to the most vulnerable groups. This underpins many public health interventions; for example, a universal requirement to wear seat belts reduces road traffic injury in the sub-set of people involved in accidents. The logic applies to mental as well as physical health. The key point emerging from positive psychology perspectives on public mental health is that there are benefits at a population level from promoting positive well-being, and that whilst targeted interventions for specific groups with a diagnosis may also be necessary, they would not be sufficient to get the desired results.

Social interventions and public mental health

There is ample evidence for the importance of social causation in psychological distress (Friedli, 2009; Murali & Oyebode, 2004; Wilkinson & Pickett, 2010). This raises the question of what can be done: "strategies for preventing distress can be built on a few simple things: education, a fulfilling job, a supportive relationship, and a decent living are to mental health what exercise, diet and not smoking are to physical health" (Mirowsky & Ross, 2003, pp. 273-274). With three out of four elements of this prescription relating to careers, it is justifiable to explore the relevance of career interventions within a wider context of social interventions to promote mental health.

A weakness of public health policy is that responsibility for it may be restricted to health bodies whose resources are dominated by disease management rather than prevention, and who lack authority over the areas of life where the root causes of illness can be found (Stoate & Jones, 2010). The Adelaide Statement on Health in All Policies (WHO/Government of South Australia, 2010) represents an international recognition of the need for 'joined-up' government, and that health needs to be addressed in other sectors of social policy. The interdependence of public policy requires a new approach to governance.
This view now has a substantial scientific underpinning (e.g. Foresight Mental Capital and Wellbeing Project, 2008). Health prevention requires addressing causal factors, and this cannot be done by treatment services: "the drivers of health lie outside the health sector" (Marmot, 1999, cited by WHO, 2004b). Bambra (2011) takes this argument a stage further by identifying employment as the key factor generating the social inequalities which subsequently lead to health inequalities: "paid work, or the lack of it, is the most important determinant of population health and health inequalities in advanced market democracies" (Bambra, 2011, p. ix).

Career guidance as a public mental health intervention

The notion that career guidance can contribute to mental health is not new. Herr (1989) and Loughead (1989) suggest both a preventive and a treatment role for career counselling, and the adoption of behavioural health approaches. Blustein (2008) explicitly suggests there are public policy implications of accepting the centrality of work and career in psychological health. He points to developments in occupational health psychology, vocational rehabilitation and positive psychology as having potential to inform the practice of career advisers and public policy. His contribution is perhaps the most developed in the literature to date. These ideas have not been taken up and developed by others, but there are early signs that this may be beginning to change.

…whilst the funding of careers guidance is commonly justified in terms of its contribution to creating and maintaining an efficiently functioning economy, it could equally be argued that it is justifiable in terms of contributing to the health and well-being of the nation. (Bimrose, 2009, p. 1)

Policy goals need not be mutually exclusive. Conceivably, if there are health gains from guidance interventions, these may result in indirect economic gains, such as reduced demand for health care services, a possibility hinted at by Gillie and Gillie Isenhour (2005) and Mayston (2002). Guidance also has social equity goals, so ameliorating the detrimental health effects of inequality and exclusion may be an aspect of this role.

Levels of prevention

Public health interventions can be classified into three types: primary, secondary, and tertiary prevention. Primary prevention refers to interventions designed to prevent the occurrence of illness in the first place: this is indisputably the most attractive option. Primary prevention could seek to reach the whole population, or target 'at risk' groups. Unemployed people and young people are two key client groups for guidance and also represent obvious targets for primary prevention. There is a vast evidence base demonstrating the detrimental effects of unemployment on mental health (e.g. Paul & Moser, 2009). Career guidance interventions could benefit unemployed groups in two ways. Firstly, by providing support that accelerates reemployment, such as provision of information on opportunities and job search coaching. Secondly, by providing support that helps to inoculate against the negative psychological effects of unemployment by reinforcing self-concept, promoting self-efficacy and teaching strategies to stay socially engaged. Young people (i.e. adolescents and young adults) are not only a key group for guidance services, they are also the age group most likely to experience first onset of mental health conditions.
They are exposed to intense social and transition pressures, while biological maturation is not yet complete. Given the evidence that the developmental effects of unemployment on mental health conditions are potentially serious and long lasting, this area must be of particular interest as a target for primary prevention (Allen, Hetrick, Simmons, & Hickie, 2007; Monroe & Harkness, 2005). The increase in both the complexity of youth transitions, and the length of time taken to achieve independence, may raise exposure to health risks (Furlong, 2002). Career education and guidance interventions could strengthen identity and self-esteem in adolescence, and promote pro-active behaviour. These are valuable outcomes in their own right and are associated with positive mental health. The contribution of careers services may also help prevent or reduce the duration of youth unemployment by promoting engagement in education and training, thus limiting exposure to a key risk to health. This is speculative; relatively little is known about the effectiveness of early mental health intervention, and some evidence is inconclusive (Harden et al., 2001). So this represents a key target for research effort: the potential value of knowledge in this area is great (Allen, Hetrick, Simmons, & Hickie, 2007; Gillham, Shatte, & Freres, 2000). Secondary prevention refers to interventions designed to reduce the duration of a health condition and prevent its recurrence. There is an extensive literature suggesting benefits of re-engaging in work, particularly in relation to recovery from mental health conditions (Coutts, 2007; Crowther, Marshall, Bond, & Huxley, 2001; Seebohm, Grove, & Secker, 2002). Notwithstanding a tendency in the literature to neglect common mental health conditions in favour of severe and enduring conditions (Underwood, Thomas, Williams, & Thieba, 2007), the evidence for secondary prevention, if not conclusive, is at least promising. Tertiary prevention refers to measures to limit the detriments associated with chronic and long-term conditions. An example might be the use of 'condition management programmes' in the UK, which are health interventions delivered in the context of employment initiatives for long-term unemployed adults with health conditions. Cognitive behavioural therapy is the most commonly used psychological treatment in this context (Clayton et al., 2011). Welfare-to-work interventions, even for those with enduring health conditions, tend to focus on rapid placement into employment (Lindsay, McQuaid & Dutton, 2007). Career guidance is not a routine feature of vocational rehabilitation services, but a case could be made that it has a role to play in secondary and tertiary prevention. Its developmental focus on long-term life planning may lead to more sustainable outcomes than typical welfare-to-work services. Alternatively, guidance may provide support to access work, volunteering or learning primarily for therapeutic purposes, so as to gain the health benefits of participating in engaging activities in a social context. This work may benefit from close liaison with mental health services. Thus targeted provision for specific groups with a diagnosis should not be ruled out, but primary prevention for the whole population remains the most important arena for the promotion of well-being. Platform for delivery It is problematic for social interventions to reach the whole population in the way that, for example, fluoridation of water supplies can reduce tooth decay across society.
As a result, the prime target for public mental health interventions is most often school pupils (Levine, Perkins, & Perkins, 2005; Rothi, 2006), as they represent a large 'captive' target group, with near-complete population coverage for an age cohort, and the potential that impacts may have long-lasting or multiplying benefits. This is an obvious domain of activity for career guidance services, at least in those nations that have state delivery structures. Indeed, some career guidance services may achieve better population coverage than schools, by virtue of contact with disaffected young people and non-attenders, an important at-risk group. Another clearly defined population for public mental health intervention is employees in the workplace (Barry & Jenkins, 2007), also a potential domain for the delivery of guidance. Guidance services also reach unemployed adults through specialist agencies, and may be located in public employment services (Sultana & Watts, 2006), hinting at an answer to this important question: While there are structures for disseminating interventions for general health, physical health and mental health, there is currently no structure through which we can promote psychological health and well-being. Where do you develop a psychological health delivery system, especially for people who are unemployed? In other words, whose core business is it to address the psychological effects of unemployment? (Rose & Harris, 2004, p. 301) Career guidance services are unusual in that they are likely to have access to several of the key target populations for public mental health interventions. Also, they lack the clinical culture of health services, which may be stigmatising or send an implicit message of incapacity. As a result they represent a viable platform from which to promote positive mental well-being. Evidence There is a substantial evidence base demonstrating the impact of social and economic factors on mental health, but there is a relative paucity of evidence on the impact of social and economic policy initiatives on mental health, particularly in the field of employment (Candy, Cattell, Clark, & Stansfeld, 2007). Reviews of the literature by Lakey, Mukherjee, and White (2001) and Coutts (2010) highlight the 'evidence void' relating to the health impacts of active labour market policies. Jané-Llopis, Barry, Hosman, and Patel (2005) accept there are substantial gaps in the evidence base and some ineffective interventions, but claim that there is a growing theoretical base and body of empirical evidence for the effectiveness of public mental health measures, with the potential for lasting effects on well-being and additional socio-economic benefits. Social interventions are most likely to have indirect effects on health, so there is not just a lack of evidence in relation to their effectiveness, and a need to build the evidence base, but also substantial methodological challenges in attempting to do so. In health-related policy making, there are hierarchies of evidence, with greater faith placed in research that is perceived to be more robust. Hughes and Gration (2009) apply this kind of thinking to career guidance interventions and present a five-level evidence hierarchy, locating research designs with strong counterfactuals at the higher levels. Large randomised controlled trials (RCTs) carry the most weight in these evidence hierarchies.
Often described as the 'gold standard' for research, RCTs represent the best design for identifying those effects that can be attributed to an intervention. However, the application of methods developed for clinical drug trials to research into social interventions such as counselling remains controversial, and can be criticised for a number of reasons (Timulak, 2008). These include the lack of adequate control groups or placebo equivalents, and the use of samples that are not typical of those found in practice situations. RCTs may have strong internal validity, but may often have weak ecological validity. Furthermore, RCTs usually require standardisation in 'treatment', which is problematic in guidance interventions. In the terminology of health research, career guidance would be defined as a 'complex intervention' because it typically involves multiple components, combined in ways tailored to the individual (Medical Research Council, 2008). Not only do evidence hierarchies privilege RCTs, they also undervalue qualitative research. RCTs may neglect process research illuminating change mechanisms, and build in assumptions in their choice of outcome variables. There is good reason to believe that qualitative evidence has the potential to be highly valuable, particularly when exemplary studies such as Bimrose, Barnes, and Hughes (2008) are considered. Nonetheless, when policy makers look for an empirical basis for action, they must turn to concise and authoritative summaries of the evidence. Meta-analyses and systematic reviews of the literature are particularly influential, most notably Cochrane Collaboration reviews (as described by Higgins & Green, 2008). The inclusion criteria in these reviews are strict, with RCTs given primacy. Such evidence is not currently available in career guidance; most relevant research would not meet the criteria for inclusion, including valuable qualitative research. The narrative approaches preferred by some (e.g. Blustein, Medvide, & Kozan, 2012), irrespective of their merits, do not generate the kind of evidence that influences international public health policy, at least not as isolated studies. Currently, national bodies such as the UK's National Institute for Health and Clinical Excellence (NICE) and international bodies such as the WHO or the European Union demonstrate a preference for RCTs in systematic reviews of the scientific literature in guiding public (mental) health policy (e.g. NICE, 2012; World Health Organisation, 2004a). There is hope this may change in future. Some of these sources, whilst privileging RCTs, do acknowledge their limitations and the potential of other kinds of evidence to make a contribution. There are moves towards developing meta-synthesis techniques in the qualitative health research literature (e.g. Walsh, 2005). Shadish and Myers (2004) discuss the inclusion policy for Campbell Collaboration reviews, a sister organisation to the Cochrane Collaboration, which focuses on social rather than medical interventions. Whilst still giving primacy to randomised trials, they identify a number of reasons to include other research designs in Campbell systematic reviews, to reflect the nature of the social science evidence base. Producing a persuasive evidence base is not inconceivable. Even though there are few systematic reviews addressing social interventions to improve health, some examples are identified by Bambra, Gibson, Sowden, Wright, and Petticrew (2010).
These include Adams, White, Moffatt, Howel, and Mackintosh (2006), on the effects of welfare benefits advice in healthcare settings; Audhoe, Hoving, Sluiter, and Frings-Dresen (2010), on the effects of interventions for the unemployed on work participation and mental distress; and Rueda et al. (2012), on the effects of return to work on health. Research addressing health outcomes of vocational interventions remains rare; more often, employment outcomes are the chosen measure of rehabilitation, even in reviews (e.g. Bambra, Whitehead, & Hamilton, 2005; Crowther et al., 2001). There are interesting examples of large RCTs that attempt to manage the challenges of applying this research design to social interventions. Most notable is the work of the Michigan Prevention Research Center (MPRC), which developed a group-based job search training intervention for unemployed adults. They produced evidence of positive impacts on mental health variables as well as employment outcomes (Price, Vinokur, & Friedland, 2002); these findings have been replicated internationally, notably in Finland (e.g. Vuori, Silvonen, Vinokur, & Price, 2002). This work was developed from the outset with the intention of creating a preventive public mental health intervention (e.g. Price, House, & Gordus, 1985). Their approach represents primary prevention for a mainstream (non-clinical) population of unemployed job seekers, including some at risk of developing conditions such as depression. No equivalent research effort exists for career guidance interventions, in spite of claims for their therapeutic value, supported by accounts from counsellors (e.g. Blustein, 1987; Zunker, 2008). A more detailed discussion of the empirical evidence and causal mechanisms linking career guidance to well-being outcomes is provided by Robertson (2013). Conclusion Although claims have been made for the therapeutic effects of career counselling, few authors, with the notable exception of Blustein (2008), have sought to extend this logic to the wider sphere of public policy. The potential for career guidance interventions to have a positive effect on mental well-being at a population level has been neglected. It is time for health to sit alongside economic development, lifelong learning and social equity goals for guidance in international policy discourse. There are two significant obstacles to be faced in pursuing a well-being agenda in guidance. Firstly, the governance of career services is usually within education or employment structures: organisations that are not tasked with health objectives and have a limited history of contributing to public health initiatives. Secondly, producing a persuasive evidence base to inform public health policy may require going some way towards adopting the methods used in health research, some of which are problematic to translate to a guidance context. Nonetheless, a strong case can be made that career guidance may impact on well-being. It is clearly important to explore three avenues which can be pursued in parallel: initiating a dialogue with policy makers, developing an empirical evidence base, and considering the implications for service design and delivery.
2022-12-13T14:45:16.260Z
2013-07-01T00:00:00.000
{ "year": 2013, "sha1": "244ab684d378114f10e021667adf52974ec28bb2", "oa_license": "CCBY", "oa_url": "https://napier-repository.worktribe.com/preview/378111/Career%20guidance%20public%20mental%20health%20IJEVG%20manuscript%20v4%20June%202013.pdf", "oa_status": "GREEN", "pdf_src": "SpringerNature", "pdf_hash": "244ab684d378114f10e021667adf52974ec28bb2", "s2fieldsofstudy": [ "Medicine", "Psychology", "Political Science" ], "extfieldsofstudy": [] }
243990186
pes2o/s2orc
v3-fos-license
Diffusion State Transitions in Single-Particle Trajectories of MET Receptor Tyrosine Kinase Measured in Live Cells Single-particle tracking enables the analysis of the dynamics of biomolecules in living cells with nanometer spatial and millisecond temporal resolution. This technique reports on the mobility of membrane proteins and is sensitive to the molecular state of a biomolecule and to interactions with other biomolecules. Trajectories describe the mobility of single particles over time and provide information such as the diffusion coefficient and diffusion state. Changes in particle dynamics within single trajectories lead to segmentation, which makes it possible to extract information on transitions between functional states of a biomolecule. Here, mean-squared displacement analysis is developed to classify trajectory segments into immobile, confined diffusing, and freely diffusing states, and to extract the occurrence of transitions between these modes. We applied this analysis to single-particle tracking data of the membrane receptor MET in live cells and analyzed state transitions in single trajectories of the un-activated receptor and the receptor bound to the ligand internalin B. We found that internalin B-bound MET shows an enhancement of transitions from freely and confined diffusing states into the immobile state as compared to un-activated MET. Confined diffusion acts as an intermediate state between immobile and free, as this state is most likely to change the diffusion state in the following segment. This analysis can be readily applied to single-particle tracking data of other membrane receptors and intracellular proteins under various conditions and contribute to the understanding of molecular states and signaling pathways. INTRODUCTION Cells sense their environment through membrane proteins, and extracellular stimuli are translated into intracellular signaling cascades and a cellular response. This process often begins with ligands that bind to membrane receptors, induce receptor oligomerization, and recruit other proteins such as co-receptors. The formation of receptor oligomers and signaling platforms reduces the receptor mobility and changes its diffusion behavior (Stone et al., 2017). Single-particle tracking (SPT) is a method to measure and to reveal subtle changes in the diffusion of membrane receptors in cells and at the molecular level (Manzo and Garcia-Parajo, 2015; Shen et al., 2017). SPT requires low molecular densities, in order to allow single-molecule detection and the assignment of detections to single-protein trajectories. Such low molecular densities can be achieved by substoichiometric labeling, by the introduction of a photoactivatable fluorophore, or by using transiently binding labels that specifically target the membrane protein (Manley et al., 2008; Giannone et al., 2010). SPT provides information on diffusion coefficients and on the type of motion, i.e. free diffusion, spatially confined movement, and immobile particles (Michalet, 2010). It may also occur that a molecule switches between different diffusion states within a single trajectory; such transitions can be analyzed by comparing the experimental dataset to Monte Carlo simulations (Wieser et al., 2008), using hidden Markov models (Persson et al., 2013; Sungkaworn et al., 2017; Liu et al., 2019), analytic diffusion distribution analysis (Vink et al., 2020), local MSD exponent values (Hubicka and Janczura, 2020), and unsupervised Gibbs sampling (Karslake et al., 2021).
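Since all of the analyses below start from the mean squared displacement (MSD) of a trajectory, a minimal Python sketch of a time-averaged MSD computation may be helpful; the function name and the synthetic Brownian trajectory are illustrative only and are not part of the software used in this study.

```python
import numpy as np

def time_averaged_msd(xy, dt, max_lag):
    """Time-averaged MSD of a single 2D trajectory.

    xy      : (N, 2) array of positions in micrometers
    dt      : time between frames in seconds
    max_lag : largest lag (in frames) to evaluate
    Returns (lag_times, msd) as two 1D arrays.
    """
    lags = np.arange(1, max_lag + 1)
    msd = np.empty(len(lags))
    for i, lag in enumerate(lags):
        disp = xy[lag:] - xy[:-lag]          # displacements at this lag
        msd[i] = np.mean(np.sum(disp**2, axis=1))
    return lags * dt, msd

# Example: synthetic 2D Brownian trajectory with D = 0.1 um^2/s, dt = 20 ms
rng = np.random.default_rng(0)
D, dt = 0.1, 0.02
steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(200, 2))
xy = np.cumsum(steps, axis=0)
t, msd = time_averaged_msd(xy, dt, max_lag=4)
print(np.polyfit(t, msd, 1)[0] / 4)          # slope/4 recovers D in 2D
```

The final line uses the 2D relation MSD(Δt) = 4DΔt, the same linear model the paper fits to the first MSD points of each segment.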
Receptor tyrosine kinases (RTKs) constitute a family of membrane receptors comprising 58 different proteins (Lemmon and Schlessinger, 2010). One subfamily is the MET receptor family containing the hepatocyte growth factor receptor, also known as MET. MET was first discovered as an oncogene in 1984 (Cooper et al., 1984). The role of MET together with its physiological ligand hepatocyte growth factor/scatter factor (HGF/SF) is manifold: it is essential in embryogenesis, is involved in growth, and regulates cell migration (Schmidt et al., 1995; Uehara et al., 1995). MET overexpression was found to be of relevance in several cancers and is targeted in cancer therapy (Ichimura et al., 1996; Goyal et al., 2013; Mo and Liu, 2017). Next to its canonical ligand HGF, MET is targeted by the surface protein internalin B (InlB) secreted by the pathogenic bacterium Listeria monocytogenes that causes human listeriosis (Braun et al., 1998). InlB triggers similar cellular responses as HGF/SF and induces bacterial invasion into hepatocytes (Dramsi et al., 1995; Shen et al., 2000; Niemann et al., 2007). Here, we apply a segmentation analysis to single-molecule trajectories of un-activated and InlB-bound MET and extract the diffusion states and transitions between these diffusion states. To follow activated MET, we used the N-terminal internalin domain of InlB (InlB321) that binds to the extracellular domain of MET and induces MET phosphorylation (Banerjee et al., 2004; Niemann et al., 2007; Ferraris et al., 2010; Dietz et al., 2013), and that we used in previous work to measure the diffusion of MET in living HeLa cells with single-particle tracking (Harwardt et al., 2017). We found that MET bound to InlB diffuses slower than resting receptors and that the immobile population increases. This immobilization was assigned to interactions with the actin cytoskeleton as well as to recruitment of MET to endocytosis sites. Using a segmentation approach, we now present an extended analysis of these data by taking into account that single receptors may switch between different diffusive states within single trajectories. For this analysis, single trajectories were divided into segments showing uniform movement. These segments were analyzed separately with regard to their diffusion mode (free, confined, immobile) (Rossier et al., 2012; Harwardt et al., 2017; Orré et al., 2021). In addition, we extracted the transitions between different segments within single trajectories, which report on functional transitions of the MET receptor signaling complex. For MET, we found that upon InlB activation the immobile state becomes more stable and transitions into immobile states occur more often. The confined diffusion state acts as an intermediate state between immobile and free, as this state is most likely to change the diffusion state in the following segment. This straightforward analysis routine can be transferred to SPT data of other biological targets. Data Acquisition The SPT data used within this study, together with experimental details on data acquisition and sample preparation, were previously published (Harwardt et al., 2017). In brief, the universal point accumulation for imaging in nanoscale topography (uPAINT) method (Giannone et al., 2010) was applied to measure the dynamics of the MET receptor in living HeLa cells. For the resting receptor, an ATTO 647N-labeled, non-activating Fab antibody fragment was used.
The ligand-bound state was probed using the InlB321 ligand, site-specifically labeled with ATTO 647N, which was fully functional (Dietz et al., 2013). Imaging was performed in total internal reflection fluorescence (TIRF) mode using an N-STORM microscope (Nikon, Japan). For both un-activated MET and InlB-bound MET, 60 cells were analyzed. Single-Molecule Localization The MET receptor was targeted with fluorescent labels and its position in the cell membrane was determined by analyzing image stacks with the ThunderSTORM plugin (version dev-2016-09-10-b1) (Ovesný et al., 2014) implemented in the image processing program Fiji (Schindelin et al., 2015). Camera settings were adjusted according to the manufacturer's manual and the base level was estimated by averaging the pixel intensity with the shutter closed. Deviations from the ThunderSTORM default settings are the chosen fitting method "maximum likelihood", activated "multi-emitter fitting analysis" with a "maximum number of molecules per fitting region" of 3, and a "limit intensity range" spanning the 2-sigma interval of the photon distribution in log space, extracted from detected emitters with "multi-emitter fitting analysis" disabled. The localizations were filtered by applying "remove duplicates". Single-Particle Tracking Trajectories of MET receptors were obtained by loading single-molecule localization data provided by ThunderSTORM into the swift tracking software (version 0.4.2) (Endesfelder et al., manuscript in prep.). Parameters for swift analysis were determined using the SPTAnalyser software. A detailed description is added to the manual at https://github.com/JohannaRahm/SPTAnalyser. The parameters "diffraction_limit" = 14 nm, "exp_displacement" = 85 nm (Fab)/75 nm (InlB), "p_bleach" = 0.010 (Fab)/0.014 (InlB), and "p_switch" = 0.01 were set globally for all cells. The parameters "exp_noise_rate" and "precision" were calculated individually per cell. swift divides trajectories into segments if the diffusion behavior of the particle changes. Diffusion State Analysis The diffusion state analysis was performed with SPTAnalyser. Diffusion coefficients of individual segments were calculated by optimizing the parameters of a linear diffusion model on the basis of the first four time steps of the mean squared displacement, using the method of least squares (Eq. 1): MSD(Δt) = 4DΔt. Segments with a diffusion coefficient below 0 were discarded. Segments with a minimum length of 20 frames (400 ms) were classified into diffusion states as previously reported (Rossier et al., 2012; Harwardt et al., 2017; Orré et al., 2021). First, the segments were separated into immobile and mobile diffusion by applying a diffusion coefficient threshold D_min. The threshold is derived from the dynamic localization error (Eq. 2), which was calculated for each cell with average values of MSD(0) and diffusion coefficient D (Savin and Doyle, 2005; Michalet, 2010). The third quartile was used to determine D_min (Eq. 3), where n is the number of time steps used in the linear model to extract the diffusion coefficient. All segments with a diffusion coefficient below D_min = 0.0028 μm²/s were classified as immobile. Mobile segments were separated by fitting 60% of the MSD plot with Eq. 4, where r_c is the confined diffusion radius and τ is a time constant. Segments with τ smaller than half the time interval used to compute the MSD (120 ms) are classified as confined diffusion, and values higher than that as free diffusion.
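A compact Python sketch of this classification scheme is given below, reusing the thresholds quoted above (D_min = 0.0028 μm²/s, τ < 120 ms). Since Eq. 4 itself is not reproduced in this text, the confinement model is assumed here to be the circular-confinement approximation MSD(Δt) = (4 r_c²/3)(1 − exp(−Δt/τ)) used in Rossier et al. (2012); the function names are illustrative, not the SPTAnalyser API.

```python
import numpy as np
from scipy.optimize import curve_fit

D_MIN = 0.0028   # um^2/s, immobility threshold quoted in the text
TAU_MAX = 0.120  # s, confinement threshold (half the MSD time interval)

def diffusion_coefficient(lag_times, msd):
    """D from a linear fit MSD = 4*D*t to the first four MSD points (Eq. 1)."""
    slope = np.polyfit(lag_times[:4], msd[:4], 1)[0]
    return slope / 4.0

def confined_model(t, r_c, tau):
    # Assumed form of Eq. 4 (circular confinement, Rossier et al., 2012).
    return (4.0 * r_c**2 / 3.0) * (1.0 - np.exp(-t / tau))

def classify_segment(lag_times, msd):
    """Return 'immobile', 'confined', or 'free' for one segment,
    or None if the fitted diffusion coefficient is negative."""
    D = diffusion_coefficient(lag_times, msd)
    if D < 0:
        return None                      # discarded, as in the text
    if D < D_MIN:
        return "immobile"
    n_fit = max(3, int(0.6 * len(msd)))  # fit 60% of the MSD curve
    (r_c, tau), _ = curve_fit(confined_model, lag_times[:n_fit], msd[:n_fit],
                              p0=(0.1, 0.05), bounds=(0.0, np.inf),
                              max_nfev=10000)
    return "confined" if tau < TAU_MAX else "free"
```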
Transition Counting For each single trajectory in which at least two segments were identified, the transition of the diffusion state between the segments was determined. For the three diffusion states of immobile (i), confined (c), and freely diffusing (f) particles, nine different transition types are possible. Segments with a length of less than 20 frames, or with a negative diffusion coefficient, were not classified. Transitions between a classified and an unclassified segment were neglected. Unclassified segments that occurred between two classified segments, and that had a length of up to 19 frames, were masked, and the transition between the segment before and after the unclassified segment was considered in the analysis. This means that all segments that were shorter than or equal to the mask length of 19 frames were removed from the trajectories, and that transitions between the preceding and the succeeding segment were counted. The mask value was synchronized to the minimum length a segment must exceed to be classified into a diffusion state. Transition counts were normalized per cell and summed to one to compare the occurrences of transition types. Transition counts were also normalized per diffusion state, so that the counts of transition types proceeding from the same diffusion state summed up to one, to compare the occurrences of diffusion states in adjacent segments. Simulations To evaluate the error rate of the diffusion state classification, simulations of single-particle trajectories were performed. For this purpose, the software ermine (Estimate Reaction-rates by Markov-based Investigation of Nanoscopy Experiments) was used to create simulations of trajectories of freely diffusing particles. The probability distribution was defined by the expectation value of the mean squared displacement r within a time step t (Eq. 5). The apparent mean squared displacement r was calculated based on the apparent diffusion coefficient D and the static error ε (Savin and Doyle, 2005) (Eq. 6). In order to match the simulation closely to the experimental data, the following parameters were chosen: for D, the average diffusion coefficient of 0.12 μm²/s of the mobile population (confined and free) was used; t corresponds to the camera integration time of 0.02 s, τ to the time lag between two consecutive frames of 0.02 s, and ε to the average localization error of 29 nm. The error rate of the diffusion state classification model was estimated by classifying the simulated trajectories of freely diffusing particles. Availability The analysis procedure introduced in this work can be applied straightforwardly to other single-particle tracking data. Localizations can be detected with rapidSTORM (Wolter et al., 2012). Extraction of Different Diffusion States Within Single-Molecule Trajectories We developed a data analysis workflow that extracts transitions between diffusion states from single-particle trajectories (Figure 1A, Supplementary Figure S1). We applied this analysis to single-particle tracking data of MET receptors in live HeLa cells recorded using the uPAINT principle (Giannone et al., 2010). For that purpose, MET receptors were either labeled with a monoclonal Fab fragment, which binds to but does not activate the receptor, or with the bacterial ligand InlB, which binds and activates the receptor (Figure 1B). Both ligands were conjugated to the fluorophore ATTO 647N. The positions of the fluorophore labels were measured in live cells using TIRF microscopy and subsequently linked to trajectories (Figure 1A).
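The idea behind the error-rate estimate in the Simulations section above can be reproduced with a few lines of Python, reusing time_averaged_msd and classify_segment from the sketches above. The parameter values (D = 0.12 μm²/s, 20 ms frames, ε = 29 nm) are taken from the text, but this is a simplified stand-in for the ermine software, not its actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
D, dt, eps = 0.12, 0.02, 0.029   # um^2/s, s, um (values quoted in the text)

def simulate_free(n_frames):
    """2D Brownian trajectory with Gaussian static localization noise."""
    steps = rng.normal(scale=np.sqrt(2.0 * D * dt), size=(n_frames, 2))
    return np.cumsum(steps, axis=0) + rng.normal(scale=eps, size=(n_frames, 2))

n_trials, n_confined = 500, 0
for _ in range(n_trials):
    xy = simulate_free(30)                       # a short 30-frame segment
    t, msd = time_averaged_msd(xy, dt, max_lag=len(xy) - 1)
    try:
        label = classify_segment(t, msd)
    except RuntimeError:
        label = "free"                           # non-converged fit: keep free
    n_confined += (label == "confined")
print("free segments misclassified as confined:", n_confined / n_trials)
```

Because short segments carry fewer displacements, the estimated misclassification rate grows as the simulated segment length shrinks, matching the trend reported below.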
Individual trajectories were divided into segments that exhibited uniform motion. Segments were classified as immobile (i), confined (c), and freely diffusing (f) states, and transitions between diffusion states within single trajectories were analyzed. Segments of single trajectories of Fab- and InlB-bound receptors exhibit different properties in terms of their mobility, population of diffusion states, lengths, and confinement radii (Figures 1C-F). Diffusion states (free, confined, immobile) were determined by analyzing the MSD plots of the segments (for details see Methods). The diffusion coefficients of the InlB/MET complexes are significantly smaller compared to Fab/MET for the confined (D_InlB = 0.051 ± 0.003 μm²/s vs. D_Fab = 0.094 ± 0.007 μm²/s) and free (D_InlB = 0.084 ± 0.003 μm²/s vs. D_Fab = 0.134 ± 0.004 μm²/s) populations (Figure 1C). The diffusion coefficients for the immobile populations are smaller than the precision of the method and result from segments below the detection limit of mobility (see Methods). Upon activation with InlB, the population of the freely diffusing particles is reduced and driven towards the immobile state (Figure 1D). Segment lengths are drastically shorter for confined segments compared to the other two diffusion states (Figure 1E). For example, in InlB-treated cells, a segment classified as confined diffusion lasts an average of 0.64 ± 0.02 s, compared to 1.19 ± 0.03 s for immobile and 1.00 ± 0.04 s for free diffusion. InlB-bound receptors generally move in a more confined manner, as the confinement radii calculated for the confined and free populations are smaller compared to un-activated MET (Figure 1F). Interestingly, the confinement radii of the free diffusion state are in the order of magnitude of the cell sizes. To evaluate the accuracy of the classification model, simulated trajectories of freely diffusing particles were classified. An error rate was calculated from freely diffusing particles that were classified as confined diffusion (Supplementary Figure S2). This error rate decreased with increasing trajectory length. In addition, trajectories of freely diffusing particles were simulated with numbers and trajectory lengths corresponding to the distribution of trajectories in the Fab experiments, resulting in an error rate of 15%. This suggests the possibility that trajectories of freely diffusing particles contribute to the confined population and reduce the average trajectory length, as a misclassification is more likely with shorter trajectories. The number of transition events observed for all trajectories of a cell is 184 ± 11 within a measurement period of 20 s. Compared to the average number of 1440 ± 90 trajectories per cell, this number appears relatively small, which is because such events rarely occur within the observed time window of a trajectory (1.36 ± 0.06 s). More transition events occur in longer trajectories (Supplementary Figure S1A). However, 70% of the trajectories do not change their diffusion mode and consist of only one segment (Supplementary Figure S1B). The number of transition counts increases by up to 30% by masking unclassified segments, i.e. segments with a length below the threshold of 20 frames. For this, the transitions of adjacent segments with a defined diffusion state around the masked segment are counted. Without masking, on average 22.2 ± 0.6% of transitions occur between segments with a defined diffusion state. With masking, this value is increased to 71.6 ± 0.7% (Supplementary Figure S1C).
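A minimal Python sketch of the masking-and-counting scheme described in the Transition Counting section is given below; the representation of segments as (state, length) tuples is hypothetical and only serves to illustrate the bookkeeping.

```python
from collections import Counter

MASK_LEN = 19  # segments of up to 19 frames are unclassified and masked

def count_transitions(trajectories):
    """Count diffusion-state transitions within single trajectories.

    trajectories: list of trajectories, each a list of (state, n_frames)
    tuples with state in {'i', 'c', 'f'} or None for unclassified segments.
    """
    counts = Counter()
    for traj in trajectories:
        prev = None
        for state, n_frames in traj:
            if n_frames <= MASK_LEN:
                continue            # masked: bridge over this short segment
            if state is None:
                prev = None         # long unclassified segment: neglect it
                continue
            if prev is not None:
                counts[(prev, state)] += 1
            prev = state
    return counts

# Example with two hypothetical trajectories from one cell
trajs = [[("f", 60), (None, 10), ("c", 30), ("i", 80)],
         [("i", 100), ("i", 40)]]
counts = count_transitions(trajs)   # {('f','c'): 1, ('c','i'): 1, ('i','i'): 1}
total = sum(counts.values())
print({k: v / total for k, v in counts.items()})   # normalized per cell
```

Note how the masked ten-frame segment in the first trajectory does not break the chain: the transition is counted between the classified segments on either side of it, exactly as described above.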
Mostly no significant changes are observed between the relative frequencies of transitions with and without masking (Supplementary Figures S1D,E). Only transitions between immobile and free diffusion benefited slightly less from masking than the other transition types. MET Receptor Activation With InlB Changes Diffusion State Transitions Between Segments Segments of single-molecule trajectories of Fab-bound as well as InlB-bound MET receptors were classified into freely diffusing, confined moving, and immobile particles as described above. Exemplary cells with color-coded segments are shown in Figures 2A,B for resting and activated MET, respectively. In the zoom-ins, only trajectories with at least one transition are displayed. In InlB-treated cells, the number of confined and especially immobile segments increases in comparison to Fab-treated cells, while at the same time the occurrence of freely diffusing particles is significantly lower. In addition, an increased confinement of InlB-bound receptors is visible. These observations are in accordance with the increased fractions of immobile and confined receptors (Figure 1D) and the decreased confinement radius of InlB-bound MET trajectories. Interestingly, when comparing the frequencies of transitions between resting and InlB-bound MET receptors, they mostly differ highly significantly (Supplementary Figure S2). Only the transitions from immobile to free and from confined to confined do not change significantly. The transitions from immobile to immobile segments, from immobile to confined segments, as well as from confined to immobile segments increase for the activated cells. At the same time, transitions into the freely diffusing state become less probable in InlB-activated cells. To visualize the differences between the different diffusion states, we normalized the transitions with regard to the respective diffusion state (Supplementary Figure S3). For both Fab-bound and InlB-bound MET receptor trajectories, it clearly shows that the immobile and the free state are relatively stable diffusion states, which we infer from the high probability that an immobile particle stays immobile in the next segment and a freely diffusing molecule remains freely diffusing (Supplementary Figures S3A,B). The confined state appears to be a more intermediate state: a confined diffusing receptor very likely changes its diffusion state in the next segment, either becoming immobilized or switching to free diffusion. When comparing resting and InlB-activated MET mobility, most transitions change significantly (Supplementary Figure S3C). Transitions towards the immobile state become more likely and transitions to the freely diffusing state less probable in activated cells. The transitions involving the confined diffusing state change less significantly. DISCUSSION We report an analysis method for single-particle tracking data that resolves segments of different diffusional states within single trajectories. The method is sensitive to report segments of free, confined, and immobile states within single trajectories, and transitions between these diffusion states. This allowed us to relate dynamic information on protein mobility to functional states of a protein in a membrane, e.g. the immobilization upon binding of a ligand to a receptor.
This additional information from single-particle tracking data complements the available portfolio of methods for analyzing mobility data of single proteins (Rossier et al., 2012; Calebiro et al., 2013; Ibach et al., 2015; Sungkaworn et al., 2017). As a showcase example, we investigated the diffusion of the MET receptor in living HeLa cells by analyzing available single-particle tracking data of resting and InlB-activated MET (Harwardt et al., 2017). Our analysis reports diffusion coefficients for resting and InlB-bound MET similar to those found previously. In addition, we were able to segment trajectories and to reveal transitions between diffusion states within single trajectories; this information was so far averaged out by a global MSD analysis of single-particle trajectories. The analysis of segments in single MET receptor trajectories revealed that, upon activation, MET transits from a free diffusion state to confined and immobile states and the immobile state becomes more stable, which is in line with the canonical model of receptor tyrosine kinase activation and internalization (Li et al., 2005; Chung et al., 2010). Interestingly, we found that the confined state has a short lifetime, which is reflected in the segment lengths as well as the transition probabilities. This diffusion state can be seen as an intermediate state of MET. Upon entry into this state, it is probable that the receptor is soon either immobilized, e.g. prior to endocytosis, or returns to a highly mobile state, e.g. searching for interaction partners. Our analysis procedure can be applied to single-particle tracking data of other molecules and provides straightforward access to transitions in the mobility of proteins that can be related to functional states. Future developments may focus on extending the trajectory length and extracting the kinetics of transitions within single trajectories. This could be achieved by either using more stable fluorescent probes such as quantum dots (Hagen et al., 2009; Li et al., 2012; Cognet et al., 2014), or by recording single-molecule data with very low illumination intensity in combination with image analysis-assisted localization through, e.g., denoising algorithms (Kefer et al., 2021). Another exciting extension is dual-color SPT (Wilmes et al., 2015, 2020), which in combination with segmentation analysis may relate changes in diffusion states to molecular interactions such as the formation of transient complexes between two receptors. DATA AVAILABILITY STATEMENT The dataset analyzed in this study can be found at the EMBL/EBI BioStudies server: https://www.ebi.ac.uk/biostudies/studies/S-BSST712. AUTHOR CONTRIBUTIONS MH conceptualized the study. JR, MD, and MH conceived the experiments. UE provided software, and JR, UE, and SM developed data analysis protocols. JR performed data analysis. All authors discussed the results, contributed to manuscript revision, read, and approved the final submitted version. ACKNOWLEDGMENTS We thank Marie-Lena Harwardt for providing the single-particle tracking data for the analysis and discussing their analysis, Claudia Catapano for testing the SPTAnalyser software and valuable input, as well as Marc Endesfelder and Bartosz Turkowyd for helpful discussions on the tracking analysis.
2021-11-12T14:15:09.577Z
2021-11-12T00:00:00.000
{ "year": 2021, "sha1": "2c0826df24981dc4c9a8a556758b6edf79778045", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fcomp.2021.757653/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "2c0826df24981dc4c9a8a556758b6edf79778045", "s2fieldsofstudy": [ "Biology", "Physics" ], "extfieldsofstudy": [ "Computer Science" ] }
54689303
pes2o/s2orc
v3-fos-license
Efficiency in the California Real Estate Labor Market Problem statement: This research explores the extent of market efficiency in the real estate labor market. Given a common commission rate, areas with high average home prices will generate higher agent income per home sold. If markets are efficient with few barriers to entry, additional agents per capita would be expected in high-priced areas, but each home sale would represent a larger portion of an agent's annual income, so a risk premium should be present. Approach: Agent earnings and the number of homes sold were examined in selected California counties. The data provides details on over 200,000 transactions, for nearly 47,000 different real estate agents and brokers, with usable data for 477 distinct zip codes. Results: Results show that regions with a higher median home price have a greater number of part-time real estate agents and an increased number of agents per capita. Conclusion: There are fewer average commission events per agent in areas with higher housing prices, but a higher level of total commission earnings per agent to compensate for the added income risk per completed transaction. INTRODUCTION Markets are said to perform efficiently when sufficient information and competition exist. With free entry and full knowledge, competitive forces lead to market equilibrium with zero economic profit. The zero-profit competitive equilibrium in the traditional economic theory of the firm is defined as a market where there are large numbers of rational, competitive profit-maximizing participants who can easily enter or exit markets in search of economic profit. In an efficient market, relevant information is freely available to all participants. Active competition among the many informed and rational participants leads to prices that just cover all costs, so there is no way to earn excess profits (above a "normal" market return) in the long run. Applied to the real estate industry, efficient markets would imply that well-informed real estate agents (and potential agents), with full knowledge of market conditions, housing prices and the level of competition in the market, would freely enter or exit the industry to maintain a competitive level of annual earnings for agents. No above- or below-normal earnings could persist because of the intense competition, free entry and full information. The vast majority of real estate salespeople are independent contractors, and the industry can be characterized as a monopolistically competitive market, where there are many sellers with small market shares, slightly differentiated products and many buyers, with low barriers to entry for both buyers and sellers. One would expect, however, that the conditions in the market for real estate agents differ slightly from those of a perfectly efficient market. In particular, entry is not always costless and information flow may be slower in real estate labor markets than in other settings, such as financial markets. In reality, the existence of transaction costs, information asymmetry and barriers to entry make most markets less than perfectly efficient. Debate about efficient markets has resulted in numerous empirical studies examining whether specific markets are in fact "efficient" and, if so, to what degree. It has been shown that real estate markets are not always efficient.
For example, Levitt and Syverson (2008), Miceli (1992) and Turnbull (1996) show that the existence of asymmetric information between home sellers and real estate agents leads to lower prices and more rapid sales when agents represent home sellers compared to when the agents sell their own homes. Clayton (1998), Crockett (1982) and Goolsby and Childs (1988) find strong evidence against efficient markets in the condominium market in Vancouver. In the market for real estate agents, state licensing and education requirements can limit supply. Licensing is defended as a means of maintaining quality and protecting consumers, but entry restrictions may increase agent earnings and reduce economic efficiency. Jud and Winkler (2000) develop a supply estimate for agents and find that the pass rate for licensing examinations and continuing education requirements do affect the numbers and incomes of real estate agents. On the other hand, Johnson and Loucks (1986), in a structural equation model of agent supply and demand, did not find that licensing requirements that restricted the number of agents led to higher earnings. This study explores market efficiency by examining the real estate labor market in selected California counties. These counties show a wide range of median home values. Given the traditional compensation structure, in which the commissions earned on a home sale are some standard percentage of the selling price, agents in a high-priced home market earn more per home sold than an agent in a low-price region. To the extent that information is available and entry is not blocked, potential profits in the high-price regions should attract new real estate agents. Although entry may be constrained and is not instantaneous, Jud and Winkler (2000) find that the supply of agents is elastic with respect to agent earnings. This suggests, all else equal, that the annual return to agents in high-price and low-price regions should be similar. The implication for high-price regions is that there should be a greater number of agents per capita but fewer home sales per agent. There may also be greater discounting from the traditional commission, or increased non-price competition between agents. Hsieh and Moretti (2003) have found that the productivity of an average real estate agent falls (fewer houses sold per hour worked) as the average price of land in a city increases. This effect can also be tested using housing prices in low- vs. high-priced regions. Since the earnings of an agent in a high-priced home market require fewer sales than in a low-priced market, there should also be a difference in the percentage of part-time vs. full-time agents in the two markets, assuming an equal level of selling effort is required in the two regions. One would expect a greater proportion of part-time agents in areas with a higher median home price. Introducing risk into the model allows other factors to be considered. For example, an extensive literature has examined risk premiums in financial and labor markets and the risk vs. return tradeoff. This tradeoff can be examined in the real estate labor market. Since fewer sales per agent per year would be expected, on average, in high-price markets, each home sale in a high-price region comprises a larger percentage of an agent's annual income than in a low-priced region. One implication is that high-price regions should have higher average earnings to compensate agents in those regions for the increased income risk per home sold.
Data and model: Data from selected California counties are used here to explore market efficiency, to examine the risk premium and to test for differences in full- and part-time participation between high- and low-priced housing markets. Concentrating on California reduces any potential variation due to differences in how real estate transactions are handled across states and the different tasks real estate agents perform in different markets. The traditional compensation structure in California residential real estate sales is that commissions earned on a home sale are a standard percentage of the selling price. The typical standard for the year used in this analysis was 6% of the sales price, split evenly between the agents representing the buyer and the seller. This is consistent with the Hsieh and Moretti (2003) findings that the average commission rate from 1980-1990 was independent of the price of housing, with a national median of 6.1%. Although home sellers have always been able to bargain for a better actual rate, registered real estate agents were the only group able to add homes for sale to the Multiple Listing Service (MLS). Limited access to the MLS helped maintain the commission standard. In recent years, internet-based information resources have eroded some of this market power and increased the number of discount brokers and partial-service alternatives for home sellers. Given the common 6% standard, and assuming selling costs are similar in the two markets, agents in a high-priced home market earn more per home sold than an agent in a low-price region. Assuming market information is available to participants and entry is unconstrained, potential earnings in the high-price regions should attract new real estate agents. All else equal, market efficiency arguments suggest that the annual return to agents in high-price and low-price regions should be similar. The implication for high-price regions is that there should be a greater number of agents per capita but fewer home sales per agent. To test these hypotheses, data on the number of sales and the dollar value of sales and commissions is needed at the individual real estate agent level. Data from California is used in this study. The number of sales and commissions earned by individual agents was obtained from a commercial service that compiles residential sales information for all recorded home sales in several California counties. Reports based on this database are typically sold to mortgage brokers, title companies and other professionals in the real estate industry. Residential real estate transactions data for year 2004 was obtained for four counties in Southern California (San Diego, Orange, Riverside and San Bernardino) and five counties in Northern California (Sacramento, Stanislaus, Santa Clara, Alameda and San Joaquin). This pre-bubble year was used to limit the noise in housing market data generated by the recent effects of the recession and the U.S. financial crisis. Overall, the data provides details on over 200,000 transactions, for nearly 47,000 different real estate agents and brokers in these 9 counties, with usable data for 477 distinct zip codes. U.S. Census data was used for median housing prices by zip code for these counties, and data from the California Association of Realtors provided demographic details on registered real estate agents in each of these counties.
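To make the earnings arithmetic behind these hypotheses concrete, the short Python sketch below applies the 6% convention described above, split evenly between the two sides of a transaction; the annual income target is an arbitrary illustrative figure, not a number from the study.

```python
RATE = 0.06        # traditional commission rate on the sales price
SIDE_SHARE = 0.5   # split evenly between buyer's and seller's agents

def commission_per_side(price):
    return RATE * SIDE_SHARE * price

target_income = 60_000  # hypothetical annual earnings target
for median_price in (200_000, 600_000):
    per_sale = commission_per_side(median_price)
    print(f"median ${median_price:,}: ${per_sale:,.0f} per side, "
          f"{target_income / per_sale:.1f} sales/year to earn ${target_income:,}")
```

At a $600,000 median, roughly one third as many completed transactions are needed for the same gross income as at a $200,000 median, which is the mechanism behind the part-time participation and entry hypotheses tested below.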
Empirical evidence: Data obtained from the California Department of Real Estate (CA-DRE) showed 107,485 real estate agents and brokers with active licenses in 2004 in these nine counties. Not all who are licensed are active in the real estate market, and some are only engaged part-time. The CA-DRE listing does not distinguish by active status or part-time vs. full-time employment. However, separate data was obtained from a commercial firm that compiles records of all real estate public transactions during the year. These records show 46,846 different agents who received a commission from a publicly recorded transaction in 2004. Occasionally an agent will represent both the buyer and the seller in a home sale, but most transactions involve separate agents and two commission shares. I will define a "commission event" as an instance in which a real estate professional earns a commission when representing the buyer, the seller, or both in a publicly recorded real estate transaction. The commercial database of professionals who completed at least one transaction shows the number of transactions for each, the total property value and the total value of commissions earned, using a 3% share for both the buyer and seller sides of the transaction. Over 343,000 commission events are recorded in these nine counties in 2004. These records show 45,747 agents with at least one commission event in 477 unique zip codes for which census data reports a median home value. Of the 107,485 real estate agents and brokers with active licenses in 2004 in these nine counties, a majority did not have a commission event that year. The 46,846 individuals with at least one event represent 43.6% of the licensed group. Summary statistics for those with at least one event are shown in Table 1. Figure 1 shows a histogram of the number of events per agent. Overall, 19.91% of this group had only one commission event and 33.15% had two or fewer events. Details of the housing market can be examined when commission events are combined with census data and viewed by zip code. There is a strong linear relationship between the number of housing units and population, with an average of 15 additional houses per 100 additional residents. As expected, the number of commission events increases as the number of houses increases, with 16 more commission events per year per 1000 additional houses in a given zip code. There is no relationship between the average number of commission events per agent and the number of houses in the zip code. This is consistent with a simple efficiency hypothesis that entry of new agents is likely when profit potential exists in the market. Given the traditional 6% commission rate (split between buyer and seller agents), the earnings of an agent per transaction in a high-priced home market are higher than in a low-priced market. This suggests that completing one or two transactions per year would be more attractive to potential entrants into this labor market in high-priced counties than in low-price areas. This may include individuals who are willing to work in real estate sales part-time, or who stand to save a considerable amount on commission costs when attempting to sell their own residence or when assisting family members or personal friends on an occasional basis. The implication that part-time participation in this labor market is more attractive in high-priced areas can be tested with county or zip-code level data, as sketched below.
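A hedged sketch of the kind of zip-code-level OLS test reported in the next passage is given below; the data are synthetic stand-ins, the variable names are hypothetical, and the White correction mirrors the robustness check mentioned later in the text.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the zip-level data: median home value and the
# share of agents with only a single commission event in each zip code.
rng = np.random.default_rng(2)
median_value = rng.uniform(150_000, 900_000, size=300)
share_single = 0.10 + 0.02 * median_value / 100_000 + rng.normal(0, 0.03, 300)

X = sm.add_constant(median_value / 100_000)   # value in $100k units
fit = sm.OLS(share_single, X).fit()
print(fit.params, fit.pvalues)                # positive slope supports the hypothesis

# White's heteroscedasticity-consistent standard errors:
print(fit.get_robustcov_results(cov_type="HC0").summary())
```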
The median home value is a highly significant factor in explaining the percentage of agents with a single commission event or with only one or two events (p-value < 0.005). The data by county is in Table 2, and regression results are summarized in Table 3. When evaluating this hypothesis using zip-code-level data, noise is introduced from many locations in which only a few homes were sold. To avoid this problem, only zip codes in which 10 or more commission events occurred are used. The r-squared values fall, but the median home price remains a significant factor. Results are in Table 4. Entry: Number of agents per capita: Agents in a high-priced home market earn a larger commission per home sold than an agent in a low-price region. There is little reason to suggest that selling costs or selling effort should differ significantly between areas with different median home values. If the labor market for real estate professionals is efficient, meaning that information is available and entry is unconstrained, the potential for higher earnings in the high-price areas should attract additional real estate agents. A good entry measure is the number of agents per capita. The above reasoning suggests that there should be a greater number of agents per capita in high-price regions than in low-price areas. Using data for all 477 zip codes, there is a positive and significant relationship between the number of agents per capita with at least one commission event and the median home price. The slope coefficient shows that there is an average increase of 0.172 agents per 1000 population for each $10,000 increase in median home price. When examined by county, the slope coefficients in separate regressions of agents per capita vs. median home value are positive in each of the nine counties, and six of the nine coefficients are significant at the 10% level. Results are shown in Tables 5 and 6. Some outliers are present in zip codes with low populations and few home sales, but a large agent-per-capita ratio. Including these observations reduces the value of r-squared, but the OLS estimate is robust to changes in which outliers are removed. Some heteroscedasticity is evident in several of the county regressions, but the OLS estimates are unbiased, and corrections using White's heteroscedasticity-consistent estimator do not materially change the significance levels. Number of commission events per agent: Economic efficiency in the real estate labor market also suggests that the annual earnings of agents in high-price and low-price regions should be similar. If not, agents could migrate from low-earnings areas to higher-earnings areas. Since it has already been demonstrated that the number of agents per capita is larger in higher-valued housing markets, it follows that there should be fewer commission events per agent in the high-priced markets. This hypothesis is supported by the data, with 0.49 fewer average transactions per agent for each $100,000 increase in median price, as shown in Fig. 2. The dollar value of commission earnings: Introducing risk into the model allows other factors to be considered. For example, an extensive literature has examined risk premiums in financial and labor markets and the risk vs. return tradeoff. This tradeoff is also testable in the real estate labor market. Since there are fewer transactions per agent per year, on average, in high-price markets, each home sale in a high-price region comprises a larger percentage of an agent's annual income than in a low-priced region.
Since most individuals are risk averse, an implication in this market is that high home-price regions should have higher average earnings to compensate agents in those regions for the increased income risk per home sold. This hypothesis is strongly supported by the data, as reported in Table 8 and depicted in Fig. 3. This result suggests that an earnings premium does exist to compensate agents in higher-priced markets for the risk associated with the lower number of average sales per year. All else equal, annual earnings per agent are higher by $129 for each $1,000 increase in the median home value. Many other factors could help explain this result, however. The cost of living will be higher in a region with higher home prices, so the greater earnings could be a compensating differential for these higher living costs. Separate price indexes are not reported by zip code, so adjusting for this factor is not an easy task. It is reasonable to assume that living costs would not vary dramatically across neighboring zip codes in the same state, except for the housing cost component. The higher earnings in higher-priced markets may also partially reflect a quality differential if more experienced, more educated, or better-performing agents sell in high-priced areas. This could be compared to the stratification of waiters by quality of restaurant, or to labor markets in car sales or insurance, where income is at least partially determined by a percentage commission or tip. Additional model specifications: The regression results change slightly with a more robust specification. Consider a model in which the average number of commission events per agent in a zip code is regressed jointly on the median home value, the number of houses, and the population. When examined separately, the simple regression of commission events vs. the number of houses showed no relationship. The expectation that the average number of commission events should fall as the median home value increases has already been discussed. Although population is correlated with the number of houses in a given zip code, population increases (all else equal) would be expected to have no impact on the average number of events per agent. If the market is efficient and entry is not blocked, the number of agents would increase instead. The results in Table 9 confirm the inverse relationship for median value, but the number of houses and population show significant effects, with opposite signs. These results show 0.595 fewer average annual transactions per agent for each $100,000 increase in median price, which is consistent with the findings in Table 7. The negative population coefficient may reflect the lumpy nature of entry. The value indicates that the average number of commission events per agent will fall by 0.609 for each 10,000 increase in population. Lower-population regions will have fewer home sales, which will support fewer real estate professionals. A small number of agents would need to share limited commissions with other agents. Entry of an additional agent in a small market could dilute the average number of sales per agent too much to make entry profitable. As the population grows, the larger region will support additional agents, so the average number of transactions per agent will fall. Table 10 shows the results when the dependent variable is average agent earnings, using the same multivariate specification (a sketch of which follows below). The risk premium hypothesis is still supported. Although the number of commission events falls, the average total commission rises as the median house value rises. Specifically, average annual earnings increase by $122 per $1,000 increase in median home value, which is consistent with the result in Table 8.
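As an illustration only, the following Python sketch fits the kind of multivariate specification just described on synthetic zip-level data; the coefficient used to generate the earnings variable ($122 per $1,000 of median value) is taken from the text, while every other number is invented for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
median_value = rng.uniform(150_000, 900_000, n)      # $ per zip code
houses = rng.integers(2_000, 20_000, n).astype(float)
population = houses / rng.uniform(0.30, 0.45, n)     # implied residents

# Synthetic earnings: ~$122 per $1,000 of median value, plus noise
earnings = (20_000 + 0.122 * median_value
            + 2.0 * houses - 0.5 * population
            + rng.normal(0, 5_000, n))

X = sm.add_constant(np.column_stack([median_value, houses, population]))
fit = sm.OLS(earnings, X).fit()
print(fit.summary(xname=["const", "median_value", "houses", "population"]))
```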
The positive coefficient for the number of houses may simply be capturing an income effect. With a given population (and assuming family size is constant), an increase in the number of houses can be viewed as an increase in the percentage of homeowners vs. renters. The number of houses per capita can thus be considered a proxy for income if higher income is associated with home ownership. Using the ratio of houses to population instead of the two variables separately also reduces any multicollinearity concerns. If the average earnings of agents are correlated with the average income in the region in which they work, then the average earnings per agent should be positively associated with the number of houses per capita. The results shown in Table 11 support this hypothesis. After accounting for effects due to variation in the median home value, agents in higher-income regions (as measured by the proxy of houses per capita) earn an average of $1,213 more per year for each 1% increase in the number of houses per capita.

CONCLUSION

Using data from 2004 for over 200,000 transactions handled by nearly 47,000 real estate professionals in nine California counties, and applying two assumptions (that the commission rate on home sales is constant and that entry into the real estate labor market is easy), several hypotheses about market efficiency are supported. Significant findings are that areas with higher median home prices have a greater number of part-time real estate agents and an increased number of agents per capita. There are fewer average commission events per agent in areas with higher housing prices, but a higher level of total commission earnings per agent to compensate for the added income risk per completed transaction.
Multi-channel ALOHA and CSMA medium-access protocols: Markovian description and large deviations

We consider a multi-channel communication system under ALOHA and CSMA protocols, respectively, in continuous time. We derive probabilistic formulas for the most important quantities: the number of sending attempts and the number of successfully delivered messages in a given time interval. We (1) derive explicit formulas for the large-time limiting throughput, (2) introduce an explicit and ergodic Markov chain for a deeper probabilistic analysis, and (3) use this to derive exponential asymptotics for rare events for these quantities in the limit of large time, via large-deviation principles.

Introduction and main results

Protocols for medium access control (MAC) are fundamental and ubiquitous in any telecommunication system. Here we are particularly interested in multi-channel systems, where a fixed number of channels is available. Our goal is to develop a probabilistic model for the ALOHA and the CSMA protocol, which can be easily realised on a computer and be mathematically analysed in explicit terms. We will describe the stochastic process of arrival times of incoming and successfully delivered messages, and the times that elapse in between, in terms of a Markov renewal process that is explicit and has very good ergodic properties.

1.1. Medium access protocols; our goals. We consider the ALOHA protocol, where no infrastructure is given and collisions are possible, and the Carrier Sense Multiple Access (CSMA) protocol, where a message is delivered only if there is an idle channel at the time when the message arrives. We work in continuous time and assume that the messages arrive at random times that are given by a Poisson point process (PPP). To keep things simple, we assume that the service times (delivery times) are all equal to one. We are interested in a multi-channel system, i.e., we assume that at most κ messages can be processed at any given time, where κ ∈ N is a parameter. We think of two interpretations of this restriction: either there are κ channels available in our system that can be used independently all the time, or there are interference constraints that make it impossible for more than κ messages to be transmitted at the same time, and any additional message is refused from the system. Our interest lies in important quantities like the total number of messages in the system, the number of incoming messages and the number of successfully delivered messages in a given fixed time interval.
We strive to calculate expected values (in the limit of large time intervals), which follows elementary ideas, but also to analyse more detailed questions, like probabilities of certain events, which needs a deeper understanding of the communication system. To this end, as one of our main novelties, we develop a description in terms of an explicit Markov chain (in discrete time) that admits a description of the mentioned quantities as functionals of this chain, in terms of a kind of Markov renewal process.

To the best of our knowledge, such a Markov chain was not yet known in comparable situations, even though there are a number of ansatzes with queues and the related theory; however, these stochastic processes are not able to give information about the real time, but only about certain quantities (e.g., the number of messages in the system) at the (random) arrival times or the (random) delivery times of the messages. Our Markov chain makes a number of well-known probabilistic tools available, like invariant initial distributions, ergodic theory and large-deviations theory. We give explicit formulas for the transition probabilities and prove that this chain is uniformly ergodic; hence this Markov chain is also useful for computer simulations. Our Markov chain in particular opens the possibility to describe (the probabilities of) rare events, which is for example helpful if one wants to understand situations in which the system underachieves by producing a smaller throughput than expected over a long time stretch. The probabilistic theory of large deviations provides mathematical tools for deriving formulas for the exponential decay of such probabilities, and it also provides tools for characterising the most likely behaviour of the system in this unlikely situation. For the application of this theory, one needs a powerful description and a high degree of ergodicity, and this is provided by our Markov chain. Unfortunately, our description does not allow for the determination of sharp exponential lower bounds for the probabilities of large deviations, but only exponential upper bounds. But we believe it is the exponential upper bounds that show the value of large-deviation theory for understanding such communication systems.

The most important parameters in the system will be the number of channels, κ ∈ N, and the density parameter of the incoming messages, λ ∈ (0, ∞). We assume that the arrival times follow a standard Poisson point process with parameter λ. We will conceive the situation from the user's perspective and will discuss the optimal value of λ for achieving a maximal throughput. The idea behind this is that each user has knowledge about the number of users in a vicinity of the κ channels and assumes that each of them makes message sending attempts at a certain rate that amounts to the total rate λ of all the message attempts. Under these assumptions, the optimal value of λ, divided by the number of users, should then be the probability parameter for making sending attempts. In the CSMA setting, it will be clear that the throughput is an increasing function of λ, and the optimisation is trivial (ignoring a potential trade-off coming from a huge number of unsuccessful messages). However, for the ALOHA setting, it will be interesting to identify the optimal density λ (depending on κ) for having a maximal throughput; this will be one of our results.

Summarizing, the main contributions of this paper are the following.
• An explicit probabilistic description (in terms of a Markov renewal process) of the number of messages, the number of successfully delivered messages and further quantities at a deterministic time,
• explicit formulas for limiting expected values of these quantities and optimal values of parameters,
• a large-deviations analysis of rare events involving these quantities.

1.2. Description of the models. We consider a system with a steady flow of incoming messages that require access to the system at random times.
The time lags between any two subsequent arrivals of two messages are independent, exponentially distributed with density parameter λ ∈ (0, ∞). That is, the sequence of arrival times forms a standard Poisson point process (PPP) with parameter λ. Any successful message transmission has a duration of precisely one time unit; i.e., each service time is equal to one. We assume that κ channels are available. On arrival, each message asks for access to some of them. Now we consider two different algorithms (medium access protocols) according to which this request is handled:
• ALOHA: For the message, one of the κ channels is picked uniformly at random. All these channel choices are independent over all messages.
-If the channel is already busy, i.e., if the transmission of another message in this channel is still running, then the new incoming message collides with the old one, causing the cancellation of both messages.
-If the channel is idle, the new message is admitted immediately. If not cancelled by another message that arrives later during the delivery time, it will be successfully delivered after one time unit.
• CSMA: The message is admitted to the system only if there is an empty channel; then it will be successfully delivered via one of these channels after one time unit. Otherwise, the message attempt is canceled.
In the case of a successful delivery after one time unit, we say that the message has gained access to the medium. Advantages of pure ALOHA are that it does not need any infrastructure and is therefore cheap to install and run. However, a drawback is that each arriving message might destroy another message that has already been admitted to the system. In turn, this means that each message can be sure to be successfully delivered only one time unit after it has picked an empty channel, namely if it has not itself been killed by a later arriving message during its delivery. This means that too large a number of incoming messages (i.e., too large a value of λ) decreases the average number of successful deliveries in the system over a long time. We will specify this in terms of a law of large numbers and will see that typically only a certain percentage of all the channels are busy when the throughput in this protocol is optimal; we will identify this value. In CSMA, every admitted message will definitely be successfully delivered after its delivery time. However, in contrast with the ALOHA protocol, some extra information (namely the information about free channels) needs to be constantly provided. Hence, increasing the message density λ increases the average number of busy channels and hence the throughput (which we quantify below), but also the average number of refused messages (which we neglect in this paper).
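Both protocols are indeed easy to realise on a computer. The following Python sketch (our own illustration under the model assumptions above, not the authors' code) simulates the κ-channel system with unit service times and PPP(λ) arrivals and estimates the throughput S(t)/t; for κ = 1 the estimates can be checked against the known limiting throughputs λ/(λ+1) for CSMA and λe^{-2λ} for ALOHA recalled in Section 1.4.

```python
import random

def throughput_estimate(protocol, lam, kappa, horizon, seed=1):
    """Monte-Carlo estimate of S(horizon)/horizon for the kappa-channel
    ALOHA/CSMA model with PPP(lam) arrivals and unit service times."""
    rng = random.Random(seed)
    busy_until = [0.0] * kappa   # end of the current service per channel
    occupied = [False] * kappa   # is a message currently in the channel?
    doomed = [False] * kappa     # ALOHA: has the occupant collided already?
    successes = 0
    t = rng.expovariate(lam)     # first arrival of the PPP(lam)
    while t < horizon:
        for c in range(kappa):   # settle services that ended before t
            if occupied[c] and busy_until[c] <= t:
                successes += 0 if doomed[c] else 1
                occupied[c] = False
        if protocol == "CSMA":   # admit iff some channel is idle
            for c in range(kappa):
                if not occupied[c]:
                    occupied[c], doomed[c], busy_until[c] = True, False, t + 1.0
                    break
        else:                    # ALOHA: uniform channel choice
            c = rng.randrange(kappa)
            if occupied[c]:
                doomed[c] = True  # collision cancels both messages
            else:
                occupied[c], doomed[c], busy_until[c] = True, False, t + 1.0
        t += rng.expovariate(lam)
    for c in range(kappa):       # settle services finishing by the horizon
        if occupied[c] and busy_until[c] <= horizon:
            successes += 0 if doomed[c] else 1
    return successes / horizon

# kappa = 1 sanity checks: CSMA ~ lam/(1+lam), ALOHA ~ lam*exp(-2*lam)
print(throughput_estimate("CSMA", 1.0, 1, 2e5))    # close to 0.5
print(throughput_estimate("ALOHA", 0.5, 1, 2e5))   # close to 0.5/e ~ 0.18
```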
1.3. Our results. Introduce A(t) as the number of sending attempts in the time interval [0, t] and S(t) as the number of successful transmissions during this time interval. Then (A(t))_{t∈[0,∞)} is the counting process of the PPP(λ), but (S(t))_{t∈[0,∞)} is highly non-trivial and is the main objective. We formulate our results on the limiting expectation in Section 1.3.1, on a crucial Markov chain in Section 1.3.2 and on the probabilities of large deviations in Section 1.3.3.

1.3.1. Limiting expectation. Let us calculate the limiting expectation of the number of successfully delivered messages:

Lemma 1.1 (Expected throughput). For both models, * ∈ {ALOHA, CSMA}, and for any λ ∈ (0, ∞) and κ ∈ N, the limit s_*(λ, κ) := lim_{t→∞} (1/t) E[S(t)] exists and is given by an explicit formula; see (2.1) for the CSMA case and Section 2.2 for the ALOHA case.

We write Poi_λ for the Poisson distribution with parameter λ on N_0. The proof of Lemma 1.1 is in Section 2.2 for the ALOHA case and in Section 2.1 for the CSMA case. In contrast with CSMA, for the ALOHA protocol the question of the optimal value of λ for maximizing s_ALOHA(λ, κ) is interesting. An explicit calculation does not seem possible for general κ. However, using the exponential series to approximate the two sums, we see that for κ → ∞ the throughput is asymptotically equivalent to κ times an explicit function of x = λ/κ, which is easily seen to have a unique maximizer at x = (3 − √5)/2 ≈ 0.38 by considering the derivatives. Hence, the optimal value is roughly λ ≈ 0.38 κ for large κ. Simulations show that for κ = 2 resp. κ = 3 we already have an optimal value of λ ≈ 0.43 κ resp. λ ≈ 0.41 κ; the above approximation seems to converge extremely fast.

1.3.2. A crucial Markov chain. We now introduce the main object of our ansatz, a certain Markov chain in discrete time that is able to describe the main quantities A(t) and S(t). This Markov chain is not only suitable for describing large-deviation events and their probabilities (see Section 1.3.3), but can generally also be used to derive explicit computer simulations for the entire process of messages and their deliveries. In both models, we denote by 0 < T̃_1 < T̃_2 < T̃_3 < … all the times at which a message is admitted to a channel. For i ∈ N put σ_i := T̃_i − T̃_{i−1}, and let A_i denote the number of sending attempts during (T̃_{i−1}, T̃_i]. In words, σ_i is the length of the time lag between the (i−1)st and i-th admittance of a message to some channel, and A_i − 1 is the number of refused messages during that time interval. Recall that, for CSMA, T̃_i is the time of the beginning of the i-th successful message transmission, but for ALOHA this is only the time of the start of some message transmission attempt; whether or not it will be successful will turn out only at time T̃_i + 1. Nevertheless, we will show that the sequence (A_i, σ_i)_{i∈N} is suitable to derive precise information about our quantities of interest, (A(t), S(t)). As our first main result, we identify the distribution of the sequence as a kind of Markov renewal process:

Proposition 1.2 (Markovian structure of (A_i, σ_i)_{i∈N}). In both models, ALOHA and CSMA, the sequence (A_i, σ_i)_{i∈N} is a (κ−1)-Markov chain with kernel W_CSMA and W_ALOHA, respectively, from Σ^{κ−1} to Σ, given by

W_CSMA((a, t), (k, ds)) = e^{−λγ} ((λγ)^{k−1}/(k−1)!) λ e^{−λ(s−γ)} 1_{[γ,∞)}(s) ds,   (1.5)

W_ALOHA((a, t), (k, ds)) = λ (1 − β(s)/κ) e^{−λs} ((λγ + (λ/κ) ∫_γ^s β(u) du)^{k−1}/(k−1)!) 1_{(γ,∞)}(s) ds,   (1.6)

for (a, t) = ((a_1, t_1), …, (a_{κ−1}, t_{κ−1})), where γ = γ(t) := (1 − (t_1 + ⋯ + t_{κ−1}))^+ and β(s) is the number of busy channels at time s after the last admittance, determined from t as in (3.8). Both (κ−1)-Markov chains are uniformly ergodic in the sense that Condition (U) holds (see (3.5)).

The proofs are in Section 3.1 for the CSMA case and in Section 3.2 for the ALOHA case.

Remark 1.3 (Interpretation of the ALOHA kernel). In the kernel W_ALOHA, the parameter β(s) plays the role of the number of busy channels at time T̃_i + s, conditioned on the process of message arrivals before time T̃_i. Therefore 1 − β(s)/κ is the probability that a message that arrives at that time picks an idle channel. The remaining terms on the right of (1.6) express the probability that all the messages arriving during [T̃_i, T̃_i + γ) and during [T̃_i + γ, T̃_i + s) pick a busy channel and are therefore not admitted to the system. We will specify this in the proof. ♦

Remark 1.4 (Markov renewal process). We see that in both cases (σ_i)_{i∈N} is autonomously a (κ−1)-Markov chain (with a kernel that can easily be deduced from (1.5) and (1.6), respectively), and A_i is a random function of σ_{i−1}, σ_{i−2}, …, σ_{i−κ+1}.
More precisely, given the sequence (σ_i)_{i∈N}, the variables A_i are independent over i and are Poisson-distributed with a certain parameter depending on σ_{i−1}, σ_{i−2}, …, σ_{i−κ+1}. Because of this Markovian structure, (A_i, σ_i)_{i∈N} (more precisely, the sequence (A_i)_{i∈N} together with the renewal times (T̃_i)_{i∈N}) is often called a Markov renewal process. ♦

Remark 1.5 (Deriving S(t) and A(t)). In the CSMA case, t ↦ S(t) is nothing but the time-inverse of the partial sum sequence of the σ_i, via an inversion formula, and can therefore be fully described in terms of (σ_i)_{i∈N}. A similar assertion applies to A(t); see (4.14). However, in the ALOHA case one additionally needs (A_i)_{i∈N} for the description of S(t) (see (4.15)). Certainly, one can also describe other interesting quantities as functionals of the Markov chain, for example the number of unsuccessful messages or (in the ALOHA case) the number of messages that are admitted to some channel but are canceled later during the service time. ♦

1.3.3. Large deviations. Now we turn to our results concerning the large deviations of (A(t), S(t)) in the limit t → ∞. Our goal is to quantify the exponential decay rate of the probability of rare events of the form {(1/t)(A(t), S(t)) ∈ B} for many sets B ⊂ (0, ∞)². Let us recall the notion of a large-deviation principle (LDP). Indeed, a sequence (X_n)_{n∈N} of X-valued random variables (where X is a Polish space) is said to satisfy an LDP with lower semicontinuous rate function I : X → [0, ∞] if, for any closed set F ⊂ X and any open set G ⊂ X,

limsup_{n→∞} (1/n) log P(X_n ∈ F) ≤ − inf_{x∈F} I(x),
liminf_{n→∞} (1/n) log P(X_n ∈ G) ≥ − inf_{x∈G} I(x).

The first statement is called the large-deviations upper bound (or LDP upper bound), the latter the large-deviations lower bound. Together they can be very roughly summarized by saying that P(X_n ≈ x) ≈ e^{−nI(x)} for any x ∈ X as n → ∞. However, topological subtleties are always present in an LDP. See [DZ10] for an account of LDP theory. Indeed, with the help of the Markov renewal process (A_i, σ_i)_{i∈N} we are in an excellent position to find and prove such an LDP and to identify the rate functions for the two cases; indeed, there are very obvious candidates, which are based on the sequence of empirical pair measures of (A_i, σ_i)_{i∈N}. However, there is a problem that we could not overcome, and hence we are only able to derive the LDP upper bound. This problem does not seem to be only technical; it is the fact that (A(t), S(t)) is not a continuous functional of the empirical measure. This makes it impossible to use standard arguments, and we found no way around it; hence we state only the LDP upper bound below. It is not clear to us whether or not the corresponding lower bound holds as well. The rate functions will be identified in terms of certain entropies, which are well known in the LDP theory for Markov chains. Indeed, we write H(µ | ν) = ∫ log(dµ/dν) dµ for the relative entropy of a probability measure µ with respect to another one, ν (if the density dµ/dν exists; otherwise H(µ | ν) = ∞). Furthermore, for a measure µ on X^κ, we write µ^{(κ−1)} for the projection of µ onto the vector of the first κ−1 components, and we say that µ lies in M^{(s)}_1(X^κ) if µ is a probability measure on X^κ whose projection onto the vector of the first κ−1 components is equal to its projection onto the vector of the last κ−1 components. We abbreviate Σ := N × (0, ∞) and define the projections π_1 : Σ^κ → N and π_2 : Σ^κ → (0, ∞) by π_1((a_1, r_1), …, (a_κ, r_κ)) = a_κ and π_2((a_1, r_1), …, (a_κ, r_κ)) = r_κ. We write ⟨f, µ⟩ for the integral of a function f with respect to a measure µ.
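As a concrete instance of this definition (our own side remark, not part of the paper's argument): the pure arrival process satisfies a full LDP with the explicit Poisson rate function a ↦ a log(a/λ) − a + λ recalled in Remark 1.9 below, which is the Legendre transform of θ ↦ λ(e^θ − 1). The following few lines cross-check this transform numerically.

```python
import math

lam = 2.0

def rate(a):
    """Closed form: a*log(a/lam) - a + lam."""
    return a * math.log(a / lam) - a + lam

def rate_numeric(a):
    """Numerical Legendre transform of theta -> lam*(exp(theta) - 1)."""
    thetas = [i / 1000.0 for i in range(-5000, 5001)]
    return max(theta * a - lam * (math.exp(theta) - 1.0) for theta in thetas)

print(rate(3.0), rate_numeric(3.0))  # both approximately 0.2164
```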
Our main asymptotic large-deviation results as t → ∞ are as follows.

Theorem 1.6 (Large-deviation upper bound for (A(t), S(t))). In both cases, ALOHA and CSMA, as t → ∞, the pair (1/t)(A(t), S(t)) satisfies an LDP upper bound on (0, ∞)², i.e., for any closed set F ⊂ (0, ∞)²,

limsup_{t→∞} (1/t) log P((1/t)(A(t), S(t)) ∈ F) ≤ − inf_{(a,s)∈F} I_*(a, s),

where, for a, s ∈ [0, ∞), I_CSMA(a, s) = s J_CSMA(a/s, 1/s) for the CSMA protocol and I_ALOHA(a, s) = ((a+s)/2) J_ALOHA(2a/(a+s), 2/(a+s)) in the case of the ALOHA protocol, with the entropy functional J_* defined in (4.3). Both rate functions have precisely one minimizer, are convex on (0, ∞)² and are good (i.e., their level sets {(a, s) : I_*(a, s) ≤ C} are compact for any C). The family ((1/t)(A(t), S(t)))_{t>0} is exponentially tight (i.e., for any M > 0 there is a K > 0 such that P((1/t)(A(t), S(t)) ∉ [0, K]²) ≤ e^{−tM} for all sufficiently large t).

The proof is in Section 4. By the well-known contraction principle (saying that LDPs are preserved under continuous images, together with an explicit formula for the rate function), we obtain without work:

Corollary 1.7 (LDP upper bound for the number of successfully delivered messages). As t → ∞, (1/t)S(t) satisfies an LDP upper bound with rate function I^S_CSMA(s) = inf_{a∈(0,∞)} I_CSMA(a, s) for the CSMA protocol resp. I^S_ALOHA(s) = inf_{a∈(0,∞)} I_ALOHA(a, s) for the ALOHA protocol.

The proof of Corollary 1.7 is immediate from the contraction principle, noting that the projection onto the second coordinate is continuous.

Remark 1.8 (Expected throughput and rate function). Let us mention that the expected throughput s_*(λ, κ) that we identified in Lemma 1.1 can also be characterized in a standard way as the minimizer of the rate function I^S_* for * ∈ {ALOHA, CSMA}. However, it is rather difficult to identify it via this reasoning, since we have no closed formula for the invariant distribution of the Markov chain (A_i, σ_i)_{i∈N}; hence we do not have any results in this respect. ♦

Remark 1.9 (Contracting to A(t)). Instead of contracting the pair (A(t), S(t)) to S(t), we could do this also with A(t) and obtain an analogue of Corollary 1.7. However, since A(t) is nothing but the counting process of the PPP, one can derive an LDP for (1/t)A(t) also with much simpler means and obtains the rate function a ↦ a log(a/λ) − a + λ. ♦

Remark 1.10 (An application). We think that exponential estimates for the probabilities of rare events as in Corollary 1.7 are very relevant for the understanding of the strengths and shortcomings of such telecommunication systems. Indeed, they give us not only extremely good estimates for such probabilities, but also an analytic starting point for getting more information about the most likely situation that governs the rare event: the variational formula for the rate function. As an example, the probability of the event that the system delivers at most st successful messages by time t, i.e., {S(t) ≤ st}, can be estimated with the help of Corollary 1.7. This means that this probability decays exponentially fast with rate at least inf_{s′∈(0,s]} I^S_*(s′). ♦

Remark 1.11 (Why not a full LDP?). It would be rather desirable to have a full LDP for (A(t), S(t)) with rate function I_*, but there is an obstacle that we could not overcome: the lack of continuity of the map µ ↦ ⟨π_2, µ⟩, since π_2 is unbounded. This makes an application of the crucial Gärtner-Ellis theorem impossible, since it prevents us from proving that I_* is strictly convex, and therefore from proving that its Legendre transform is differentiable. It also prevents us from using the contraction principle, since (A(t), S(t)) is not a continuous functional of the empirical measures of (A_i, σ_i)_{i∈N} (see Section 4). With a lot more work, we would be able to derive LDP lower bounds that are severely restricted, and the restriction would not be easy to understand, so we abstained from formulating any lower bound. ♦
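As a numerical illustration of Remark 1.10 (our own sketch, with parameters chosen ad hoc, not part of the paper): one can estimate −(1/t) log P(S(t) ≤ st) for a below-typical target s by brute-force Monte Carlo and watch it roughly stabilize as t grows, consistent with an exponential decay. Here we take CSMA with λ = 2 and κ = 2, for which the typical throughput is s_CSMA(2, 2) = 1.2 by (2.1) below, so S(t) ≤ 0.8t is a rare event; the estimates become noisy quickly, precisely because the events are rare.

```python
import math
import random

def csma_admissions(lam, kappa, horizon, rng):
    """One run of the kappa-channel CSMA model; counts admitted messages."""
    busy_until = [0.0] * kappa
    t, s = rng.expovariate(lam), 0
    while t < horizon:
        for c in range(kappa):
            if busy_until[c] <= t:   # idle channel found: admit
                busy_until[c] = t + 1.0
                s += 1
                break
        t += rng.expovariate(lam)
    return s

rng = random.Random(7)
lam, kappa, s_target, runs = 2.0, 2, 0.8, 20000
for t in (10, 20, 40):
    hits = sum(csma_admissions(lam, kappa, t, rng) <= s_target * t
               for _ in range(runs))
    p = max(hits, 1) / runs          # crude guard against p = 0
    print(t, p, -math.log(p) / t)    # decay-rate estimate per unit time
```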
1.4. Related literature. In [B48] and [K53], the CSMA model in continuous time is modeled with the help of a queue that expresses the number of messages that are present in the κ channels as a function of the time parameter. Both are restricted to the case of exponentially distributed service times, which leads to a Markov model whose invariant distribution can be calculated easily and explicitly. More general results involving arbitrary service time distributions can be found in [B76]. Here the process (Q_n)_{n∈N} of the numbers Q_n of messages in the channels at the time of the arrival of the n-th message is considered, and its limiting distribution is calculated explicitly depending on the arrival and service time distributions. This and additional ad-hoc methods make it possible to obtain information about the number of successes at a late time, e.g., a law of large numbers. Unfortunately, the process (Q_n)_{n∈N} does not have the Markov property, but an infinitely long memory. Hence, probabilistic formulas could not be derived, and large deviations for the throughput of the system could not be considered. In [RS90] one finds the throughput of the single-channel continuous-time CSMA, which is λ/(λ+1) and coincides with our formula for s_CSMA(λ, 1) in the case κ = 1. Another version of CSMA, namely slotted (single-channel) CSMA, has been studied more intensively than the continuous-time model (see [RS90], [GD11], [WLZ10] and [LST19]), and provides the same limiting throughput as the continuous-time model in the single-channel case. Let us mention that an analogous large-deviation analysis of multi-channel discrete-time versions of ALOHA and slotted ALOHA and CSMA is carried out in [KK22]. Since [A77], the single-channel pure ALOHA has been studied intensely (for a general overview see [LST19], [RS90] and [SBBB08]). The throughput is identified there as λe^{−2λ}, which also coincides with our result for s_ALOHA(λ, 1) in the special case κ = 1. In [SW95], [LST19], [RS90] and [SBBB08] one can also read about another, more popular and better known, single-channel version of ALOHA, namely the slotted ALOHA, with the higher throughput λe^{−λ}. The multi-channel case of this model has also been studied, e.g., in [SL12], where the throughput λe^{−λ/κ} has been calculated. See [KK22] for a derivation of this value via a large-deviation analysis with explicit rate functions. To the best of our knowledge, there are no similar results for the multi-channel model in continuous time in the literature yet; hence we think that our Lemma 1.1 is novel.

2. Expectation of the throughput

Let us derive formulas for the expected throughput in the two protocols, i.e., formulas for the expectation of the large-t limit of (1/t)S(t). In Section 2.1 and Section 2.2, respectively, we consider CSMA and ALOHA. This section has nothing to do with the Markov chains introduced in Section 1.3.2.

2.1. CSMA. We borrow some knowledge that was gained in [B76]. We pointed out in Section 1.4 that the CSMA system was analysed in [B76] with the help of a stochastic process Q = (Q_n)_{n∈N}, where Q_n denotes the number of messages in the κ channels (the number of busy channels) at the arrival time of the n-th message. The limiting distribution ν_Q of Q_n, as n goes to infinity, has been calculated there as ν_Q = Poi_λ|_{[0,κ]}, where we write Poi_λ|_{[0,κ]} for the Poisson distribution with parameter λ, conditioned on being ≤ κ.
It was also proved there that the limiting distribution of Q_n as n → ∞ coincides with the limiting distribution of the number of busy channels at a deterministic time t as t → ∞. Using this result, we obtain

s_CSMA(λ, κ) = λ ν_Q([0, κ−1]) = λ Poi_λ([0, κ−1]) / Poi_λ([0, κ]),   (2.1)

since the average number of successes is equal to the arrival rate λ, multiplied by the success probability, which is given by the second factor, as every newly arriving message can only be delivered successfully if there are at most κ−1 busy channels. In (2.1) one sees that s_CSMA(λ, κ) is increasing in λ and converges to κ as λ → ∞, which is intuitively clear, because most of the channels are likely to be busy if the arrival rate is high. Therefore, there is no interesting optimisation task over λ, since the throughput always gets better if the density of messages is increased. Taking into account also the number of unsuccessful messages (which explodes as λ → ∞) makes this issue more interesting, but we do not pursue this here.

2.2. ALOHA. Let us calculate the expected limiting throughput in the ALOHA protocol by hand. The expression that we obtain is good enough for also finding the optimal value of the density λ, at least for large κ. Since we are looking at the limit of late times, we need to analyse the ALOHA process in equilibrium. This can be realised by extending the PPP from [0, ∞) to R and considering its Palm measure given that one message arrives at time 0. Write A([a, b)) for the number of incoming messages during the time interval [a, b). Fortunately, the number of busy channels at time 0 depends only on the PPP during the time interval [−1, 0), i.e., on A([−1, 0)). Indeed, the probability of having at least one available channel and taking one of those can be written as an explicit expectation with respect to A([−1, 0)); we denote it by (2.2). Note that the PPP has the property that A([−1, 0)) does not depend on the incoming messages after time 0; these are the only ones that might influence the transmission success of the message that arrived at time 0. Hence, for the success probability we have to multiply the term in (2.2) with the probability that the message that arrived at time 0 does not get destroyed afterwards during its service time (0, 1]; since the later arrivals form a PPP(λ) and each of them picks the channel of this message with probability 1/κ, this probability is equal to

e^{−λ/κ}.   (2.3)

Hence, s_ALOHA(λ, κ) equals λ times the product of (2.2) and (2.3). In the case κ = 1 this is equal to λe^{−2λ}, which was already known; see Section 1.4. This is optimized at λ = 1/2, with value s_ALOHA(1/2, 1) = 1/(2e) ≈ 0.18.
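The CSMA formula (2.1) is easy to evaluate numerically; the following sketch (our own check, not from the paper) confirms the κ = 1 value λ/(λ+1) and the convergence of s_CSMA(λ, κ) to κ as λ → ∞.

```python
from math import factorial

def s_csma(lam, kappa):
    """lam * Poi_lam([0, kappa-1]) / Poi_lam([0, kappa]); the factor
    exp(-lam) cancels in the ratio, so partial sums suffice."""
    partial = lambda m: sum(lam**i / factorial(i) for i in range(m + 1))
    return lam * partial(kappa - 1) / partial(kappa)

print(s_csma(1.0, 1))                                      # 0.5 = 1/(1+1)
print([round(s_csma(l, 3), 3) for l in (1, 5, 20, 100)])   # tends to 3
```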
3. Markov approach

In this section, we introduce suitable Markov chains for both protocols, CSMA and ALOHA, that are able to describe the number of successful and unsuccessful sending attempts by time t. Again, we keep λ ∈ (0, ∞) and κ ∈ N fixed.

3.1. Markov approach to CSMA. Let us model the CSMA protocol in terms of a stochastic process in discrete time. Recall that 0 < T_1 < T_2 < T_3 < … denotes all the times at which a message comes in and asks to be admitted to one of the κ channels. According to our assumptions, (T_i)_{i∈N} is a standard Poisson point process (PPP) in [0, ∞) with parameter λ, and we denote τ_i = T_i − T_{i−1}. It is convenient to introduce the counting process N defined by N(I) = #{i ∈ N : T_i ∈ I} (the number of sending attempts during the time interval I) for intervals I. Then A(t) = N([0, t]) is the number of attempts by time t. By (T̃_i)_{i∈N} we denote the subsequence (T_{k(i)})_{i∈N} of (T_i)_{i∈N} of all those times T_j at which the incoming message is admitted to a channel (i.e., at which not all the κ channels are busy); then the delivery takes place during the time interval [T_{k(i)}, T_{k(i)} + 1], and at time T_{k(i)} + 1 the message is successfully delivered. We introduce the counting process N^{(s)} defined by N^{(s)}(I) = #{j ∈ N : T̃_j ∈ I} for measurable sets I; then N^{(s)}([0, t−1]) is the number of successfully delivered messages during the time interval [0, t]. We put σ_i = T̃_i − T̃_{i−1}, and we register the number A_i of sending attempts in the time interval (T̃_{i−1}, T̃_i]. In this way, we are able to express the main quantities, A(t) and S(t), in terms of the sequence (A_i, σ_i)_{i∈N}, whose state space is equal to Σ = N × (0, ∞). Therefore, we want to describe its distribution. It turns out that it is in general not a Markov chain, but a (κ−1)-Markov chain, i.e., a stochastic process with a memory of length ≤ κ−1:

Lemma 3.1 (Markovian structure of (A_i, σ_i)_{i∈N} in the CSMA case). The sequence (A_i, σ_i)_{i∈N} is a time-homogeneous (κ−1)-Markov chain with the kernel W_CSMA from Σ^{κ−1} to Σ defined in (1.5).

Proof. Let us fix i ∈ N and identify the conditional distribution of (A_{i+1}, σ_{i+1}) given (A_j, σ_j)_{j≤i}. This will turn out to be the same as the conditional distribution given the (κ−1)-past (A_j, σ_j)_{j∈{i−κ+2,…,i}}, and W_CSMA will turn out to be a version of this conditional distribution; this will finish the proof of the lemma. Conditioning on (A_j, σ_j)_{j≤i} includes conditioning on (T̃_j)_{j≤i} = (T_{k(j)})_{j≤i}. The next arrival time T_j with an idle channel after time T̃_i is the first T_j after T̃_i such that no more than κ−1 messages are in the κ channels at this time. Since all the messages that are currently in the system have arrived within the last time unit, we can say that this next T_j is the first T_j after T̃_i such that in the time interval (T_j − 1, T_j) the number of the T̃_k is smaller than κ. In terms of the time differences, we see that σ_{i+1} = τ_{k(i)+1} + τ_{k(i)+2} + ⋯ + τ_{k(i+1)}, with γ_i := (1 − Σ_{n=i−κ+2}^{i} σ_n)^+ denoting the time after T̃_i at which a channel becomes free again. In other words, given (σ_j)_{j≤i}, the conditional distribution of σ_{i+1} is that of the first point of a PPP(λ) after time γ_i. Using the well-known memoryless property of the PPP, we see that this distribution is the distribution of γ_i + X, where X is an independent Exp(λ)-distributed random variable. This distribution has the density s ↦ λ e^{−λ(s−γ_i)} 1_{[γ_i,∞)}(s).

The fact that (A_i, σ_i)_{i∈N} is a (κ−1)-Markov chain can obviously also be rephrased in terms of the sequence of subsequent (κ−1)-vectors:

Corollary 3.2. Equivalently, one can formulate Lemma 3.1 by saying that the vectors

R^{(CSMA)}_i := ((A_{i−κ+2}, σ_{i−κ+2}), …, (A_i, σ_i))   (3.3)

form a time-homogeneous Markov chain (R^{(CSMA)}_i)_{i∈N} on the state space Σ^{κ−1} with the transition kernel P_CSMA defined by

P_CSMA(((a_1, t_1), …, (a_{κ−1}, t_{κ−1})), d((b_1, s_1), …, (b_{κ−1}, s_{κ−1}))) = δ_{((a_2,t_2),…,(a_{κ−1},t_{κ−1}))}(d((b_1, s_1), …, (b_{κ−2}, s_{κ−2}))) W_CSMA(((a_1, t_1), …, (a_{κ−1}, t_{κ−1})), (b_{κ−1}, ds_{κ−1})).   (3.4)

In Section 4 we will need the following strong ergodicity property of the Markov chain (R^{(CSMA)}_i)_{i∈N}. By P^i we denote the i-th power of P (in the sense of 'matrix' multiplication), i.e., the i-step transition kernel, for i ∈ N.

Condition (U). We say that a Markov chain on a Polish space Σ with transition kernel P satisfies (U) if there exist ℓ, N ∈ N satisfying ℓ ≤ N and a constant M ∈ [1, ∞) such that

P^ℓ(x, ·) ≤ M P^N(y, ·) for all x, y ∈ Σ.   (3.5)

Lemma 3.3 (Uniform ergodicity of (R^{(CSMA)}_i)_{i∈N}). For any λ ∈ (0, ∞) and κ ∈ N, the Markov chain (R^{(CSMA)}_i)_{i∈N} satisfies (U).
Proof. Instead of the transition kernel P_CSMA, it will be sufficient to work with the kernel W_CSMA. We write W^{(i)}_CSMA for the i-th power of the kernel W_CSMA. We will show the existence of a constant M̃ such that

W^{(κ+1)}_CSMA((a, t), (k, ds))/ds ≤ M̃ W^{(κ+1)}_CSMA((ã, t̃), (k, ds))/ds   (3.6)

for a, ã ∈ N^{κ−1}, t, t̃ ∈ (0, ∞)^{κ−1}, k ∈ N, s ∈ (0, ∞). It is clear that (U) follows from that assertion with ℓ = N = κ+1 and M = M̃. From (1.5) we see that actually both sides of (3.6) depend neither on a nor on ã. We write both sides of (3.6) in terms of random variables, more precisely in terms of a (κ−1)-Markov chain (σ_i)_{i∈{−κ+2,−κ+3,…}}, using the notation E_t(·) = E(· | σ_{−κ+1+i} = t_i for all i ∈ [κ−1]); then (3.6) is equivalent to the corresponding inequality between the resulting conditional expectations under E_t and E_t̃. We are going to find a lower bound for the expectation on the right by restricting to the event {σ_1 > 1}, on which σ_2, …, σ_κ are independent Exp(λ)-distributed variables (this reflects the fact that, if no new message arrives for more than one time unit, then all channels are empty and the next κ incoming messages will find a free channel). Furthermore, we will derive an upper bound for the left-hand side in terms of a multiple integral involving such random variables.

3.2. Markov approach for the ALOHA protocol. Now we turn to a similar treatment of the ALOHA protocol. We adopt all the notation from Section 3.1; that is, we fix λ ∈ (0, ∞) and κ ∈ N and assume that (T_i)_{i∈N} is a standard PPP(λ) (the sequence of times at which a message comes in and requires a channel for being transmitted) and that N((a, b]) is the number of Poisson points in the time interval (a, b] for any a < b. Recall that, in the ALOHA protocol, each incoming message jumps into a randomly picked one of the κ channels, regardless of whether it is idle or busy. If it is busy, then it destroys the message that is currently in the channel, and itself as well. As a consequence, the new message is rejected immediately, i.e., it does not get access to the system, while the old one first remains in the channel until its service time is over and leaves after one time unit without having been successfully delivered. However, if the channel is idle, then the new message is only potentially successful, since it can still be destroyed during the service time by a newly arriving one that picks this channel. This uncertain situation remains until one time unit after the entry into the channel; then the message is successfully delivered if it has not been cancelled by then. We consider the sequence (T̃_i)_{i∈N} of all the times at which an incoming message picks an idle channel, a subsequence of (T_i)_{i∈N}. We again put σ_i = T̃_i − T̃_{i−1} and A_i = N((T̃_{i−1}, T̃_i]), which is 1 plus the number of incoming messages that jump into some busy channel and therefore destroy the message therein. Recall that Σ = N × (0, ∞).

Lemma 3.4 (Markovian structure of (A_i, σ_i)_{i∈N} in the ALOHA case). The sequence (A_i, σ_i)_{i∈N} is a (κ−1)-Markov chain with the kernel W_ALOHA from Σ^{κ−1} to Σ defined in (1.6).

Proof. We keep i ∈ N fixed, condition on (A_j, σ_j)_{j≤i} and examine the distribution of (A_{i+1}, σ_{i+1}). Let us first examine the density of the probability of the event {σ_{i+1} = s}. At time T̃_{i+1} there must be at least one free channel in order for the arriving message to access the system. Hence, T̃_{i+1} must come after time T̃_i + γ_i, where γ_i := (1 − Σ_{n=i−κ+2}^{i} σ_n)^+, as in the CSMA model; that is, σ_{i+1} > γ_i. However, this time T̃_{i+1} is not necessarily the first point of the PPP after T̃_i + γ_i, but the first Poisson point after T̃_i + γ_i at which an idle channel is picked. Hence, we have to calculate the probability of picking a free channel at a Poisson time point. For this, we need to know the number of free channels at any arbitrary time after T̃_i + γ_i. So let T̃_i + s, s > γ_i, be this arbitrary time.
If s > 1, at least one time unit has passed without new incoming messages after T̃_i, which means that all channels are idle again at T̃_i + s. If s ≤ 1, at least one channel is busy at time T̃_i + s, as there is at least one message, namely the one that arrived at T̃_i, whose service time is not over yet. Of course, there could be more messages still remaining in the system, depending on s, and we have to determine this relation. It is clear that if additionally s + σ_i ≤ 1, then the message that arrived at T̃_{i−1} is also still in the system, so there are at least two busy channels at time T̃_i + s. Analogously, there must be at least 3 occupied channels if additionally s + σ_i + σ_{i−1} ≤ 1, as the service time of the message that arrived at T̃_{i−2} is also not over yet. We see step by step that, if s + Σ_{k=0}^{κ−3} σ_{i−k} ≤ 1, we have at least κ−1 busy channels, and if s + Σ_{k=0}^{κ−2} σ_{i−k} ≤ 1, all κ channels must be busy, as the delivery of all of the last κ messages is still pending. Hence, the number of busy channels at time T̃_i + s is given, for s > γ_i, by

β(s) = Σ_{m=0}^{κ−1} 1{ s + Σ_{k=0}^{m−1} σ_{i−k} ≤ 1 }   (3.8)

(the inner sum being empty for m = 0; we use the convention β(s) := 0 for s ∈ [0, γ_i]). Then it is clear that the probability of randomly picking a busy resp. free channel at time T̃_i + s is equal to β(s)/κ resp. 1 − β(s)/κ. So the first point of the PPP after T̃_i + γ_i coincides with T̃_{i+1} only with probability 1 − β(s)/κ (on the event {σ_{i+1} = s}). This yields the density

s ↦ λ e^{−λ(s−γ_i)} (1 − β(s)/κ)

for the probability that the first message after time T̃_i + γ_i picks a free channel. Now we consider, for any n ∈ N, the event that T̃_{i+1} = T̃_i + s is the n-th point of the PPP after T̃_i + γ_i, and the first one at which an idle channel is picked. On the event that there are precisely n−1 Poisson points in the interval (T̃_i + γ_i, T̃_{i+1}) and another one at T̃_{i+1} = T̃_i + s, the density of these n Poisson points is equal to

(s_1, …, s_{n−1}, s) ↦ 1{γ_i < s_1 < s_2 < ⋯ < s_{n−1} < s} λ^n e^{−λ(s−γ_i)} ds_1 ⋯ ds_{n−1} ds.

On this event, the probability that the first n−1 of them pick a busy channel and the last one an idle one is equal to (Π_{j=1}^{n−1} β(s_j)/κ)(1 − β(s)/κ). In order to obtain the density of σ_{i+1}, we need to integrate over all these s_1, …, s_{n−1} and to sum over n ∈ N. Hence, the conditional distribution of σ_{i+1} is given by the density

s ↦ λ (1 − β(s)/κ) exp( −λ(s−γ_i) + (λ/κ) ∫_{γ_i}^{s} β(u) du ) 1_{(γ_i,∞)}(s),

where we used the exponential series and recall (3.8), now with γ_i instead of γ (observe that β(s) = 0 on [0, γ_i]). Now we look at the intersection of {σ_{i+1} = s} with the event {A_{i+1} = k} for k ∈ N. Here we have k−1 unsuccessful attempts during (T̃_i, T̃_{i+1}]; indeed, these are all the Poisson points in the interval (T̃_i, T̃_i + γ_i], plus the ones in the interval (T̃_i + γ_i, T̃_{i+1}) that failed to pick an idle channel, and these two numbers are independent by the properties of the PPP. The number of the first ones has a Poisson distribution with parameter λγ_i, and that of the latter ones has been examined above. Hence, the conditional distribution of A_{i+1} is the convolution of these two, which yields (1.6) with γ_i in place of γ, where we used the binomial theorem. In particular, we see that we again have a (κ−1)-Markov chain, as the transition probability depends only on σ_i, …, σ_{i−κ+2} (and, by the way, not at all on the A_j's).

Analogously to the CSMA case in Section 3.1, the sequence of (κ−1)-vectors R^{(ALOHA)}_i of (A_i, σ_i)_{i∈N}, defined as in (3.3), forms a Markov chain on the state space Σ^{κ−1} with a transition kernel P_ALOHA that is defined analogously to (3.4). Also the analogue of Lemma 3.3 holds:

Lemma 3.5 (Uniform ergodicity of (R^{(ALOHA)}_i)_{i∈N}).
For the ALOHA protocol, for any λ ∈ (0, ∞) and κ ∈ N, the Markov chain (R^{(ALOHA)}_i)_{i∈N} satisfies (U).

Proof. For our goal it is sufficient to show the existence of some M > 0 such that a (κ+1)-step analogue of (3.6) holds, for each t, t̃ ∈ (0, ∞)^{κ−1}, s > 0, k ∈ N and n ≤ k. We will show this even for any non-negative measurable function G (with the same constant M), by showing the corresponding inequality for the respective densities of (σ_2, …, σ_κ) under P_t and P_t̃.

4. Large deviations

In this section, we prove a large-deviation upper bound for the pair (1/t)(A(t), S(t)) for both protocols. We can make most of the steps jointly for both protocols. The basis of our large-deviation analysis is the sequence of empirical pair measures and of empirical κ-string measures L^κ_n of the Markov chain that we introduced in Section 1.3.2. The two LDPs for (L^κ_n)_{n∈N} as n → ∞ are easily derived from general theory, and the main object, (A(t), S(t)), is a kind of time-inverse of n ↦ (⟨π_1, L^κ_n⟩, ⟨π_2, L^κ_n⟩). However, there are two problems left: the latter is a priori not a continuous functional of L^κ_n, and we need to make the step from an LDP for this pair to the pair (A(t), S(t)). These two major steps will be done in Lemmas 4.3 and 4.4. However, we were not able to overcome the lack of continuity of µ ↦ ⟨π_2, µ⟩ and cannot derive a full LDP for (A(t), S(t)).

Let us abbreviate Σ = N × (0, ∞) and let * ∈ {CSMA, ALOHA}. We introduce the empirical pair measure of the Markov chain (R^*_i)_{i∈N_0} defined in (3.3),

L^{(2)}_n := (1/n) Σ_{i=1}^{n} δ_{(R^*_{i−1}, R^*_i)}.

In this expression, we assume periodic boundary conditions, i.e., R^*_0 = R^*_n. Then L^{(2)}_n satisfies the marginal property: its two marginal measures are equal to each other. We denote by M^{(s)}_1(Σ^{κ−1} × Σ^{κ−1}) the set of probability measures ν on Σ^{κ−1} × Σ^{κ−1} that satisfy this marginal property, and write ν̄ for either of the two marginal measures of ν. (The assumption R^*_0 = R^*_n is only of a technical nature and could also be dropped without any problem, but we will not elaborate on that minor point.) Since (R^*_i)_{i∈N_0} satisfies Condition (U) by Lemmas 3.3 and 3.5, respectively, [DS89, Exercise 4.1.48] says that there is an invariant distribution ν_* of (R^*_i)_{i∈N_0}. Then, by [DS89, Lemma 4.1.45], the empirical pair measures (L^{(2)}_n)_{n∈N} converge almost surely towards ν_* ⊗ P_*. Furthermore, we even get good control on the rate of this convergence: by [DZ10, Cor. 6.5.10 and Th. 6.5.12], the empirical pair measures (L^{(2)}_n)_{n∈N} satisfy an LDP on M_1(Σ^{κ−1} × Σ^{κ−1}) with rate function

ν ↦ H(ν | ν̄ ⊗ P_*)   (4.1)

if ν ∈ M^{(s)}_1(Σ^{κ−1} × Σ^{κ−1}) and ν is absolutely continuous with respect to ν̄ ⊗ P_*, and ∞ otherwise. The term in (4.1) is called the relative entropy of ν with respect to ν̄ ⊗ P_*.

Write Λ(A, B) for the limiting cumulant generating function in (4.6), i.e., Λ(A, B) := lim_{n→∞} (1/n) log E_*[exp(n(A⟨π_1, L^κ_n⟩ + B⟨π_2, L^κ_n⟩))]. If Λ were differentiable, then the Gärtner-Ellis theorem would provide also the corresponding lower bound and hence a full LDP. However, in our case we do not know whether this is true, due to the discontinuity of the mappings µ ↦ ⟨π_i, µ⟩ for i ∈ {1, 2} in the weak topology of probability measures, since π_i is not bounded. So let us identify the limit in (4.6), which will be done with the help of the LDP for (L^κ_n)_{n∈N}. Also here, we are facing the serious problem of the unboundedness of π_1 and π_2, but we found a way around this. Indeed, we absorb the π_2-part into the transition kernel and employ a cutting argument for the π_1-part. We now write E_* = E^{(λ)}_* for the expectation with respect to our Markov chain (stressing the arrival parameter λ of the underlying PPP). The following trick absorbs the π_2-integral into the transition kernel.
For this, we write W^{(λ)}_* = W_* to stress the parameter λ in the transition kernel of our Markov chain, and we introduce the transformed kernels

W^{(A,B,λ)}_*((a, t), (k, ds)) := e^{Ak+Bs} W^{(λ)}_*((a, t), (k, ds)).

Then we observe the identity (4.8); as a consequence, we obtain (4.9). We have from (4.8) that

E^{(λ)}_*[e^{n(A⟨π_1, L^κ_n⟩ + B⟨π_2, L^κ_n⟩)}] = E^{(λ−B)}_*[e^{nD⟨π_1, L^κ_n⟩}],   n ∈ N,

for a suitable constant D = D(A, B, λ). For definiteness, assume that D > 0; the opposite case is almost the same. Since we can bound π_1 ≥ π_1 ∧ m from below, since µ ↦ ⟨π_1 ∧ m, µ⟩ is continuous, and since (L^κ_n)_{n∈N} satisfies an LDP with rate function µ ↦ H(µ | µ^{(κ−1)} ⊗ W_*), Varadhan's lemma yields the lower bound (4.10); here we used (4.9). We will show in the following that also the complementary inequality to (4.10) holds, which shows that (4.6) holds with Λ = M. This finishes the proof of the lemma, since it easily follows from (4.11) that J_* defined in (4.3) is the Legendre transform of M = Λ. For the estimate in the opposite direction, we need to employ a cutting argument as follows. For any m ∈ N and δ > 0,

E^{(λ−B)}_*[e^{nD⟨π_1, L^κ_n⟩}] = E^{(λ−B)}_*[e^{nD⟨π_1, L^κ_n⟩} 1{D|⟨π_1 − π_1∧m, L^κ_n⟩| ≤ δ}] + E^{(λ−B)}_*[e^{nD⟨π_1, L^κ_n⟩} 1{D|⟨π_1 − π_1∧m, L^κ_n⟩| > δ}] ≤ e^{δn} E^{(λ−B)}_*[e^{nD⟨π_1∧m, L^κ_n⟩}] + E^{(λ−B)}_*[e^{nD⟨π_1, L^κ_n⟩} 1{D|⟨π_1 − π_1∧m, L^κ_n⟩| > δ}].

We need to show that the exponential large-n rate of the last term is arbitrarily small if m is picked large; then the complementary inequality to (4.10) follows. From Lemmas 3.1 and 3.4, respectively, we know that, given (σ_i)_{i∈N}, under E^{(λ−B)}, the random variables A_1, …, A_n are independent and have the distribution of 1 plus a Poi_{λ_i}-distributed random variable, where λ_i = (λ−B)γ_i in the CSMA case and λ_i = (λ−B)(γ_i + (1/κ)∫_{γ_i}^{σ_i} β(u) du) in the ALOHA case, with γ_i = (1 − Σ_{j=1}^{κ−1} σ_{i−j})^+. In any case, we have λ_i ≤ 2λ for any i and can estimate accordingly. Hence, writing E for the expectation with respect to Poi_{2λ} and using the exponential Chebyshev inequality with some K > 0, we arrive at (4.12). We need to show that the last term in the brackets in (4.12) can be made arbitrarily large as m → ∞ with an appropriate choice of K = K(m). We estimate that term using an index shift and the bound 1/(k+m)! ≤ 1/(k! m!). Now it is easy to see that one can pick K = K(m) → ∞ in such a way that the term in the brackets on the right of (4.12) diverges to ∞ (take K of order log m). This finishes the proof of the complementary inequality to (4.10) and therefore finishes the proof of the lemma.

Now we can prove the main result of this section, the LDP upper bound for (1/t)(A(t), S(t)); this will finish the proof of Theorem 1.6.

Proof. Let us explain our proof strategy. As we already mentioned above, the sequence (L^{(2)}_n)_{n∈N} of empirical pair measures of the Markov chain (R^*_i)_{i∈N_0} satisfies an LDP with rate function given in (4.1). Then it is clear that the sequence (L^κ_n)_{n∈N} of empirical κ-string measures satisfies an LDP with rate function given in (4.2). It will turn out that (A(t), S(t)) can be expressed in terms of the partial sums

Σ_{i=1}^{n} A_i = n⟨π_1, L^κ_n⟩ and Σ_{i=1}^{n} σ_i = n⟨π_2, L^κ_n⟩.   (4.13)

We will derive the upper bound of the LDP for (1/t)(A(t), S(t)) from this, in combination with the upper bound of Lemma 4.3 for the pair (⟨π_1, L^κ_n⟩, ⟨π_2, L^κ_n⟩). This derivation will be done heuristically in Step 1 of the proof. Since (A(t), S(t)) is basically a pair of time-inverses, probabilities of events with one-sided inequalities like {A(t) < at, S(t) > st} for a, s ∈ (0, ∞) are relatively easy to handle, and this we will do in Step 2.
The LDP upper bound for (1/t)(A(t), S(t)) will be proved in Step 3, while the proof of the exponential tightness is contained in Step 2.

Step 1: Heuristics. We assume that (⟨π_1, L^κ_n⟩, ⟨π_2, L^κ_n⟩) satisfies the LDP with rate function J_* given in (4.3) and derive heuristically the LDP for (1/t)(A(t), S(t)) from that. Let us first treat the CSMA protocol. Fix a, s > 0. Using (4.13), we see (ignoring that at and st may not be integers) that, as t → ∞,

P(S(t) ≈ st, A(t) ≈ at) = P(⟨π_2, L^κ_{st}⟩ ≈ 1/s, ⟨π_1, L^κ_{st}⟩ ≈ a/s) ≈ exp(−st J_CSMA(a/s, 1/s)) = exp(−t I_CSMA(a, s)).   (4.14)

This finishes the heuristics for the CSMA case. In the ALOHA case, let S̃(t) be the number of potentially successful messages that arrive by time t, i.e., those that pick a free channel at arrival. Since, every time a newly arriving message picks a busy channel, the old one that is already in this channel also gets lost, the number of successfully delivered messages by time t is obtained by subtracting the number of newly arriving messages taking a busy channel from the number of potentially successful messages. The former is equal to A_i − 1 in each interval (T̃_i, T̃_{i+1}]. Considering S̃(t) potentially successful messages and therefore S̃(t) intervals, this means that we have

S(t) = S̃(t) − Σ_{i=1}^{S̃(t)} (A_i − 1) = 2 S̃(t) − A(t).   (4.15)

Furthermore, Σ_{i=1}^{S̃(t)} σ_i ≈ t, as we have already seen in the case of CSMA. Now let s, a > 0; then we have, as t → ∞,

P(S(t) ≈ st, A(t) ≈ at) ≈ P(S̃(t) ≈ ½(a+s)t, A(t) ≈ at) ≈ P(⟨π_2, L^κ_{(s+a)t/2}⟩ ≈ 2/(s+a), ⟨π_1, L^κ_{(s+a)t/2}⟩ ≈ 2a/(s+a)) ≈ exp(−t I_ALOHA(a, s)),

which finishes the heuristics.

Step 2: Exponential rates for quadrants. As we mentioned, one-sided inequalities for S(t) and A(t) are relatively easy to handle. We demonstrate this by showing, as a first step, the exponential tightness of ((1/t)(A(t), S(t)))_{t>0}. Indeed, for any K > 0,

P((1/t)(A(t), S(t)) ∈ ([0, K]²)^c) ≤ P(A(t) > tK) + P(S(t) > tK).

Then, using also (4.13), we estimate

P(S(t) > tK) = P(Σ_{i=1}^{⌈tK⌉} σ_i < t) ≤ P(⟨π_2, L^κ_{tK}⟩ ≤ 1/K) ≤ e^{−tK inf_{(0,1/K]} J̃ + o(t)},

where J̃ denotes the rate function for (⟨π_2, L^κ_n⟩)_{n∈N}. Then it is easy to see that K inf_{(0,1/K]} J̃ → ∞ as K → ∞; hence exponential tightness follows. We further use the simple relation between the partial sums of the σ_i and (A(t), S(t)) to prove that, for any (a, s) ∈ (0, ∞)² with both partial derivatives ∂_a I_*(a, s) and ∂_s I_*(a, s) positive,

limsup_{t→∞} (1/t) log P(A(t) ≥ at, S(t) ≥ st) ≤ −I_*(a, s).   (4.16)

Analogous statements for all the other sign combinations of the partial derivatives with the respective quadrants are also true and are proved in the same manner; we omit these proofs. Since I_* is continuous, we can freely replace the closed set [a, ∞) × [s, ∞) by (a, ∞) × (s, ∞). We prove now (4.16) for the CSMA case; the other one is similar and will be omitted. Fix (a, s) such that ∂_a I_*(a, s) and ∂_s I_*(a, s) are both positive. For showing (4.16), we see (again using (4.13)) that P(A(t) ≥ at, S(t) ≥ st) can be bounded via the upper bound in the LDP for L^κ_{ts}, using afterwards that I_CSMA(a, s) = s J_CSMA(a/s, 1/s), and then that it is continuous and assumes its infimum over [a, ∞) × [s, ∞) in the corner of this quadrant. The latter comes from the convexity of I_CSMA and the positivity of the two partial derivatives. For handling the case where one of the two partial derivatives is zero, we claim that

limsup_{t→∞} (1/t) log P(S(t) ≥ st) ≤ −I_*(a, s) whenever ∂_a I_*(a, s) = 0.   (4.17)

The proof of this is similar to the proof of (4.16), but estimates against the half-plane [0, ∞) × [s, ∞) and uses that the infimum of I_* over [0, ∞) × [s, ∞) is attained at (a, s), by convexity of ã ↦ I_*(ã, s) and because of ∂_a I_*(a, s) = 0. We omit the details.

Step 3: Proof of the upper bound.
We use the fact that an exponentially tight family (X_t)_{t>0} of random variables satisfies the LDP upper bound with rate function I on a Polish space X if

lim_{ε↓0} limsup_{t→∞} (1/t) log P(X_t ∈ B_ε(x)) ≤ −I(x),   x ∈ X.   (4.18)

The proof of this fact is an elementary exercise using standard compactness arguments; we omit it. For points (a, s) at which both partial derivatives of I_* are non-zero, (4.18) follows from the quadrant estimates of Step 2. It remains to handle the case where one of the two partial derivatives is equal to zero. If both are, then (a, s) is the unique minimal point (a_min, s_min) of I_*, and the exponential rate is equal to zero, which is equal to I_*(a_min, s_min). The remaining case, that precisely one of the two partial derivatives vanishes, can be handled either by an approximation argument (using the continuity of I_*) or by appealing to (4.17); we omit the details.
Lower Number of Teeth Is Related to Higher Risks for ACVD and Death—Systematic Review and Meta-Analyses of Survival Data

Tooth loss reflects the endpoint of two major dental diseases: dental caries and periodontitis. These comprise 2% of the global burden of human diseases. A lower number of teeth has been associated with various systemic diseases, in particular atherosclerotic cardiovascular diseases (ACVD). The aim was to summarize the evidence for tooth loss being related to the risk of ACVD or death. Cohort studies with prospective follow-up data were retrieved from Medline-PubMed and EMBASE. Following the PRISMA guidelines, two reviewers independently selected articles, assessed the risk of bias, and extracted data on the number of teeth (tooth loss; exposure) and on ACVD-related events and all-cause mortality (ACM) (outcome). A total of 75 articles were included, of which 44 qualified for meta-analysis. A lower number of teeth was related to a higher outcome risk; the pooled risk ratio (RR) for the cumulative incidence of ACVD ranged from 1.69 to 2.93, and for the cumulative incidence of ACM, the RR ranged from 1.76 to 2.27. The pooled multiple-adjusted hazard ratio (HR) for the incidence density of ACVD ranged from 1.02 to 1.21, and for the incidence density of ACM, the HR ranged from 1.02 to 1.30. This systematic review and these meta-analyses of survival data show that a lower number of teeth is a risk factor for both ACVD and death. Health care professionals should use this information to inform their patients, increase awareness of the importance of good dental health, and increase efforts to prevent tooth loss.

INTRODUCTION

In general, having fewer than 32 natural teeth in adults reflects the endpoint of dental caries and periodontitis. Tooth loss at a younger age is mainly due to caries; at older ages, it is due to periodontitis. The cumulative incidence of caries peaks before the age of 30, while for periodontitis it peaks between 20 and 40 years of age. The worldwide prevalence of severe tooth loss (≤9 remaining teeth) is 2.4% (1,2). Tooth loss leads to reduced masticatory function, poorer nutritional status and unhealthy dietary changes, low self-esteem and quality of life, and negative general health (3-5). The burden of disease of severe caries, severe periodontitis, and the consequent tooth loss comprises 2% of the global burden of human diseases (6). Apart from genetic and biological determinants, caries and periodontitis (and subsequent tooth loss) share several risk factors. In particular, hyposalivation, smoking, dysbiotic oral biofilms, and dietary fermentable carbohydrates contribute to their occurrence, while diabetes, obesity, and rheumatoid arthritis have been shown to be associated with both oral diseases (3,4). While material, behavioral, cultural, and psychosocial factors have been shown to contribute to the risk of both oral diseases and atherosclerotic cardiovascular diseases (ACVD) (7,8), a biomedical connection between tooth loss and ACVD has also been investigated (9,10). Shared pathways for oral diseases and ACVD have been proposed and studied; notably, short-lived bacteremias in periodontitis give rise to low-grade systemic inflammation and pro-inflammatory action and may contribute to the onset of atherosclerosis (11,12). The atherosclerotic process may eventually lead to the occurrence of ACVD events (morbidity or mortality), like coronary heart disease (CHD), cerebrovascular accidents, and peripheral arterial disease (13,14).
ACVD is responsible for up to 54% of deaths in the United States and 45% of deaths in Europe (15,16). Although the relationship between tooth loss and ACVD-related events and all-cause mortality (ACM) has been studied often, conflicting results have been reported, and solid evidence is lacking (17-20). Previously published systematic reviews did not allow for solid conclusions about the relationship between the number of teeth and ACVD, as they included a small number of studies (20) or restricted the outcome to ACM only (19). This systematic review and meta-analysis reports up-to-date evidence about the relationship between tooth loss, ACVD-related events, and ACM.

METHODS

This systematic review and these meta-analyses were conducted in accordance with the guidelines of the Preferred Reporting Items of Systematic Reviews and Meta-analyses (PRISMA Statement) and the guidelines for Meta-analysis Of Observational Studies in Epidemiology (MOOSE Guidelines) (Supplementary Files 1, 2) (21,22).

Research Question

Is tooth loss, in particular a lower number of present teeth, related to an increased risk of ACVD-related events (morbidity or mortality) and ACM?

Study Retrieval

Any study that evaluated the relationship between the number of teeth and ACVD-related morbidity, ACVD-related mortality, or ACM was retrieved from PubMed-Medline (National Library of Medicine, Washington, D.C.) and EMBASE (Excerpta Medica Database by Elsevier). We report on studies included in these databases before June 17, 2020. The search was conducted by NB (for details on the search terms used, see Supplementary File 3). There were no additional records identified through other sources, all studies were available, and it was therefore decided that there was no need to contact the authors of identified publications.

Study Selection

We included cohort studies reported in the English language that evaluated a prospective relationship between the number of teeth and ACVD events or ACM at any follow-up time. Study participants, regardless of age, with a known (categorical or linearly scaled) number of teeth at study inclusion were followed longitudinally to assess the occurrence of ACVD events or ACM. Cumulative incidences and incidence densities were calculated. Events were based on information from hospital admissions (ICD codes, medical records), death certificates, self-report, or questionnaires. Cross-sectional studies, case-control studies, review articles, letters, personal opinions, book chapters, conference abstracts, patents, and articles written in a language other than English were excluded. Two reviewers (NB and NS) independently screened the titles of the retrieved studies based on the eligibility criteria mentioned above. The studies were categorized as definitely not eligible (notably, to be excluded), definitely eligible, or to be decided. Next, the reviewers screened the abstracts of the records that had been judged as to be decided, following the same approach. Thereafter, full texts were obtained for studies that met the inclusion criteria during screening based on the abstract, or that then remained to be decided. Eligibility for final inclusion of publications was based on full-text reading. The reviewers resolved initial disagreements by consensus discussion. For the reference list of the included studies, see Supplementary File 4.
Risk of Bias Assessment

Two reviewers (NB and NS) used the ROBINS-E Tool (Risk Of Bias In Non-Randomized Studies-of Exposures) to assess the methodological quality and potential risk of bias of the included studies. The ROBINS-E tool comprises seven domains of bias: confounding, selection of participants into the study, classification of exposures, departures from intended exposures, missing data, measurement of outcomes, and selection of the reported result. Each domain was categorized as a low, moderate, or serious risk of bias. Judgments within each domain were summarized into an overall risk of bias assessment for each study (23). If one domain was judged as a serious risk of bias, the overall risk of bias of that study was assessed as serious. If all the domains were judged as low risk of bias, the overall risk of bias of that study was assessed as low. Otherwise, the study was judged to have a moderate risk of bias.

Data Extraction

The following study data reported in the articles that met the selection criteria were extracted: author names, country, study design (inception or non-inception cohorts), the total number of participants and their demographic characteristics (age and sex), number of remaining teeth, number of participants with incident ACVD-related events or ACM, and follow-up time. Based on this information, it was possible to calculate the crude risk ratio (RR) with a 95% confidence interval (95% CI) for the cumulative incidence of ACVD-related events and ACM. In addition, the crude and adjusted hazard ratios (HR) with 95% CI for the incidence density of ACVD-related events and ACM in different categories of the remaining number of teeth, if reported, were also obtained from the articles. The crude RR and HR were defined as the RR or HR of the number of teeth for ACVD-related events and ACM, not adjusted for other variables available in the included studies. The adjusted HR was defined as the HR adjusted for age and sex only or for multiple variables, such as education level, socioeconomic status, lifestyle habits, or general health, available in the included studies (see Supplementary Tables 1, 2). The indicators for the potential risk of bias of the included studies comprised the approach used to count the number of remaining teeth, patients' ACVD status at baseline, and the follow-up rate. In this review, the approaches to counting the remaining number of teeth were classified into two categories: (1) based on clinical examination and (2) based on patients' self-reporting. Patients' ACVD status at baseline was classified into two categories: (1) ACVD status was reported and adjusted for/excluded at baseline, and (2) ACVD status was reported without adjustment/exclusion at baseline, or no information was given. The follow-up rate was classified into two categories: (1) low follow-up rate (<80%) and (2) high follow-up rate (≥80%). All the data were independently extracted by NB and NS, cross-checked, and discussed to resolve possible disagreements.

Data Analyses

Cumulative incidence is the measure of the occurrence of new ACVD-related events or ACM during the follow-up period. To compare the cumulative incidence of ACVD-related events and ACM between different groups of the number of remaining teeth, network meta-analyses with random-effects models were carried out using the total number of study participants and the number of ACVD-related events or ACM.
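To make the crude RR calculation described under Data Extraction concrete, the following minimal R sketch computes a risk ratio and its 95% CI from extracted event counts using the standard log-RR method. The counts below are hypothetical and purely illustrative; the review's actual analyses were performed in R and Review Manager.

# Hypothetical extracted counts for one study (not data from the review):
e1 <- 40; n1 <- 500   # events / participants, exposed group (e.g., 0-19 teeth)
e0 <- 25; n0 <- 700   # events / participants, reference group (e.g., 20-32 teeth)

rr     <- (e1 / n1) / (e0 / n0)              # crude risk ratio
se_log <- sqrt(1/e1 - 1/n1 + 1/e0 - 1/n0)    # SE of log(RR)
ci     <- exp(log(rr) + c(-1, 1) * qnorm(0.975) * se_log)

round(c(RR = rr, lower = ci[1], upper = ci[2]), 2)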
Most of the included studies compared exposures (notably, the number of remaining teeth) using four common exposure categories:

• 0 teeth vs. 1-32 teeth,
• 0-19 teeth vs. 20-32 teeth,
• 0 teeth vs. 1-19 teeth vs. 20-32 teeth,
• 0-10 teeth vs. 11-16 teeth vs. 17-24 teeth vs. 25-32 teeth.

Some studies using categories other than these four allowed transformation into the above, whereas studies in which this was not possible were excluded from meta-analysis. Two of the four common exposure categories included more than two groups. Therefore, network meta-analyses with random-effects models were carried out (24). Network meta-analysis is defined as a meta-analysis comprising direct exposure comparisons within trials and indirect exposure comparisons across trials for a common comparator against at least two exposure groups (24). Because all the included studies in the present review reported data for all the exposure groups, the comparisons between any two groups in the network meta-analyses could be made directly. Therefore, the evidence from the indirect comparisons could be ignored, and the assessment of ranking probabilities of each group and of the inconsistency between direct and indirect evidence was not necessary. The other two common exposure categories included two groups only, and, therefore, pair-wise meta-analyses were performed. Cumulative meta-analyses were used to assess the change in the aggregate estimate by adding study data in the order of their time of publication (25). The results of studies reporting cumulative incidence data for ACVD-related events or ACM by common exposure category were statistically pooled and expressed as risk ratios (RR) and 95% confidence intervals (95% CI). Incidence density (or hazard) is the measure of the rate of occurrence of new ACVD-related events or ACM per unit of time, the person-time of follow-up. It allows for pooling studies with different durations of follow-up. To compare the incidence density of ACVD-related events or ACM between different groups of the number of remaining teeth, generic inverse-variance meta-analyses with random-effects models were carried out to pool the hazard ratios (HR) of ACVD-related events or ACM of the individual studies. The HR was pooled separately for the two of the four common categories in which the number of remaining teeth was classified into two groups only:

• 0 teeth vs. 1-32 teeth,
• 0-19 teeth vs. 20-32 teeth.

The other two common categories, in which the number of teeth was classified into more than two groups, were not included in the assessment of incidence density because the generic inverse-variance meta-analysis only allows for the HR between two groups. In addition, the HR was pooled per number of lost teeth when the number of remaining teeth was regarded as a continuous variable. The HR was also pooled separately for the different types of HR (crude HR, HR adjusted for age/sex, and HR adjusted for multiple variables). Cumulative meta-analyses were used to assess the change in the aggregate estimate by adding study data in the order of their time of publication (25). The HRs and standard errors (SE) of the individual studies were log-transformed based on the formulas provided by Woods et al. (26). The pooled results were expressed as HR and 95% CI. The statistical heterogeneity of each meta-analysis was explored with the I² statistic. The heterogeneity was considered substantial to considerable if I² > 50% (27).
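The generic inverse-variance, random-effects pooling of log-transformed HRs described above can be sketched in a few lines of base R. This is a minimal illustration under assumed inputs, not the review's actual analysis (which used R 3.3 and Review Manager 5.4): the per-study HRs and confidence limits are hypothetical, and the DerSimonian-Laird estimator is used here as a common stand-in for the random-effects estimator applied by the software.

# Hypothetical per-study HRs with 95% CIs (not data from the review).
hr <- c(1.10, 1.25, 0.95, 1.40)
lo <- c(0.90, 1.05, 0.70, 1.10)
hi <- c(1.34, 1.49, 1.29, 1.78)

# Log-transform the HR and recover its SE from the CI width (cf. Woods et al.).
y  <- log(hr)
se <- (log(hi) - log(lo)) / (2 * qnorm(0.975))

w    <- 1 / se^2                       # inverse-variance (fixed-effect) weights
mu_f <- sum(w * y) / sum(w)            # fixed-effect pooled log-HR
q    <- sum(w * (y - mu_f)^2)          # Cochran's Q
k    <- length(y)
tau2 <- max(0, (q - (k - 1)) / (sum(w) - sum(w^2) / sum(w)))  # DL tau^2

wr    <- 1 / (se^2 + tau2)             # random-effects weights
mu    <- sum(wr * y) / sum(wr)         # random-effects pooled log-HR
se_mu <- sqrt(1 / sum(wr))

pooled <- exp(mu + c(0, -1, 1) * qnorm(0.975) * se_mu)  # HR, lower, upper
i2     <- max(0, (q - (k - 1)) / q) * 100               # I^2 in percent
round(pooled, 2); round(i2, 1)

A cumulative meta-analysis, as used in the review, simply repeats this pooling over the first 2, 3, ..., k studies ordered by publication date.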
Funnel plots were used to assess publication bias for the meta-analyses that included at least 10 studies; a funnel plot for a meta-analysis including fewer than 10 studies is not recommended because of its insufficient statistical power (28). The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system was used to appraise the strength of the evidence emerging from this review. Two reviewers (NB and NS) rated the strength of the evidence according to the factors that can reduce the quality of evidence (risk of bias, inconsistency of results, indirectness of evidence, imprecision, and publication bias) and the factors that can increase the quality of evidence (large magnitude of the effect, dose-response gradient, and effect of plausible residual confounding) (29). Any disagreement between the two reviewers was resolved by discussion. All analyses were performed with the R software 3.3 (R Development Core Team, Vienna, Austria) and Review Manager 5.4 (The Nordic Cochrane Centre, The Cochrane Collaboration, Copenhagen, Denmark).

RESULTS

Figure 1 shows a flowchart of the study selection. The database search revealed 3,857 potential articles, of which 1,386 were removed as duplicates. In the first screening based on title, 1,998 articles were excluded, leaving 473 articles for abstract screening; based on this, 289 articles were excluded, leaving 184 articles for full-text reading. Based on the full-text reading, another 109 articles were excluded, leaving 75 articles included for further analysis (for the reference list, see Supplementary File 4). The agreement between the two reviewers (NB and NS) for the screening of titles, abstracts, and full texts was 92, 78, and 90%, respectively. Because the categories of the number of teeth in these 75 studies were too heterogeneous for all of them to be transformed into any of the four common categories of the number of teeth, 44 articles qualified for meta-analysis. Table 1 shows the main characteristics of the 75 articles included in the qualitative synthesis. The articles were derived from all continents, with the number of participants ranging from 173 to 4,440,970 [median: 5,688; interquartile range (IQR): 718-41,000]. The majority of the studies included both males and females, whereas eight studies included only men, three studies only women, and, for one study, the male/female ratio was unknown. Only 25 of the 75 studies restricted inclusion to participants ≥60 years of age, whereas the other studies included participants from a wider age range. The maximum follow-up time ranged from 1 to 57 years. Most studies (51) used clinical examination to assess the number of teeth. For reporting of the outcome, 26 studies reported both ACVD-related events and ACM, 24 studies reported only ACM, and 25 studies reported only ACVD-related events. This information was mostly retrieved from medical records, death certificates, and death registers. All studies were prospective observational cohort studies; 67 studies reported findings from Cox proportional hazards analysis, five studies used logistic regression models, two studies used Poisson regression models, and one study applied a linear regression model. Exclusion of participants with prevalent ACVD at baseline was performed in 30 studies, and therefore, these studies could be defined as inception cohorts. In 21 studies, adjustment for prevalent ACVD at baseline was performed, whereas in 24 studies, no adjustment was performed or no information on ACVD at baseline was available.
The follow-up rate was high (≥80%) in

(Table 1 footnotes: The percentage male/female is based on the outcome for CHD. The cohort consists of study participants with prevalent atherosclerotic carotid artery disease, as defined by the presence of non-stenotic plaque or carotid stenosis of any degree; patients with an ACVD event (MI/stroke/coronary revascularization/peripheral vascular surgery) during the preceding 6 months were excluded, and adjustment for history of MI, PAD, history of stroke, and baseline degree of carotid stenosis was performed in the analyses. Ω In a subgroup analysis of 4,164 study participants in whom a history of previous MI and (drug-treated) hypertension was collected, the relationship between the number of teeth and future ACVD death was essentially unaltered compared with the analysis in the total sample when previous MI and hypertension were added as confounders. ♦ Baseline data consisted of 256 coronary artery disease patients and 250 age- and sex-matched controls, forming a prospective follow-up study. ß Known CHD was an inclusion criterion; patients were eligible if they had CHD, but the target patients of this paper are those with stable CHD, and correction for ACVD was made in the statistical analyses.)

Supplementary Tables 1, 2 show the descriptive information and summary results of the studies regarding tooth loss and ACVD-related events (Supplementary Table 1) and regarding tooth loss and ACM (Supplementary Table 2).

Risk of Bias Assessment

To estimate the potential risk of bias, the methodological quality of the included studies was assessed. Overall, of the 44 studies, the potential risk of bias was estimated to be low for 28 studies, moderate for three studies, and serious for 13 studies. This overall risk of bias assessment was based on the risk of bias assessments within the seven domains. Within the domain of confounding, 11 studies were rated as serious and two as moderate risk of bias. Within the domain of classification of exposures, only one study was rated as moderate risk of bias; within the domain of measurement of the outcomes, only one study was rated as serious risk of bias; and within the domain of missing data, two studies were rated as serious and two as moderate risk of bias. All other studies within these domains were assessed as low risk of bias. Within the domains of selection of participants into the study, departures from intended exposures, and selection of the reported result, all studies were rated as low risk of bias (Supplementary File 5).

Meta-Analyses for Tooth Loss and Atherosclerotic Cardiovascular Disease-Related Events

Cumulative Incidence of ACVD-Related Events, Categorical Data

Using pairwise meta-analysis, 10 studies were pooled in which the number of remaining teeth was classified into 0 and 1-32 teeth, and four studies in which it was classified into 0-19 and 20-32 teeth. Using network meta-analysis, two studies were pooled in which the number of remaining teeth was classified into 0, 1-19, and 20-32 teeth, and four studies in which it was classified into 0-10, 11-16, 17-24, and 25-32 teeth.
The results showed that the categories with a lower number of remaining teeth always had a significantly higher risk of ACVD-related events than the categories with a higher number of remaining teeth (Figure 2A and Supplementary Table 3). The heterogeneity of those meta-analyses ranged from 0 to 99%.

Incidence Density of ACVD-Related Events, Categorical Data

Using generic inverse-variance meta-analysis for calculation of the HR, five studies were included in which the number of remaining teeth was classified into 0 and 1-32 teeth, and four studies in which it was classified into 0-19 and 20-32 teeth. The results showed that the categories with a lower number of remaining teeth had a significantly higher HR of ACVD-related events than the categories with a higher number of remaining teeth in both the crude and the adjusted (for age/sex and for multiple variables) models (Figure 2B and Supplementary Table 4). The heterogeneity of those meta-analyses ranged from 0 to 58%.

Incidence Density of ACVD-Related Events, Continuous Data

Using generic inverse-variance meta-analysis for calculation of the HR, 13 studies were included in which the number of remaining teeth was regarded as a continuous variable. The results showed an increased hazard of ACVD with an increasing number of lost teeth in the multiple adjusted model [HR: 1.02 (1.01-1.03)] (Figure 2C and Supplementary Table 4). The heterogeneity of those meta-analyses ranged from 75 to 79%. Most of the subgroup analyses, based on the approach used to count the remaining number of teeth, patients' ACVD status at baseline, and the follow-up rate of the included studies, showed no statistically significant changes in the outcome (Supplementary Tables 3, 4). The heterogeneity was not significantly reduced in the subgroup analyses. This indicated that the three indicators may not significantly influence the outcomes and may not be an important source of the heterogeneity of the pooled results.

Cumulative Incidence of ACM, Categorical Data

Using pairwise meta-analysis, 14 studies were included in which the number of remaining teeth was classified into 0 and 1-32 teeth, and 11 studies in which it was classified into 0-19 and 20-32 teeth. Using network meta-analysis, eight studies were included in which the number of remaining teeth was classified into 0, 1-19, and 20-32 teeth. None classified the number of remaining teeth into four categories. The results showed that the categories with a lower number of remaining teeth had a significantly higher risk of ACM than the categories with a higher number of remaining teeth (Figure 2A and Supplementary Table 3). The highest RR was found in the comparison of 0 teeth vs. 20-32 remaining teeth [RR: 2.27 (1.82-2.83)] (Figure 2A and Supplementary Table 3). The heterogeneity of those meta-analyses ranged from 94 to 100%.

Incidence Density of ACM, Categorical Data

Using generic inverse-variance meta-analysis, seven studies were included in which the number of remaining teeth was classified into 0 and 1-32 teeth, and 10 studies in which it was classified into 0-19 and 20-32 teeth. The results showed that the categories with a lower number of remaining teeth had a significantly higher HR of ACM than the categories with a higher number of remaining teeth in both the crude and the adjusted (for age/sex and for multiple variables) models (Figure 2B and Supplementary Table 4).
The heterogeneity of those meta-analyses ranged from 0 to 97%.

Incidence Density of ACM, Continuous Data

Using generic inverse-variance meta-analysis, 11 studies were included in which the number of remaining teeth was regarded as a continuous variable. The results showed an increased hazard of ACM with an increasing number of lost teeth in the multiple adjusted model [HR: 1.02 (1.01-1.03)] (Figure 2C and Supplementary Table 4). The heterogeneity of those meta-analyses ranged from 80 to 93%. Most of the subgroup analyses, based on the approach used to count the remaining number of teeth, patients' ACVD status at baseline, and the follow-up rate of the included studies, showed no statistically significant changes in the outcome (Supplementary Tables 3, 4). The heterogeneity was not significantly reduced in any subgroup analysis, except in the subgroup analysis for the crude HR of 0 vs. 1-32 remaining teeth. This indicated that the three indicators may not significantly influence the outcomes and may not be an important source of the heterogeneity of the pooled results. The cumulative meta-analyses showed rather limited drift over time for the pooled RR and pooled HR and their confidence intervals (Supplementary Tables 5, 6). With some exceptions, the pooled RR and HR ranged between 1 and 2. Hence, there is a fairly stable relationship over time between the number of teeth and ACVD- or ACM-related outcomes, but the width of the confidence intervals did not narrow in a constant manner. To estimate the level of publication bias, funnel plots were created for the six meta-analyses with at least 10 studies. The results showed no to minor publication bias for these meta-analyses (Supplementary File 6). Supplementary File 7 presents a summary of the various factors used to rate the strength of the evidence according to GRADE. The rating was assessed for all the separate meta-analyses. For the cumulative incidence with categorical data on the number of teeth and ACVD, two meta-analyses had moderate and two had high strength of evidence. For the cumulative incidence with categorical data on the number of teeth and ACM, two meta-analyses had low and one had high strength of evidence. For the multivariable incidence density with categorical and continuous data on the number of teeth and ACVD, one meta-analysis had low and two had moderate strength of evidence. For the multivariable incidence density with categorical and continuous data on the number of teeth and ACM, two meta-analyses had low and one had moderate strength of evidence (Supplementary File 7).

DISCUSSION

To our knowledge, this systematic review with meta-analyses, containing 75 prospective cohort studies from all over the world and including a diversity of populations of both men and women of all ages, is the largest to date providing evidence that tooth loss is related to an increased risk for ACVD-related events and ACM. The crude analyses showed that tooth loss is related to ACVD morbidity, ACVD mortality, and ACM: the lower the number of remaining teeth, the higher the risk of an ACVD event or death. These effects remained after accounting for the methodological weaknesses of the analyzed studies. To interpret and explain the risk of tooth loss for ACVD, several plausible (biological) mechanisms need to be taken into consideration.
• Tooth loss and ACVD share risk factors, such as age, sex, socioeconomic position (SEP), obesity, and smoking (105). As a consequence, their occurrence during the life course proves to be associated. While earlier research has shown a dose-response relationship (20), the direction, size, and precision of the association estimates we report here are consistent throughout all analyses. Hence, it is unlikely that this association is coincidental.

• Tooth loss is the ultimate event representing two major dental pathologies. (i) Dental caries is a lifelong disease and is traditionally considered an important cause of tooth loss. Dental caries has a multifactorial etiology, but the consumption of dietary carbohydrates is the main factor (1,2). This corresponds to a risk factor for ACVD, namely, overconsumption of carbohydrates, which leads to overweight and obesity, metabolic syndrome, and diabetes; these are obvious risk factors for ACVD (3)(4)(5)(6). (ii) At older ages, periodontitis is the main cause of tooth loss. Periodontitis is a chronic, multi-causal inflammatory disease of the supportive tissues of the teeth with progressive loss of attachment and alveolar bone, finally leading to tooth loss (1,2). Ample research has been performed to identify pathophysiological mechanisms that explain the association between periodontitis and ACVD (11,12). Thus, since periodontitis is associated with ACVD, its ultimate endpoint is also associated. Low-grade systemic inflammation in periodontitis and daily short-lived bacteremias have been investigated in relation to atherosclerosis (106)(107)(108). Low-grade chronic systemic inflammatory stress in periodontitis is considered to contribute to increased inflammation around atherosclerotic plaques at predilection sites and in vulnerable arteries; recurrent bacteremia contributes to a pro-inflammatory and pro-thrombotic state and may induce autoimmunity as well as dyslipidemia (11,12). Adjusted RRs for the relationship between periodontitis and ACVD ranging from 1.50 up to 3.20 have been reported (14,109).

• Masticatory dysfunction and related dietary changes have been proposed (5, 110, 111). Impaired masticatory function may lead to inadequate food choices and, therefore, reduced intake of nutrients. This may include increased intake of industrially processed rather than natural foods, avoidance of hard-to-chew foods, and home processing of foods into softer consistencies. In turn, this leads to increased intake of fermentable carbohydrates and saturated fatty acids and a reduction in the sources of dietary fiber and essential vitamins and minerals, consequently contributing to systemic diseases (3,5,(110)(111)(112)(113). A study on the link between tooth loss, nutritional status, and stroke outcomes showed in the multivariable analysis that tooth loss and a worse nutritional status were independently associated with poor stroke outcomes (OR: 1.33) (114).

• Socioeconomic position (SEP) is an explanatory variable for a lower number of teeth for which strong and consistent evidence is available. SEP is defined mainly on the basis of income, education, and employment status. Several studies show that low SEP is associated with significant tooth loss (7,8). SEP influences lifestyle habits such as smoking and the practice of good oral hygiene. Also, access to healthcare centers and periodic dental examination is more limited for people with a low SEP (7,115,116). SEP also plays a role in the development of ACVD.
In people with low SEP, biological, behavioral, material, and psychosocial risk factors (such as lack of health insurance and financial difficulties, obesity, smoking, physical inactivity, life events, and educational inequalities), lower health literacy, and inequalities in access to care and medical treatment accentuate the link between SEP, ACVD, and mortality (8,117). For low adulthood SEP, multiple adjusted increased risks for ACVD of up to 1.84 have been reported (8). Despite the large number of studies and the state-of-the-art methodologies applied, the following aspects of the study methods, observed within the findings of this systematic review and meta-analyses, need attention.

• The risk of bias assessment of the individual studies, using the ROBINS-E tool, showed a low risk of bias for 28 studies, a moderate risk of bias for three studies, and a serious risk of bias for 13 studies. For the studies with moderate and serious risk of bias, the main issue was bias due to confounding, where important covariables for the relationship between the number of teeth and ACVD or ACM were not available in the multivariable analyses. It is difficult to predict the direction of bias in the outcome, but we can assume that a suboptimal multivariable analysis causes an overestimation of the effect. Funnel plots were made to assess publication bias for the larger meta-analyses and showed none to a minor level of publication bias.

• Some studies defined the determinant (number of remaining or missing teeth) based on patients' self-reported information. This may be less accurate than the number of teeth based on clinical examination and may bias the results of the review. In addition, most included studies used categories containing up to 32 remaining teeth. Whether third molars should be incorporated in defining a full dentition remains controversial. This aspect could introduce bias, but given that the categories used contain groups of 1-32, 20-32, or 25-32 teeth, the introduced bias is negligible, considering that a full dentition of only 28 teeth (excluding third molars) is also captured by those categories. Also, a large number of the currently included studies used two groups of present teeth in their statistical analysis: 0-19 teeth vs. 20-32 teeth. Most likely, this is based on the fact that, in dentistry, 20 present teeth is considered the minimum number needed for sufficient chewing ability (118).

• Only 15 studies included in the meta-analyses are inception cohort studies, which included only participants without ACVD at baseline. In another 14 studies, some of the participants had ACVD at baseline, but this was adjusted for in the analysis. However, in the remaining 15 studies, some participants had ACVD at baseline without adjustment in the analysis, or information on participants' ACVD status at baseline and on adjustment was not reported. For those 15 studies, the association between the number of teeth and the outcomes may be overestimated if prevalent ACVD at baseline was included without adjustment. This may bias the pooled results of the meta-analyses.

• The follow-up rate differed between studies and was classified into <80% and ≥80% follow-up rates. Loss to follow-up may severely compromise the validity of a study and bias the results if the patients who drop out differ from those who do not (119). As a rule of thumb, a dropout rate >20% may cause serious bias (120).
In the present study, we assumed that patients who experienced events during follow-up may be more likely to drop out than those without events, which may lead to an underestimation of the results. However, the subgroup analyses using the three indicators of risk of bias, i.e., the approach used to count the number of remaining teeth, patients' ACVD status at baseline, and the follow-up rate, showed a very minor effect on the heterogeneity of the meta-analyses. This indicated that the level of risk of bias across the included studies was not the main reason for the heterogeneity of the meta-analyses (Supplementary Tables 3, 4).

• The harmonization of the number of teeth varied across studies. Some studies reported the number of remaining or missing teeth as a continuous variable, while other studies used different categories for the number of remaining or missing teeth, and these categories were not always comparable and transferable across the included studies. Also, in most of these studies, the categories were probably defined post hoc. Therefore, 44 studies using the four predefined common categories of exposure comparisons qualified for the meta-analyses. Due to the exclusion of 31 studies, an overestimation of the effect cannot be completely ruled out. However, by using this predefined approach to meta-analysis, we found a consistent effect across the reported separate meta-analyses. Moreover, none of the overall estimates of the cumulative meta-analyses showed meaningful changes over time (Supplementary Tables 5, 6).

• The heterogeneity in the meta-analyses was high. The I² for some of the meta-analyses was larger than 75% (Figures 2A-C and Supplementary Tables 3, 4), which indicates considerable heterogeneity. Conventionally, perhaps, one would refrain from pooling, as a random-effects model cannot remedy this situation (27). Moreover, the sensitivity analyses did not provide clues about the sources of heterogeneity. Therefore, the remaining uncertainty is whether this unexplained heterogeneity, due to residual variation, results in either an overestimation or an underestimation of the estimates of effect (i.e., bias in the estimates). The high heterogeneity may be caused by differences in follow-up periods and multivariate adjustments across the included studies, but also by their varied study populations, notably patients' ages and countries of origin. For example, quite a few studies included only participants of older ages (≥60 years of age), who are known, on the one hand, to carry a higher risk for ACVD events or ACM (121), while the peak incidence of severe tooth loss is at 65 years of age (4). The follow-up time of the included studies was quite diverse. With a longer follow-up period, more study participants may experience an ACVD event or die. Also, the exposure, tooth loss, is a time-dependent variable, while in most studies it is reported as the number of teeth at baseline, which is treated as time-independent. This may have resulted in an underestimation of the effect. In addition, the covariates in the multivariate models were diverse across studies. The multivariate HR values adjusted for various covariates were pooled directly in the meta-analysis. Moreover, the rationale for the selection of covariates was not reported in most studies. It is not known whether the covariates were determined a priori or simply based on the availability of potential confounding data at hand. If the latter is the case, this may have biased the findings of the meta-analyses.
• Across studies, the evidence emerging from the separate meta-analyses, assessed by the use of GRADE, varies. For the meta-analyses with ACVD as the outcome, the strength of the evidence was mainly moderate. For the meta-analyses with ACM as the outcome, the strength of the evidence was mainly low. Therefore, our conclusion based on the outcome ACVD is more valid than our conclusion based on the outcome ACM.

In conclusion, this large systematic review and meta-analysis of survival data shows that a lower number of teeth increases the risk of ACVD-related morbidity or mortality and of ACM. Dental professionals should use this knowledge to inform their patients and the public at large to be aware of their general health and to visit their general physician to discuss this aspect; and, vice versa, medical specialists should motivate their patients to visit the dentist regularly and encourage them to preserve their own teeth as much as possible.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

AUTHOR CONTRIBUTIONS

NB and NS contributed to the conception, design, data acquisition, analysis, and interpretation, and drafted and critically revised the manuscript. BL and GH contributed to the conception, design, analysis, and interpretation, and critically revised the manuscript. All authors gave final approval and agree to be accountable for all aspects of the work, ensuring its integrity and accuracy.
Pseudomonas aeruginosa infection correlates with high MFI donor-specific antibody development following lung transplantation with consequential graft loss and shortened CLAD-free survival

Background: Donor-specific antibodies (DSAs) are common following lung transplantation (LuTx), yet their role in graft damage is inconclusive. Mean fluorescence intensity (MFI) is the main read-out of DSA diagnostics; however, its value is often disregarded when analyzing unwanted post-transplant outcomes such as graft loss or chronic lung allograft dysfunction (CLAD). Here we aim to evaluate an MFI stratification method in these outcomes.

Methods: A cohort of 87 LuTx recipients was analyzed, in which a cutoff of 8000 MFI was determined for high MFI based on clinically relevant data. Accordingly, recipients were divided into DSA-negative, DSA-low and DSA-high subgroups. Both graft survival and CLAD-free survival were evaluated. Among factors that may contribute to DSA development, we analyzed Pseudomonas aeruginosa (P. aeruginosa) infection in bronchoalveolar lavage (BAL) specimens.

Results: High MFI DSAs contributed to clinical antibody-mediated rejection (AMR) and were associated with significantly worse graft (HR: 5.77, p < 0.0001) and CLAD-free survival (HR: 6.47, p = 0.019) compared to low or negative MFI DSA levels. Analysis of BAL specimens revealed a strong correlation between DSA status, P. aeruginosa infection and BAL neutrophilia. DSA-high status and clinical AMR were both independent prognosticators of decreased graft and CLAD-free survival in our multivariate Cox-regression models, whereas BAL neutrophilia was associated with worse graft survival.

Conclusions: P. aeruginosa infection rates are elevated in recipients with a strong DSA response. Our results indicate that the simultaneous interpretation of MFI values and BAL neutrophilia is a feasible approach for risk evaluation and may help clinicians decide when to initiate DSA desensitization therapy, as early intervention could improve prognosis.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12931-024-02868-1.

Background

Lung transplantation (LuTx) has a poor long-term outcome, with a current median post-transplant survival of 6.5 years [1], and 8.7 years for recipients surviving the first postoperative year [2]. Novel immunosuppressive regimens have markedly reduced acute graft rejection events [3], yet chronic rejection manifesting as chronic lung allograft dysfunction (CLAD) represents the major complication limiting long-term allograft survival, affecting ~50% of recipients within the first 5 years [1,4]. No specific treatment is currently available to prevent or reverse CLAD, and the lack of appropriate biomarkers challenges the detection of the early and probably reversible phase of this condition [5]. Respiratory tract infections often generate severe complications in immunosuppressed recipients and are now recognized risk factors for CLAD [17]. Continuous pathogenic provocation of the lungs, repetitive inflammatory episodes and impaired repair mechanisms lead to allograft deterioration over time. Pseudomonas aeruginosa (P. aeruginosa) is commonly found in LuTx recipients and aggravates tissue damage [18,19]. Recently, P. aeruginosa colonization in respiratory specimens has been directly linked to the DSA response and shortened CLAD-free time [20].
In our present study, we aimed to clarify the clinical impact of DSAs on graft survival and CLAD progression and applied an MFI-based risk stratification method to predict these outcomes. In search of factors contributing to DSA development, we investigated the role of P. aeruginosa infection in BAL specimens. Additionally, we analyzed BAL immune cell ratios that correlated with DSA levels.

Recipient cohort

All 116 recipients were transplanted by the Hungarian Lung Transplantation Program between 12 December 2015 and 7 August 2021; the end of the follow-up period was 15 August 2022, and the median follow-up time was 735 days. Twenty-nine recipients who did not undergo DSA testing were excluded. Altogether, 87 recipients were analyzed. All patients were treated and managed similarly, according to a standardized institutional protocol [21]. In brief, patients (n = 82 and n = 5, respectively) received induction therapy consisting of alemtuzumab (0.4-0.5 mg/kg) or polyclonal anti-thymocyte globulin (ATG) (2 mg/kg) as part of their immunosuppressive regimen. Following alemtuzumab induction, either a double combination therapy of tacrolimus and steroids was initiated, or a triple combination therapy of tacrolimus, mycophenolate mofetil, and steroids was applied [21][22][23]. Patients were monitored regularly for CLAD based on their DSA levels, and, if indicated, predefined therapy was initiated before the appearance of clinical symptoms [24]. In cases of CLAD with BAL neutrophilia, we administered azithromycin at an immunomodulatory dose (250 mg three times a week) according to international recommendations [25][26][27]. Retransplantation was considered a separate event in the outcome analysis. The outcomes were graft survival (death or retransplantation) and CLAD-free survival. When antibodies > 3000 MFI were detected pre-transplantation, the corresponding donor antigens were avoided. No standardized desensitization therapy was applied; n = 10 DSA-positive recipients received plasmapheresis/intravenous immunoglobulin (IVIG) therapy.

DSA detection

All diagnostic processes were conducted in accordance with the Hungarian National Blood Transfusion Service protocol. Anti-HLA-A, -B, -C, -DQ, or -DR antibodies were detected by the LABScreen Single Antigen HLA Class I (LS1A04) and Class II (LS2A01) diagnostic tools (One Lambda, Thermo Fisher Scientific), following the manufacturer's guidelines. In brief, 5 µl of LABScreen beads were incubated with 20 µl of test serum in a 1.5 ml microcentrifuge tube for 30 min. Then, 1 ml of 1X wash buffer was added to each bead/serum solution tube and vortexed, followed by centrifugation. Lastly, diluted PE-conjugated anti-human IgG was added to each tube, followed by the addition of PBS. For HLA genotyping of deceased donors, DNA was extracted from whole blood using the MagCore® Genomic DNA Whole Blood Kit and MagCore® Super instrument. Low-resolution HLA typing was obtained by performing DNA amplification and DNA-based, low-resolution typing for HLA-A, -B, -C, -DRB1, and -DQB1 at the antigenic level (Olerup SSP® HLA Typing Kits). Confirmatory typing was achieved using LABType SSO A, -B, -C, -DRB1, -DQB1 Locus kits (One Lambda, Inc., Canoga Park, CA). The cutoff value for DSA positivity was > 1000 MFI. The immunodominant DSA was defined as the highest-MFI DSA for a given recipient.
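To illustrate the MFI-based stratification used throughout the paper, the following R sketch picks the immunodominant (highest-MFI) DSA per recipient and assigns DSA-negative, DSA-low, or DSA-high status. The data frame and column names are hypothetical; the cutoffs of 1000 and 8000 MFI follow the text.

# Hypothetical per-antibody MFI results; names are illustrative only.
dsa <- data.frame(
  recipient = c("R1", "R1", "R2", "R3", "R3"),
  mfi       = c(1500, 9200, 600, 2400, 7800)
)

# Immunodominant DSA = highest-MFI antibody per recipient.
imm <- aggregate(mfi ~ recipient, data = dsa, FUN = max)

# Stratify using the cutoffs from the text: > 1000 MFI = positive, > 8000 = high.
imm$group <- cut(imm$mfi,
                 breaks = c(-Inf, 1000, 8000, Inf),
                 labels = c("DSA-negative", "DSA-low", "DSA-high"))
imm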
Defining CLAD and AMR

CLAD was defined according to the International Society for Heart and Lung Transplantation (ISHLT) guideline [6]: a persistent decline (> 20%) in the measured forced expiratory volume (FEV1) from the baseline value (the mean of the best two postoperative FEV1 measurements taken > 3 weeks apart), after exclusion of other causes of FEV1 decline. CLAD was definite if the FEV1 decline lasted over 3 months. CLAD-free time was defined as the period between transplantation and the beginning of the persistent FEV1 decline. AMR was classified based on ISHLT guidelines [28]. Recipients with supporting findings of DSA positivity, complement C4d staining and/or compatible histology, but without allograft dysfunction, were classified as subclinical AMR. Clinical AMR additionally required allograft dysfunction and clinical signs, assessed by FEV1, radiology, and the exclusion of confounding factors.

BAL and microbiological analysis

For BAL, ~120 ml of 0.9% saline solution was instilled in 40 ml fractions. Following suctioning, the fluid was analyzed for the percentage of neutrophils out of the total inflammatory cells. BAL neutrophil percentages below 25% were defined as "low" and above this threshold as "high". Microbiological analyses for P. aeruginosa, Gram-negative bacteria and fungal species were performed. For active infection in BAL specimens, a pathogen threshold of 10³ CFU/ml was applied.

Statistical analysis

For data analysis, GraphPad Prism 9 and R version 4.2.1 were used. Ordinary one-way and two-way ANOVA were used to compare multiple groups. Contingency cohort analysis was used to calculate odds ratios; statistical significance and p-values were determined by Chi-square tests. Multiple MFI measurements from the same patient were treated as independent when investigating associations between AMR status, MFI value, HLA-DQ/class specificity and infections. Survival analysis was initially performed by fitting univariate Cox proportional hazards regression models for both graft and CLAD-free survival, treating variables determined post-transplantation as time-dependent. For CLAD-free survival, deaths unrelated to CLAD were treated as censored observations. Multivariate Cox-regression models were fitted to the data for both outcomes with two sets of predetermined variables (AMR stage [time-dependent], presensitization, percentage of neutrophils in BAL specimens [time-dependent], and infection [with P. aeruginosa, Gram-negative bacteria or Candida species] [time-dependent]; and DSA level [time-dependent], presensitization, percentage of neutrophils in BAL specimens [time-dependent], and infection). As AMR and DSAs are interconnected, we refrained from including both variables simultaneously in the multivariate survival models. P-values < 0.05 were considered statistically significant.
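The univariate time-dependent Cox models described above can be sketched with the survival package in R, using the counting-process (start-stop) data format so that DSA status may change over follow-up. The data frame and variable names below are hypothetical toy values, not the study data, and the model shown is a minimal illustration of the approach rather than the authors' exact specification.

library(survival)

# Hypothetical counting-process data: one row per interval of constant
# DSA status per recipient; variable names are illustrative.
d <- data.frame(
  id     = c(1, 1, 2, 3, 3, 4, 5, 5),
  tstart = c(0, 120, 0, 0, 300, 0, 0, 90),
  tstop  = c(120, 800, 650, 300, 900, 400, 90, 700),
  event  = c(0, 1, 0, 0, 1, 0, 0, 0),      # graft loss (death/retransplantation)
  dsa    = factor(c("neg", "high", "neg", "neg", "low", "neg", "neg", "high"),
                  levels = c("neg", "low", "high"))
)

# Univariate Cox model with DSA group as a time-dependent covariate;
# cluster(id) gives robust SEs for the repeated intervals per recipient.
fit <- coxph(Surv(tstart, tstop, event) ~ dsa + cluster(id), data = d)
summary(fit)   # hazard ratios with 95% CIs, as reported in the paper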
Recipient cohort

A total of 283 sera from 87 recipients (Suppl. Table 1) were analyzed. Most recipients were transplanted for chronic obstructive pulmonary disease (COPD) (47%), followed by interstitial lung disease (ILD) (24%) and cystic fibrosis (CF) (20%). Of the cohort, 36% tested DSA-positive during the follow-up, and most recipients produced multiple antibodies. Of the recipients, 19% developed class I specific DSAs, 32% class II specific DSAs, and 49% developed DSAs against both classes (Suppl. Figure 1A). HLA-DQ specific DSAs were the most common and demonstrated significantly higher MFI values among all subtypes (MFI: 8527, median, p < 0.0001) (Suppl. Figure 1B-C). DSA production is an early event [29]; the vast majority of DSAs were generated within the first 3 postoperative months (Suppl. Figure 1D).

Graft and CLAD-free survival in MFI-stratified cohorts

Investigating the effect of all DSAs on graft survival, we did not detect a significant difference between the sensitized and non-sensitized groups (HR: 1.67, CI: 0.87-3.17, p = 0.12) (Fig. 2A). This finding prompted us to stratify our analysis based on MFI values. In search of the applicable MFI cutoff, we considered our data on DSAs triggering clinical AMR (MFI 6823) and reviewed a previous report on DSAs in clinical AMR recipients (MFI 7332) [32]. We noted that HLA-DQ subtypes are overrepresented in clinical AMR and are accompanied by higher MFI values (11,321 MFI), which together pointed towards an average ~8000 MFI cutoff (Suppl. Figure 2A). Accordingly, we split the recipients into DSA-negative, DSA-low (1000-8000 MFI) and DSA-high (> 8000 MFI) groups. Using this stratification, we could demonstrate that high MFI DSAs were associated with significantly worse graft survival than in recipients without or with low MFI DSAs (HR: 5.77, CI: 2.53-13.13, p < 0.0001 and HR: 6.64, CI: 2.24-19.67, p < 0.001) (Fig. 2B).

During the study period, 28% of the recipients developed CLAD and showed a significant tendency for graft loss when compared to the CLAD-free group (HR: 5.96, CI: 2.93-12.14, p < 0.0001) (Fig. 2C). Among the MFI-stratified groups, we detected a very strong association between high MFI DSAs and shorter CLAD-free survival when compared to the DSA-negative and DSA-low cohorts (HR: 6.47, CI: 1.36-30.70, p = 0.02; HR: 10.82, CI: 1.45-80.67, p = 0.02). On the other hand, the DSA-low and DSA-negative groups did not differ significantly (HR: 0.60, CI: 0.14-2.62, p = 0.49) (Fig. 2D). Contingency cohort analysis revealed an odds ratio of 8.6 (CI: 1.79-43.63, p = 0.006) for the DSA-high cohort to develop CLAD compared to the DSA-negative group, while the same calculation did not show a significant correlation for the DSA-low recipients (OR: 0.92, CI: 0.23-4.39, p = 0.9). Examining the grade of CLAD across the DSA-stratified cohorts, we did not observe a higher grade in the DSA-high recipients, implying that DSAs impact the onset time rather than the severity of CLAD (Suppl. Figure 2B). Additionally, the > 8000 MFI DSAs were predominantly class II (86%) and HLA-DQ (76%) specific, while in the DSA-low group these proportions were 43% and 32%, respectively (Suppl. Figure 2C-D). Analyzing the broad HLA mismatch scores of the DSA-stratified groups, we could not detect a difference that could explain the high HLA-DQ incidence in the DSA-high recipients (Suppl. Figure 2E). Multivariate Cox regression verified DSA-high status as an independent prognostic factor for shortened graft (HR: 7.37, CI: 2.61-20.82, p < 0.001) and CLAD-free (HR: 22.04, CI: 2.68-181.52, p = 0.001) survival (Table 1).
Table 1 Multivariate Cox-regression models for graft and CLAD-free survival. Variables marked with an asterisk (*) were considered time-dependent. Statistically significant results are highlighted in bold. Results are represented as hazard ratios (HR) with corresponding confidence intervals (CI) and p-values.

Pseudomonas aeruginosa infection correlates with the DSA response

P. aeruginosa colonization in respiratory specimens has recently been linked to DSA development [20]. Therefore, we analyzed this phenomenon in our MFI-stratified recipients; however, we distinguished infection from colonization and used BAL specimens that were taken in close time proximity to DSA testing. To ensure that the effect is specific to P. aeruginosa, other Gram-negative bacteria (Klebsiella pneumoniae, Klebsiella oxytoca, Escherichia coli, Acinetobacter baumannii, Achromobacter xylosoxidans, Citrobacter freundii, Stenotrophomonas maltophilia) and Candida species (C. albicans, C. krusei, C. glabrata) were analyzed simultaneously. For P. aeruginosa, 40.5% of BAL specimens in the DSA-positive cohort tested positive for infection, while in the DSA-negative cohort only 13% did, a ~3-fold increase (Fig. 3A). For Gram-negative bacteria and Candida spp., similar percentages were found when comparing the DSA-negative and DSA-positive cohorts (21.4% vs. 17.6%, and 13.2% vs. 16.7%, respectively) (Fig. 3B-C). Contingency cohort analysis verified a significant association between DSAs and P. aeruginosa infection (OR: 4.5, CI: 1.51-13.77, p = 0.0042), but not with the other examined pathogens (Gram-negative bacteria: OR: 0.79, CI: 0.23-2.58, p = 0.68; Candida spp.: OR: 0.76, CI: 0.27-2.36, p = 0.64) (Suppl. Table 2). In the DSA-negative cohort, only 15.2% of the BAL samples were positive (Suppl. Table 2). Previously we showed that clinical AMR is evident in DSA-positive recipients. To ensure that the clinical manifestation is related to DSAs and not to P. aeruginosa infection, we analyzed the overlap of clinical AMR and P. aeruginosa within a strict 2-week testing period. Indeed, when clinical AMR presented, 82% of the recipients were P. aeruginosa-free, suggesting that clinical AMR is inherently DSA-related and that Pseudomonas infection correlates with DSA emergence but not with clinical AMR (Fig. 3E). Of note, in univariate, time-dependent settings, none of the above investigated infections (or the aggregated presence of any of them) influenced either graft or CLAD-free survival in a significant manner.

BAL neutrophilia correlates with DSA status

In search of additional clinically relevant factors that correlate with DSAs, we examined BAL immune cells in the MFI-stratified cohorts. When BAL immunophenotyping and DSA testing overlapped in time, we found a significant increase in neutrophils in the DSA-high group (DSA-negative: 8.04%, DSA-low: 7.9%, DSA-high: 26.3%, p < 0.001) (Fig. 4A). We further analyzed the BAL samples of only the > 8000 MFI DSA recipients, splitting their data into DSA-negative, DSA-low and DSA-high clinical periods, and, interestingly, we detected dynamic changes in their samples, showing that elevated MFI values correlated with increased BAL neutrophil ratios, most apparent in the DSA-high clinical period (DSA-negative period: 3.7%, DSA-low period: 7.5%, DSA-high period: 26.3%, p = 0.006) (Fig. 4B). In a time-dependent model, high BAL neutrophil ratios significantly decreased graft survival (HR: 3.45, CI: 1.66-7.17, p < 0.001) (Fig. 4C).
In multivariate Cox regression models of AMR and MFI for graft survival, BAL neutrophilia had an independently significant effect (HR: 2.80, CI: 1.18-6.67, p = 0.019 and HR: 2.85, CI: 1.17-6.98, p = 0.022, respectively) (Table 1). Of note, BAL neutrophilia did not significantly influence the occurrence of concurrent infections (p = 0.562; data not shown).

Discussion

Allograft failure accounts for over 40% of deaths following LuTx [6]. DSAs are common during the postoperative period; nevertheless, discrepancies are common when examining their roles in graft survival and CLAD progression [4, 9-13, 16]. In our current investigation, we examined these outcomes in relation to the de novo DSA response stratified by MFI levels and analyzed the role of P. aeruginosa infection in the humoral response. We identified high MFI DSAs and clinical-stage AMR as independent prognostic factors for graft loss and poor CLAD-free survival. In addition, P. aeruginosa infection correlates with DSA development, and BAL neutrophilia tracks DSA levels.

The connection between AMR and DSAs is best characterized in kidney transplantation, with the fewest reports on lungs [3]. In our cohort, recipients with clinical AMR had a strong correlation with graft loss and shortened CLAD-free time, and using multivariate Cox-regression models we could identify clinical AMR as an independent risk factor for both outcomes, while subclinical AMR could not be identified as such. DSAs eliciting clinical AMR had higher MFI values and predominantly HLA-DQ specificity, which have been described as relevant risk factors for AMR and graft damage [11,33].

Analyzing graft survival based solely on DSA positivity, we could not identify a difference compared to the DSA-negative cohort. While MFI is routinely used in risk stratification pre-transplantation, the relevance of MFI in terms of pathogenicity following LuTx has not been widely examined [14,34]. Using a cutoff based on clinical data, we could clearly demonstrate that DSAs with high MFI have a profound effect on graft survival, and MFI stratification is a relevant and widely available tool with which to evaluate future graft damage.

Applying DSA stratification, we could clearly demonstrate that high MFI DSAs shorten CLAD-free survival, while we could not detect the same effect in the DSA-low group. A previous report associated DSAs with a 2-fold CLAD risk [16] and a significantly shorter CLAD-free survival [9,11]. We found a higher risk for CLAD in our cohort, and we hypothesize that the difference is rooted in the MFI stratification method. Most > 8000 MFI DSAs were class II and HLA-DQ specific and showed similar traits as in previously reported studies, in which class II DSAs were shown to be risk factors for BOS [35], and 76% of HLA-DQ specific DSAs induced CLAD [11]. HLA-DQ is the most immunogenic antigen, not only in the case of LuTx, but also in kidney and heart transplantation [11,36,37]. We suggest that the pathomechanism is rooted in the inflammatory environment of the lungs, in which class II HLA expression may increase, as it was shown that inflammatory cytokines (IFN-γ, TNF-α, IL-1β) elevate HLA class II expression on endothelial cells [33]. High cell-surface HLA class II expression may trigger an elevated rate of different allorecognition pathways that ultimately lead to a strong DSA response and pulmonary damage [38,39].
The factors triggering the DSA response are not completely clarified [22,40]. Severe pulmonary infections frequently occur in immunosuppressed recipients. The tissue damage caused by pathogens and the impaired resolution are recognized risk factors for CLAD [17]. P. aeruginosa is commonly isolated from the airways of LuTx recipients, and its role in CLAD progression [19] and in increased DSA risk [20] has been reported. Examining BAL specimens in our recipient cohort, we found similar correlations. How P. aeruginosa provokes DSAs is unclear. Studies on CF patients showed that P. aeruginosa-infected lungs have high B cell numbers [18]. The substantial tissue damage may act as a potent proinflammatory signal for bystander B cell activation. Pathogenic and allo-antigen load may lead to the breakdown of tolerance in susceptible individuals. Furthermore, it has been shown that severe P. aeruginosa infection increased HLA-DR expression on airway epithelial cells, which could enhance allorecognition mechanisms [17].

Several studies have examined BAL immune cell composition and its predictive value for rejection in LuTx recipients [41][42][43]. Elevated BAL neutrophil ratios in LuTx recipients correlated with acute rejection episodes [44][45][46][47] and subsequent CLAD progression [31,42,44]. Relating serum DSA levels to BAL cellularity, we found profound BAL neutrophilia in recipients with high MFI DSAs. What we found particularly interesting is that BAL neutrophilia shifted dynamically in recipients when analyzed across different clinical periods based on DSA level changes. Additionally, BAL neutrophilia had a clear impact on graft loss. We hypothesize that using serum DSA and BAL data simultaneously may provide a unique tool to predict outcome; however, a comprehensive and larger cohort analysis is needed to draw such conclusions.

Our study has limitations. This is a single-center analysis based on a limited number of recipients. The retrospective nature of the study may confound certain results, and the clinical approach inherently leaves the underlying mechanisms hypothetical. MFI is a semiquantitative measure of DSA levels, and the lack of standardized diagnostic protocols may alter DSA cutoff results across different centers [14]. Moreover, serum DSA levels do not reflect the fraction of antibodies deposited in the lungs, which may falsely lead to reduced MFI values.

Nevertheless, a few questions remain open and are yet to be addressed in future studies. It is unknown whether DSAs are important in the initiation or the progression phase of CLAD, which could have therapeutic consequences for when to start desensitization therapy. Our result on CLAD grade favors the former concept. Whether P. aeruginosa causally enhances the humoral response or is a bystander effect calls for further investigation.

Conclusion

DSAs emerge shortly after LuTx, while consequential graft loss or CLAD follows with relative delay, and the time between DSA detection and outcome may be sufficient to apply therapy. However, since all desensitization protocols carry side effects and significantly elevate the probability of infections, we suggest that considering MFI and BAL neutrophilia as prognostic factors may be beneficial for certain recipients and could guide clinicians to the right point at which aggressive intervention is indicated.
Fig. 1 Analysis of AMR in LuTx recipients. (A) Expected adjusted graft survival curves for subpopulations with no AMR, subclinical AMR and clinical AMR, calculated from the fitted univariate Cox-regression model with AMR as a time-dependent variable. The indicated hazard ratio, confidence interval and p-value correspond to the clinical AMR vs. no AMR comparison. (B) Expected adjusted CLAD-free survival curves for subpopulations with no AMR, subclinical AMR and clinical AMR, calculated from the fitted univariate Cox-regression model with AMR as a time-dependent variable. The indicated hazard ratio, confidence interval and p-value correspond to the clinical AMR vs. no AMR comparison. (C) MFI values of DSAs associated with subclinical and clinical AMR. Each dot represents an individual DSA. MFI values in clinical AMR are significantly higher (n = 85, median, one-way ANOVA, p < 0.001). Black horizontal lines indicate mean MFI values within the groups. (D) The subtype specificity of DSAs causing clinical AMR; each dot represents an individual DSA. HLA-DQ was the most common type and had the highest MFI values (n = 56, median, one-way ANOVA, p < 0.0001).

Fig. 2 Univariate graft and CLAD-free survival analysis of LuTx recipients. (A) Expected adjusted graft survival curves for subpopulations with and without the presence of DSA, calculated from the fitted univariate Cox-regression model with DSA as a time-dependent variable. (B) Expected adjusted graft survival curves for the DSA-high, DSA-low and DSA-neg subpopulations, calculated from the fitted univariate Cox-regression model with DSA as a time-dependent variable. The indicated hazard ratio, confidence interval and p-value correspond to the DSA-high vs. DSA-neg comparison. (C) Expected adjusted graft survival curves for subpopulations that did and did not develop CLAD during the follow-up period, calculated from the fitted univariate Cox-regression model with CLAD status as a time-dependent variable. The indicated hazard ratio, confidence interval and p-value correspond to the CLAD-positive vs. CLAD-negative comparison. (D) Expected adjusted CLAD-free survival curves for the DSA-high, DSA-low and DSA-neg subpopulations, calculated from the fitted univariate Cox-regression model with DSA as a time-dependent variable. The indicated hazard ratio, confidence interval and p-value correspond to the DSA-high vs. DSA-neg comparison.

Fig. 4 BAL immunophenotyping of LuTx recipients. (A) The percentage of neutrophils in BAL samples of DSA-neg, DSA-low and DSA-high recipients. Neutrophils in DSA-high recipients showed a significant increase, p < 0.001. (B) The percentage of neutrophils in BAL samples of DSA-high recipients taken during their DSA-neg, DSA-low and DSA-high clinical periods, p = 0.006. (C) Expected adjusted graft survival curves for subpopulations with high vs. low percentages of neutrophils in BAL specimens, calculated from the fitted univariate Cox-regression model with neutrophil percentage as a time-dependent variable.
Practical Pharmacist-Led Interventions to Improve Antimicrobial Stewardship in Ghana, Tanzania, Uganda and Zambia

The World Health Organisation (WHO) and others have identified, as a priority, the need to improve antimicrobial stewardship (AMS) interventions as part of the effort to tackle antimicrobial resistance (AMR). An international health partnership model, the Commonwealth Partnerships for Antimicrobial Stewardship (CwPAMS) programme, was established between selected countries in Africa (Ghana, Tanzania, Zambia and Uganda) and the UK to support AMS. This was funded by UK aid under the Fleming Fund and managed by the Commonwealth Pharmacists Association (CPA) and the Tropical Health and Education Trust (THET). The primary aims were to develop local AMS teams and generate antimicrobial consumption surveillance data, quality improvement initiatives, infection prevention and control (IPC) and education/training to reduce AMR. Education and training were key components in achieving this, with pharmacists taking a lead role in developing and leading AMS interventions. Pharmacist-led interventions in Ghana improved access to national antimicrobial prescribing guidelines via the CwPAMS mobile app and improved compliance with policy from 18% to 70% initially for patients with pneumonia in one outpatient clinic. Capacity development in AMS and IPC was achieved in both Tanzania and Zambia, and a train-the-trainer model on the local production of alcohol hand rub was delivered in Uganda and Zambia. The model of pharmacy health partnerships has been identified as having great potential to be used in other low- and middle-income countries (LMICs) to support tackling AMR.

Introduction

Antimicrobial resistance (AMR) is a globally recognised threat to humanity and has been compared to a slow-burning version of the current COVID-19 epidemic. In 2019, it was recognised as one of the top 10 global health threats, and there is evidence that the COVID-19 pandemic is adding to the risk [1,2]. While substantial efforts have been made to combat AMR globally, concern has been raised about low- and middle-income countries (LMICs) due to the lack of resources, the limited availability of antimicrobial culture and sensitivity testing and the increased utilisation of antimicrobials in these settings [3]. Data on AMR are scarce and incomplete in many LMICs due to a lack of funding, limited guidance and a lack of surveillance. This makes it hard to quantify the issues and prioritise solutions. In the UK, pharmacists play an important role in increasing awareness of AMR and supporting the implementation of antimicrobial stewardship (AMS) interventions and programmes [4]. Pharmacists work alongside prescribers to ensure antimicrobials are prescribed when there is a clinical sign of infection and, when indicated, are prescribed appropriately. The pharmacist role in some LMICs is not as developed, with pharmacists less integrated into clinical multidisciplinary teams, resulting in their leadership potential often being overlooked [5]. Trained pharmacists in LMICs have the potential to lead antimicrobial stewardship programmes, like their counterparts in the UK or USA, and to be part of the solution to overcome the global challenge of AMR. Due to training requirements, medical staff often have short periods of training before moving locations to continue their development. Pharmacists are often in post for longer periods of time and so are ideally placed to lead long-term projects.
In 2019, the UK Fleming Fund funded and established the Commonwealth Partnerships for Antimicrobial Stewardship (CwPAMS), which uses a health partnership approach. This was managed by the Commonwealth Pharmacists Association (CPA) (as the technical lead) and the Tropical Health and Education Trust (THET) (as grant managers) [6]. CwPAMS supported partnerships between health delivery institutions (including NHS bodies in the UK), academic institutions and other health institutions in Ghana, Tanzania, Uganda and Zambia to work together on antimicrobial stewardship (AMS) initiatives. The aim of the partnerships is to enhance the implementation of protocols and evidence-based decision-making to support antimicrobial prescribing and to improve the capacity for antimicrobial surveillance to support tackling AMR. Specified within the call for partnership proposals by THET and the CPA was a requirement to include pharmacists both in the UK and in the four African countries to lead in developing and implementing antimicrobial stewardship interventions [7]. Pharmacists in the UK, as part of the projects, then linked directly with pharmacists in the African countries to support their colleagues in Africa. Traditionally, volunteer opportunities in global health involve spending prolonged time in a country, and improvements are not always sustainable when the volunteer(s) leave [8]. As a novel approach with a strong focus on the pharmacy profession, pharmacists in the UK, with limited funding and restricted time, focused on building the capacity of pharmacists in Ghana, Uganda, Tanzania and Zambia in order to develop sustainable antimicrobial stewardship interventions. Success through these projects has potential wider implications for antimicrobial stewardship in other LMICs. Five of the twelve funded Commonwealth Partnerships for Antimicrobial Stewardship are discussed in more detail in this paper, focusing on the impact of the local pharmacists taking a leadership role within these projects. To further develop the global health leadership skills of pharmacists in the UK, Health Education England and the CPA developed a new global health fellowship programme for pharmacists within the projects: the Chief Pharmaceutical Officer's Global Health Fellowship [9]. This was set up to provide education on global health and the international development of global AMS, leadership skills and additional project management skills to enhance their personal development.

CwPAMS

CwPAMS is a health partnership programme funded by the UK aid Fleming Fund to support partnerships between UK National Health Service bodies, academic institutions and health institutions in LMICs to work together on AMS initiatives globally. The key eligibility criterion for the selection or approval of these health partnerships was that the lead UK institution must be formally recognised as a health education institution, regulatory organisation, NHS body (if in the UK) or public/not-for-profit hospital (if overseas). This can include professional associations, and whilst an academic institution or professional association can act as the official lead for a grant, there must be clear joint leadership from an NHS hospital/institution.
The CPA, as the technical lead, assisted UK AMS pharmacists interested in the programme in connecting with hospital and academic pharmacists interested in implementing AMS programmes in four different African countries, through various national agencies (i.e., national pharmaceutical societies/national antimicrobial agencies) in Ghana, Tanzania, Uganda and Zambia, to bid for a grant through CwPAMS. Of the twelve selected AMS partnerships that met the eligibility criteria and were awarded the AMS grants, the key aspects of five are presented in this paper. Multidisciplinary UK AMS teams travelled to Ghana, Tanzania, Uganda and Zambia to work in partnership with local health workers during the project period from February 2019 to April 2020. In some partnerships, there were also reciprocal visits from LMIC members to UK facilities to gain first-hand experience of practice in the UK. Support from the CPA included the production of training videos to support the local production of hand sanitisers and the development of tools; e.g., an AMS checklist was developed for the CwPAMS partnerships, and a behaviour checklist was developed in collaboration with The Change Exchange to identify the knowledge, attitudes and perspectives of healthcare staff related to antibiotic use and prescribing/administration. A new prescribing app, the CwPAMS app, was also developed. The app provided, for the first time, easy access to national infection management resources, including the Standard Treatment Guidelines (STG) for each country, to improve appropriate prescribing in line with national and international guidelines. Additional resources made available through the app include the WHO essential medicines list, surveillance tools, AMS training and infection prevention and control (IPC) resources. Each project group included at least one pharmacist from the UK and one local pharmacist working in partnership with multidisciplinary teams to support local teams. The remit of the pharmacists was to ensure that education was delivered, that antimicrobial usage surveillance was undertaken where possible and that local working structures were in place to deliver antimicrobial stewardship training to the wider hospital staff and to support long-term antimicrobial stewardship. Multidisciplinary project groups agreed on how long UK pharmacists would be present in the country, as this was not predetermined by the grant. The time the UK pharmacists spent in the country varied from three five-day visits to one or two visits of longer duration, such as 3 weeks. A summary of the number of pharmacists in each project and the time the UK pharmacists spent in their partnership country is included in Table 1. As the duration of the visits was short, the majority of communication was conducted virtually through remote-working technology. The projects reported here were implemented over the project period from February 2019 to April 2020. In addition to the education, training and surveillance that were undertaken, all projects also developed a local focus for improvement. The projects were selected based on the local priorities agreed on within the multidisciplinary teams and were used to engage staff in and improve AMS. A hospital antibiotic stewardship group was established at Keta Municipal Hospital (KMH) to monitor antibiotic use. A Quality Improvement project was initiated and ran from October 2019 to June 2020. The aim of the project was to achieve 70% compliance with the antibiotic policy in outpatient prescriptions for pneumonia by June 2020.
Compliance with the antibiotic policy was assessed for prescriptions issued to ambulatory patients presenting at the outpatient clinic with a diagnosis of mild or moderate pneumonia who received oral antibiotics. Patients admitted to hospital or who received IV antibiotics for pneumonia were excluded. The outcome measured was the percentage compliance with the outpatient antibiotic policy for pneumonia prescriptions. Compliance with the antibiotic policy was determined based on the choice of antibiotics; the antibiotic dose and duration were not assessed. Microbiology results were not available for patients to assess whether the empiric antibiotic treatment was appropriate. The total number of outpatients prescribed antibiotics for pneumonia per week (Monday-Friday) was retrieved from the hospital's e-prescribing software database. Twenty patients per week were then sampled and assessed using a checklist to record the antibiotics prescribed and compliance with the policy. A systematic sampling method was employed in the weekly prescription selection to ensure that prescriptions written by all prescribers (one specialist doctor, four medical doctors and four physician assistants) who attended to patients within a week were part of the sampled prescriptions. A proportional sampling approach based on the number of prescribers within the week was used. The data collection was performed by a pharmacist who was trained to extract the data from the database, and the data were entered onto an Excel spreadsheet. Pharmacists assessed prescription compliance with the antibiotic policy. Quality Improvement methodology is a recognised systematic and iterative approach to generate the maximum change in behaviour. Similar projects using these methods have also been shown to improve antimicrobial prescribing in respiratory patients [10]. The project followed Quality Improvement methodology, using Plan Do Study Act (PDSA) cycles to test solutions on a small scale. The interventions tested during the project included improving the prescribers' access to the antimicrobial guidelines by training them on the use of the CwPAMS app and adapting and agreeing on the local antibiotic policy with prescribers, which was then displayed widely for easy access via posters in all working areas. After an initial project meeting for prescribers that included training on AMS principles, monthly feedback was given on their progress. In addition, barriers to change were discussed with prescribers, and the overall outpatient compliance was also routinely discussed with prescribers individually.

KBTH: 12 Pharmacists Were Trained as CwPAMS App Superusers

The NMUH (UK) team visited Korle-Bu Teaching Hospital (KBTH) (Ghana) in June 2019 and introduced the CwPAMS app to the pharmacy department. Twelve pharmacists were trained on its use, and they were then asked to train and recruit their pharmacist and doctor colleagues and encourage them to download it onto their smart devices. The 12 superusers were provided with a data collection tool and a recommended script to use when promoting the app. This included advice that the recruits would receive a follow-up email from the NMUH team. With permission, the names and email contact details of all newly recruited colleagues were forwarded to the NMUH lead pharmacist. Momentum was maintained with monthly emails and WhatsApp© messages to the 12 superuser pharmacists.
The 12 superuser colleagues further promoted the app to the entire hospital during the World Antimicrobial Awareness Week activities in November 2019, using the posters and advertising material provided by CwPAMS. The CwPAMS app metrics were obtained from data collected by Horizon Strategic Partners. They assessed the frequency of page hits, guide opens and the number of registered users and downloads. A 30-point questionnaire was developed based on a previously published questionnaire [11] and distributed electronically via SmartSurvey®. Recruited KBTH colleagues were emailed the survey and asked to complete it over a four-week period between June and July 2020. The results of the survey were analysed using Microsoft Excel for Office 365©.

Development of Antimicrobial Management Guidelines at Ghana Police Hospital (GPH)

Healthcare Improvement Scotland (UK) visited GPH in May 2019 to partner with a multidisciplinary team of health professionals and facilitate the AMS programme using the Scottish Triad Approach: Information, Quality Improvement and Education [12]. A multidisciplinary antimicrobial stewardship committee was established that developed an action plan for the further development of AMS at GPH. This included conducting a Point Prevalence Survey (PPS) of antimicrobial prescribing and the development of the improvements and guidelines identified as required [13]. The Global-PPS (GPPS) study was conducted for the first time in May 2019, for one day, on all inpatients who had stayed overnight and were present at 8:00 a.m. on the survey day. The data were collected manually from prescription charts and patient notes. The data were then entered into the GPPS web-based application. Antimicrobial utilisation was assessed against the following quality indicators:

• indication of antimicrobial use documented in the patient notes;
• compliance with guidelines for the documented indication (where guidance was available);
• missing/undocumented guidelines;
• documentation of a stop or review date for antimicrobials in the notes (and prescription chart);
• proportion of surgical prophylaxis prescribed for less than or greater than 1 day.

The guidelines used were the seventh edition of the national STGs of Ghana.

Tanzania: Observership and Subsequent Pharmacy-Led Delivery of AMS at Kilimanjaro Christian Medical Centre (KCMC) and within the Local Community

The core team consisted of pharmacists from Northumbria Healthcare NHS Trust (NHCT) and Kilimanjaro Christian Medical Centre (KCMC), supported by an MDT with IPC nurses, microbiologists and pathologists. At the outset of the KCMC-NHCT partnership, a core team of pharmacists (as well as a microbiologist and the medical director from KCMC) undertook a 4-week observership at NHCT, including professional shadowing and a leadership development programme, providing them with the skills and confidence to take the lead in the subsequent AMS project. The programme design is available as Supplementary Material 4 (KCMC Visit to UK_Skeleton programme 03.5.19). The project adopted a multifaceted approach encompassing a wide range of interventions, beginning with a robust point prevalence survey to provide antimicrobial usage data that highlighted the focus for education. Materials such as lectures, workshops and posters were developed and translated into Swahili for delivery.
Dedicated workshops were organised for hospital staff, and, for the first time, the team added antibiotic awareness training to the general education school trips run by hospital staff and to the traffic accident training for boda boda drivers.

Uganda: AMS Training and Local Production of Alcohol-Based Hand Rub (ABHR)

The Kampala Cambridge Antimicrobial Stewardship (AMS) and Infection Prevention and Control (IPC) partnership brought together healthcare staff from various disciplines, including management, medicine, nursing and pharmacy. The project set out to reduce healthcare-associated infections in the obstetric and neonatal wards at the Kawempe National Referral Hospital (KNRH) and the Mulago Specialised Women and Neonatal Hospital (MSWNH) through an interactive educational package. The content of the training was developed in collaboration with the Ugandan partners and two behavioural scientists and focused on clinical and management training in antimicrobial stewardship and infection control. A unique core element of the training centred on promoting the value of pharmacists in multidisciplinary settings, with the aim of promoting future collaborative working. This was achieved through individualised group activities and role play. The materials and equipment used in the training were shared with the Kampala partners to support them in developing local training, with oversight from their medicines and therapeutics committee. IPC is an integral part of global and national action plans to tackle AMR, and hand hygiene is a key component of reducing the spread of infection. Thus, enhancing the availability and the appropriate use of ABHR among health workers remains part of the key AMS strategies in healthcare facilities, especially in LMICs. With the support of the health partnership and hospital administrators, one pharmacist at KNRH received training at a regional district hospital in the large-scale manufacturing of alcohol-based hand rubs to meet the needs of their organisation. A train-the-trainer model was then employed.

Zambia: Infection Prevention and Control and AMS Training

The Brighton-Lusaka Pharmacy Link (BLPL) conducted a three-day conference in Zambia for national-level stakeholders and institutions interfacing with the national AMR strategy, including an AMS train-the-trainer workshop for University Teaching Hospital (UTH) pharmacists, doctors, nurses and allied healthcare professionals to increase awareness of AMS and provide capacity-building tools. This included detailing the UTH AMR patterns, the use of point prevalence surveillance (PPS) methodology, multidisciplinary team (MDT) approaches, enhancing appropriate prescribing, IPC (including a bare below the elbow (BBE) dress code) and AMS rollout training methodologies, thereby encompassing the World Health Organisation AMR prevention strategies. To foster intra-country collaboration, BLPL approached Ndola Teaching Hospital (NTH) (which had previously been IPC trained via a partnership with Guy's & St Thomas') to provide initial expertise and training for IPC, which UTH used for rollout and future trainings targeting healthcare workers in Zambia. Context-appropriate posters were developed, and alcohol hand rub production capacity and facilities were enhanced across three other tertiary-level public teaching hospitals in the country. Three-day on-the-job IPC trainings facilitated by UTH and NTH staff were conducted at UTH, LCTH, KCTH and LMUTH, respectively.
This training included the WHO-recommended hand sanitisation techniques, including when hand hygiene should be performed (i.e., the "five moments of hand hygiene") for optimum effectiveness in the prevention of infection, plus techniques for producing alcohol-based hand rubs using the WHO modified formulations and their subsequent packaging, storage and utilisation. UTH pharmacists led the development of a national AMS training manual, currently undergoing accreditation by the Health Professions Council of Zambia, the national regulatory body for the continuous professional development (CPD) of health workers in Zambia. Clinical audit training was commissioned to ensure that the UTH AMS MDT developed their skills in monitoring and evaluation and to ensure ongoing project sustainability and development.

CwPAMS

The video on using the WHO formulation was viewed 847 times on YouTube. Following the launch of the CwPAMS app for the 12 institutions across the four countries, there were 530 downloads of the app and 2795 guide opens within 12 months. Ghana had more page hits (50.3%) than Uganda (31%), Tanzania (13%), Zambia (1.9%) and others (3.8%) within 12 months.

Summary of Key Activities and Outcomes across the Five Partnerships

An overall summary of the activities of the partnerships included in this study is provided in Table 2. All partnerships conducted AMS activities and delivered education. An initial/baseline point prevalence survey (PPS) of antimicrobial use was conducted in the institutions of four of the five partnerships, and a post-intervention PPS was undertaken by two partnerships (the Healthcare Improvement Scotland partnership in Ghana and Northumbria Healthcare NHS Foundation Trust in Tanzania). As a result of the partnerships, all institutions now have a functioning formal organisational multidisciplinary structure for AMS (e.g., a committee or group) that focuses on or takes responsibility for appropriate antimicrobial use. For each partnership, a description of one or two unique completed and/or ongoing activities is provided, and an assessment of the outcomes is given.

Quality Improvement Project at Keta Municipal Hospital (KMH)

A total of 757 prescriptions were assessed during the 39 weeks of the Quality Improvement project. Compliance with the antibiotic policy improved from 18% to 70% within 3 months but then fell to 57%, which was sustained until June 2020 (Figure 1). The aim of the project, to achieve 70% compliance with the policy, was met initially but not sustained. The improved compliance was not sustained due to a prescriber changeover of approximately 60% just before the COVID-19 pandemic. Planning for the COVID-19 response reduced the time available for the training of new staff. However, it was encouraging to see that compliance with the policy did not return to baseline at this time. Successful interventions included giving monthly feedback to prescribers, both individually and in a meeting format, and improving prescribers' access to the guidelines by displaying the antibiotic policy as posters on the walls of all working areas.

Local Pharmacists Lead on Implementing the CwPAMS Antimicrobial Guidelines App at Korle-Bu Teaching Hospital (KBTH)

Fifty-five KBTH colleagues' contact details were collected after they were recruited to download and use the app by the twelve KBTH superuser pharmacists: 32 doctors, 16 pharmacists and 7 nurses.
Of the 55 colleagues eligible to complete the survey, 16 were excluded due to incomplete contact details: five doctors did not provide an email address, and 11 colleagues (10 doctors and one nurse) provided incomplete email addresses. Of the remaining 39 colleagues, less than half (17/39; 45%) responded to the survey. The majority (16) were pharmacists, and one was a microbiologist; 16 had heard about the app during the June 2019 visit, 11 used the app once a week, and 4 had not used the app at all since downloading it. Thirteen stated that they found the app easy to navigate, and 14 reported that it was easy to recommend to colleagues. Data from the app showed that the highest number of hits on the CwPAMS app Ghana page was in October 2019 (827), followed by July 2019 (569) and then November 2019 (438) (Figure 2). Hits by doctors were highest in October 2019 (158), followed by February 2020 (72) and then November 2019 (50). There was a low response rate to the survey assessing the effectiveness of the app among medical colleagues. Issues with using the app were identified as technical issues with the app itself and a lack of data/Wi-Fi. Although 16 respondents stated that they would recommend the app to colleagues, only three (17%) recommended the app to more than six people. The data for the app represent all hits on the CwPAMS app Ghana page, and Table 2 shows that other Ghana partnerships also promoted the app. The CwPAMS rollout of the app in Korle-Bu was in June 2019 and corresponded to the increase seen in July in people accessing the app. The education and promotion of the app in the other partnership hospitals during October corresponded to the October peak in use of the app (Figure 2). Lower levels of engagement with the app were seen starting in March 2020. This may be due to the pandemic and the fact that COVID-19 guidance was not included, or it could have been because prescribers already had the app.

Development of Antimicrobial Management Guidelines at Ghana Police Hospital (GPH)

The baseline PPS conducted at GPH in May 2019 as part of the CwPAMS project highlighted that, for many indications, guideline compliance could not be assessed due to high levels of missing or undocumented guidelines (Table 3). This observation was prominent in two departments, obstetrics and gynaecology (OBGY) and surgery, and formed the basis of the work plan developed (Supplementary Materials 1). Following further discussions with the surgical and nursing teams, it was highlighted that protocols for managing a range of conditions, including caesarean operations, post-delivery prophylaxis and surgical wound management, were in use but were not available through the national standard treatment guidelines (STGs) or formally agreed-on local protocols. To enable the assessment of appropriate prescribing, the AMS team, led by the local pharmacist, communicated the findings from the PPS to the hospital staff at a clinical meeting and highlighted the importance of having approved guidelines for caesarean delivery, post-delivery prophylaxis and the management of surgical wounds in the respective facilities. To promote the acceptance and use of the guidelines in the units, the AMS team collaborated with the various units, through their personnel on the AMS committee, to draft the final documents.
The OBGY specialists and senior nurses developed the protocols for the OBGY department, and the surgeon on the AMS team, who was also the head of the surgical department, led the development of the draft wound management protocols with the surgeons in his unit based on their experience/training (Supplementary Materials 2). For both guidelines, the local pharmacist, as part of the AMS Committee at GPH, worked with the senior medical/specialist and nursing staff to develop the antimicrobial management guidelines within the protocols by assessing the appropriateness of dose, frequency and duration using the British National Formulary, as well as the protocols used at one of Ghana's tertiary national referral hospitals (Korle-Bu Teaching Hospital). These guidelines were then implemented within the hospital. The antibiotics and other drugs required are now included on pre-printed patient medication lists issued routinely for patients receiving caesarean delivery or normal delivery and for those who require post-delivery prophylaxis (Supplementary Materials 3). Adherence to the guidelines developed for the obstetrics and gynaecology department, based on the results of the post-intervention PPS carried out in February 2020, was found to be 100%. Overall prescriber compliance with the national STG at GPH increased from 64.6% in the May 2019 PPS to 88% in the February 2020 PPS.

Tanzania: Observership and Subsequent Pharmacy-Led Delivery of AMS at Kilimanjaro Christian Medical Centre (KCMC) and within the Local Community

AMS and IPC training was delivered to a total of 1056 people, including healthcare professionals at KCMC and the wider community, including schools and boda boda taxi drivers (Figure 3). More than one-third of the participants in the training sessions (36%) were not aware of the main principles of AMS before the training. The education campaign then extended beyond the scope of the sessions above, with links being set up between the Kilimanjaro School of Pharmacy and Newcastle University Pharmacy School. These resulted in a joint AMS module being developed for delivery to early-years pharmacy trainees, ensuring that pharmacists of the future continue to lead AMS activity within Tanzania and the UK. This module is also being considered for adaptation to the medical and dentistry schools within Moshi, Tanzania. Furthermore, the educational lead pharmacist at KCMC, who took part in the four-week observership (Supplementary Material 4), was able to share the work done within the project and went on to be appointed Chair of the National Tanzania AMS Council.

Uganda: Local Production of Alcohol Hand Rub

Forty-two healthcare workers received the initial training in AMS; five were pharmacists. Using the train-the-trainer model, the pharmacist trained in large-scale alcohol-based hand rub production then trained five pharmacist interns in how to produce the hand rub from March to September 2020, and a second cohort of interns (six in total) was undergoing training at the time of publication. After receiving the AMS and IPC training, hospital administrators and managers were found to be more receptive to supporting pharmacists with the local onsite production of alcohol hand rub, initially at one hospital and then rolling out to a second hospital. In the absence of monthly audits of hand hygiene compliance, a crude measure of alcohol hand rub purchases was used as a proxy for consumption to measure the success of the interventions.
Where pure ethanol (96%) was procured by Kawempe National Referral Hospital (KNRH) for alcohol hand rub manufacture, the equivalent final volume was used, based on the WHO formula (reference below). Purchases of alcohol hand rub by pharmacists also increased at the second partner hospital, Mulago Specialised Women and Neonatal Hospital (MSWNH), as part of a multifaceted approach to improve IPC among healthcare workers and staff (see Figure 4).

Zambia: Infection Prevention and Control and AMS Training

In total, 297 healthcare workers from a variety of professions (including pharmacists, doctors, nurses, allied health workers, hospital administrators, porters and cleaning staff) were trained in IPC, with 36 more trained in hand rub production at the four participating hospitals (Table 4). The project foundations have coped with demand, with alcohol hand rub production increasing to over 120 litres per day. A local "bare below the elbow (BBE)" dress code for health professionals on the wards was implemented, and BBE practice was also endorsed nationally by the Hospital Pharmacists Association of Zambia (HOPAZ). Additional equipment was procured and installed at the four tertiary hospitals to develop alcohol hand rub production facilities (using a WHO-approved formula), and alcohol hand rub dispensers were installed on UTH wards. With pharmacists leading, an AMS training manual was developed by the CwPAMS project team to facilitate further MDT education and continuous professional development (CPD) in AMS among healthcare workers. The AMS training modules were validated by local stakeholders and are currently being reviewed for University of Zambia (UNZA) accreditation.

Discussion

Empowering pharmacists in Ghana, Uganda, Tanzania and Zambia to lead AMS activities and raise awareness of AMR through structured collaboration with UK pharmacists has had a positive impact. Pharmacists within the participating countries have raised awareness of AMR and have developed and delivered several AMS interventions, including education and training and the conduct of PPS. They have locally led AMS interventions by engaging with colleagues, e.g., the cascade of the CwPAMS app, an Antibiotic Guardian pledge-based campaign, WAAW activities and guideline compliance work, and they have also engaged with the public, e.g., boda boda drivers in Tanzania. Using a quality improvement approach helped in the identification of prescribing barriers, such as low trust in generic medication, fear of treatment failure with first-line antibiotics and a lack of awareness of the policy. Sociocultural barriers were also identified, such as a reluctance to access the antibiotic policy via the CwPAMS Antimicrobial Guideline app on a mobile phone while taking the patient history, for fear of being seen to be using social media; identifying these allowed the pharmacists to work with their prescriber colleagues to address the local barriers to change. Engaging staff over a longer period of time with regular feedback, similar to other studies, and incorporating the feedback of surveillance data collected from, e.g., PPS within the projects, was shown to lead to positive effects, including the development of new protocols led by clinicians with support from the pharmacists as part of the institutions' AMS teams [12,13]. In addition, pharmacists also led IPC interventions, including Bare Below the Elbow (BBE) campaigns and the large-scale production of alcohol hand rub during the COVID-19 pandemic [14].
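As a hedged, illustrative companion to the WHO-formula volume conversion used at KNRH and the large-scale production just mentioned: the WHO guide to local production (formulation 1, ~80% v/v ethanol final) is commonly cited as specifying, per 10 L batch, approximately 8333 mL of 96% v/v ethanol, 417 mL of 3% hydrogen peroxide and 145 mL of 98% glycerol, topped up with sterile distilled or boiled water. On that basis, the "equivalent final volume" obtainable from procured 96% ethanol works out to roughly V_final = V_ethanol × 0.96/0.80 = 1.2 × V_ethanol. These quantities should be verified against the current WHO document before any production use; the sketch below simply scales the recipe.

```python
# WHO local-production formulation 1 (quantities per 10 L batch, in mL), as
# commonly cited; verify against the current WHO guide before production use.
FORMULATION_1_PER_10L = {
    "ethanol 96% v/v": 8333,
    "hydrogen peroxide 3%": 417,
    "glycerol 98%": 145,
    # sterile distilled or boiled water: top up to the 10,000 mL mark
}

def batch_volumes_ml(batch_litres: float) -> dict:
    """Linearly scale the 10 L recipe to an arbitrary batch size."""
    return {k: round(v * batch_litres / 10) for k, v in FORMULATION_1_PER_10L.items()}

def equivalent_final_volume_l(ethanol_96_litres: float) -> float:
    """Final ~80% v/v hand rub volume obtainable from procured 96% ethanol."""
    return ethanol_96_litres * 0.96 / 0.80

print(batch_volumes_ml(120))             # e.g., a 120 L/day production run
print(equivalent_final_volume_l(25.0))   # 25 L of 96% ethanol -> 30.0 L hand rub
```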
Bidirectional learning was also evident, particularly during COVID-19 management in the UK, where pharmacists in Zambia trained UK pharmacists to locally produce alcohol hand rub using the WHO hand rub formula. The pharmacists who were part of the CwPAMS projects are now considered core members of their respective AMS committees, providing a sustainable model for AMS within these hospitals. Using a variety of methods and approaches, but always focusing on local priorities and being led by local pharmacists, will have an impact on patient care, improving infection management and supporting the actions to tackle AMR. Like their colleagues in the UK [4], they now play an important role in leading AMS within their institutions. The collaboration not only highlighted the importance of tackling AMR, but it also raised the profiles of the pharmacists, resulting in their unique skills as pharmacists being recognised within the MDT environment. The skills and experiences gained during the project have given local pharmacists the opportunity to expand their roles. Some pharmacists have taken on a more clinical pharmacy role. Many are now embedded as key members of MDT clinical teams, working with their prescribing colleagues to improve compliance with the antibiotic policy and conducting ward rounds. Pharmacists involved in the projects were also able to link with national committees to support national efforts to tackle AMR through AMS and the surveillance of antimicrobial use, as well as working with external organisations, including universities, to develop modules for pre- and post-service education on AMR. An example is in Tanzania, where the education lead pharmacist and member of the CwPAMS project at KCMC was recently appointed Chair of the national Tanzania AMS team and has successfully introduced AMS training modules as part of the healthcare schools' curricula within medicine, pharmacy and dentistry. The approach of collaborating via different methods, with a focus on education and training and the rollout of a train-the-trainer approach, reduced the overall cost and the time pharmacists from the UK were required to be in the partnership countries. This was a successful approach that could be replicated elsewhere to improve AMS. Education resources from the CwPAMS project and the CPA will make it easier for others to increase awareness of AMS in other LMICs and replicate the local engagements, finding their own focus for improvements in AMS. While there have been many successes in the projects, these have not been without challenges. In Zambia, funding was initially a barrier to scaling up alcohol hand rub production to other facilities. Barriers relating to prescriber attitudes, the use of apps (due to the cost of internet data and the space the app took up on phones) and communication within the multidisciplinary team (MDT) were also identified as issues within the Ghana project. Traditional pharmacy roles need to be challenged in order to allow pharmacists to fulfil their potential within the MDT environment. Ongoing support will be needed to ensure sustainability, along with continued support from hospital managers. Government support will also be required to sustain the IPC interventions, particularly with the current COVID-19 pandemic.
These projects have been very successful at the local level, but AMR awareness and the benefits of AMS are still not widely understood or discussed, and more needs to be done to connect those working locally with teams working within governments to tackle AMR nationally and, also, to join the effort internationally in order to engage more people in AMR and AMS conversations. Urgency is needed to get the message heard on a larger scale. The WHO has recognised AMR as a global health threat, and the barriers need to be overcome in order to mount a global response to this issue. The CPA has made great strides towards this aim, both by supporting this work and by providing practical advice and resources for pharmacists in LMICs, including the CwPAMS toolkit and checklist (https://commonwealthpharmacy.org/cwpams-toolkit/, accessed on 1 November 2020). The newly released CPA continuing professional development platform has an antimicrobial module supporting pharmacists in their efforts to reduce AMR by providing educational and networking opportunities. The CwPAMS toolkit brings together resources and shared experiences from the CwPAMS projects within the framework of the Core Elements for AMS. It also aims to signpost published works or provide CwPAMS project resources to support each component of the Core Elements of AMS. The toolkit may be used by healthcare facilities to identify their own AMS priorities and implement a workplan at the local level. While this is a step in the right direction, more is still needed to spread the AMR message further. One of the strengths of this study is that it adds to the body of evidence on the implementation of AMS interventions/programmes, especially within African countries [15-23]. A recent systematic review that focused on the extent of the implementation of stewardship programmes in Africa identified only thirteen primary studies: five studies from three countries in East Africa (Tanzania, Kenya and Sudan) and one study from Egypt in North Africa, with no studies identified within West Africa, although that region has the greatest number of countries in Africa [15-23]. A limitation of some of the data presented is that they include, in some examples, small sample sizes and preliminary data, because these are examples of the pharmacy health partnership model. This makes it difficult to draw conclusions from the data presented; however, it provides an illustration of the progress made within the partnerships. The COVID-19 pandemic had a substantial impact on most projects, as it has had on much improvement work throughout the world at this time. In particular, the rollout and spread of initiatives and data collection were impacted. The need to focus on the COVID-19 pandemic response has had a negative impact on implementing AMS interventions, in a similar manner to many countries globally [24]. This also prevented the collection of qualitative feedback that would have improved the evidence available on the impact of the projects. The model of UK pharmacists supporting colleagues within African countries has been identified as a model with great potential to be used widely, both with other LMICs trying to tackle AMR and for further development in other areas of pharmacy where UK pharmacists are well established, e.g., clinical pharmacy roles.
Conclusions

Empowering pharmacists in Ghana, Tanzania, Uganda and Zambia to lead within their own healthcare settings, supported by their pharmacist colleagues in the UK, encouraged them to embrace their antimicrobial stewardship role, enhanced clinical pharmacy practices within the organisations and raised the profile of pharmacy both locally and nationally. Every country and hospital had a different focus for improvement based on local needs, from improving prescribing, developing guidelines and establishing new roles for pharmacists to a renewed focus on IPC and novel approaches to resolving local problems with local solutions. All the local pharmacists in the projects embraced their new roles of promoting antimicrobial stewardship and ensuring that the improvements made in partnership with their colleagues in the UK are sustainable. This shows that the model used has great potential for further development in the future. The examples shared within this article have the potential to enable the setup of similar, pharmacist-led initiatives in other LMIC institutions.
High-Precision Lens-Less Flow Cytometer on a Chip

We present a flow cytometer on a microfluidic chip that integrates an inline lens-free holographic microscope. High-speed cell analysis necessitates that cells flow through the microfluidic channel at a high velocity, but the image sensor of the in-line holographic microscope needs a long exposure time. Therefore, to solve this problem, this paper proposes an S-type micro-channel and a pulse injection method. To increase the speed and accuracy of the hologram reconstruction, we improve the iterative initial constraint method and propose a background removal method. The focus images and cell concentrations can be accurately calculated by the developed method. Using whole blood cells to test the cell counting precision, we find that the cell counting error of the proposed method is less than 2%. This result shows that the on-chip flow cytometer has high precision. Due to its low price and small size, this flow cytometer is suitable for environments far away from laboratories, such as underdeveloped areas and outdoors, and it is especially suitable for point-of-care testing (POCT).

Introduction

Cell analysis using an optical microscope or a flow cytometer is an important technique in biology and medicine [1]. Optical microscopes can obtain focus images of cells for biomedical applications, and flow cytometers can collect the signatures of a large number of cells in liquid specimens with a high analysis speed. However, these instruments are unsuitable for outdoor and undeveloped areas because of their high price and large size. Currently, there is a need for a small and inexpensive cell analysis device that combines the properties of the above two devices. Over the past decade, lens-less imaging has been considered a good way to reduce the volume and cost of cell analysis tools. Seung Ah Lee and Guoan Zheng designed opto-fluidic microscopes using a complementary metal oxide semiconductor (CMOS) image sensor (CIS) and a microfluidic channel [2-7]. To weaken the shadow-imaging diffraction, the distance between the cells and the surface of the image sensor must be shorter than 2 µm. These researchers mounted a micro-channel on a CIS by removing the protective glass and Bayer filter. To improve the spatial resolution of the cell images obtained with a 4× objective lens, they used a multi-frame super-resolution algorithm based on the sub-pixel movement of cells flowing through the micro-channel. At the same time, Aydogan Ozcan and Serhan O. Isikman designed numerous lens-free on-chip microscopes based on incoherent digital holography [8-25]. The lens-free on-chip microscopes capture digital diffractive images of cells by using an in-line holographic structure. The diffractive images were used to reconstruct clear images of the cells using angular spectrum theory [26], and the resultant clear cell images are comparable to those obtained by a 10× objective lens with a numerical aperture of ~0.1-0.2. Later, Se-Hwan Paek and Sungkyu Seo proposed a new method to classify different types of cells using digital diffractive images [27-30]. Mei Yan and Hao Yu conducted a blood cell analysis with single-frame super-resolution [31]. The concept of a lens-less microscopy technique is a novel idea for the miniaturization of flow cytometry, but the accuracy and speed of cell counting in such a method are challenges.
At present, most devices based on a lens-less platform use only one frame to count cells, and this leads to inaccurate cell counting [27]. It is not easy to distinguish between cells and dust using a static image, which has a great influence on the ability to count with high precision. In this manuscript, we propose an on-chip flow cytometer system based on lens-less imaging and a microfluidic control technique to improve the speed of cell analysis. The system causes cells to flow through a micro-channel in a polydimethylsiloxane (PDMS) microfluidic chip above a CIS. A near-coherent light source is mounted above the microfluidic chip (~5 cm), and diffraction shadow images of cells generated by the near-coherent light source are then captured by the CIS. To obtain clear images of cells, a phase iterative reconstruction algorithm is used for the diffraction images [32]. In addition, the system can obtain a very accurate cell-free image for background removal. After the background is removed, images of each segmented cell can be acquired from the whole image more precisely. Therefore, we can more accurately extract features from each cell image and quickly classify and count cells. Because of the low intensity of the near-coherent light caused by the pinhole, the exposure time of the image sensor in the system is longer than 400 ms. Therefore, there is strong motion blur while the cells flow quickly in the micro-channel. To solve this problem, this manuscript proposes a method in which the cells in the micro-channel are imaged simultaneously in a large field of view (FOV) instead of with a flow cytometer method in which the cells pass through the testing area at high speed. In other words, the method takes advantage of the larger FOV of the CIS to reduce the cell flow velocity. To utilize the large FOV of the CIS, we design an "S" channel shape. As a result, we can ensure that the CIS captures the maximum possible number of cells in a frame. In addition, the cells in the current frame flow out of the micro-channel completely before the next exposure of the CIS. Thus, all the cells in each frame are new cells, and the cells in each individual frame can be evaluated to increase the number of tested cells. Regarding cost, the CIS is commonly used in industrial cameras and mobile phones, so the price is very low (below $10). The microfluidic chip comprises a PDMS channel and a piece of thin glass (0.18 mm), making it very cheap and easy to replace. Overall, this manuscript proposes an on-chip cytometer that can test blood cells, bacteria and other micro-particles in liquids. Because of its low price and small volume, the system is especially suitable for places far away from the laboratory and undeveloped areas and for family health tests.

System Setup

The flow cytometer utilized a lens-less imaging technique based on an in-line holography structure, and the overall structure is shown in Figure 1.
As shown in Figure 1, the flow cytometer comprised a greyscale CIS (Aptina MT9P031, Micron Technology, Pennsylvania, ID, USA), a PDMS microfluidic chip and a blue light-emitting diode (LED) light source (central wavelength of ~465 nm). The pixel size of the CIS was 2.2 µm, the effective pixel count was 2592 (H) × 1944 (V) (5.7 mm × 4.2 mm), and the imaging area reached ~24.4 mm². To obtain holographic diffraction patterns on the surface of the CIS, the blue LED was located 5 cm above the surface of the image sensor. In addition, there was a plate with a pinhole (diameter of 0.1 mm) at the front of the LED to obtain a coherent light source. To utilize the large FOV of the CIS, an S-type micro-channel was designed that could easily determine the volume of liquid samples and count the maximum possible number of cells in a frame.
Moreover, the concentration of cells in a specimen could be calculated accurately, similar to a classic cell counting chamber. We used a PDMS channel and a piece of thin glass bonded together to obtain a microfluidic chip, which captured the holograms of the cells (their diffractive shadow images), and fixed the microfluidic chip on the surface of the CIS. We briefly introduce the fabrication process of the microfluidic chip below. A photoresist (SU-8 2015, Microchem, Westborough, MA, USA) and a silicon wafer (4 inches in diameter) were used to fabricate the positive mold. First, 3 mL of photoresist was dropped onto the center of the wafer, and the photoresist film was 30 µm thick after spin coating at 1500 r/min for 15 s. Then, the silicon wafer was pre-baked for 15 min at 95 °C. The pre-designed channel photolithography plate was used for a 125 s exposure on the lithography machine. Next, the exposed wafer was post-baked for 3 min at 95 °C and developed for 3 min. Then, we poured 30 g of liquid PDMS onto the positive mold and placed it in a baking oven for 40 min at 95 °C to solidify. The solidified PDMS layer and a piece of thin glass were bonded by a vacuum plasma technique. Finally, inlet and outlet holes were drilled in the PDMS layer to finish the microfluidic chip. However, since a microfluidic chip was used, a cell sample could be continuously detected, similar to a flow cytometer, as shown in Figure 2.
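A minimal sketch of how such a chip can be run as a flow cytometer follows below; the pulse-injection timing it implements is described in detail in the next paragraph. The `pump`, `sensor` and `count_cells` objects are placeholders standing in for the real hardware and image-processing APIs, and the channel width and unfolded length are invented for illustration; only the 30 µm channel height and the 1:400 dilution come from the text.

```python
import time

def acquisition_cycle(pump, sensor, count_cells, exposure_s=0.4, flush_s=2.0):
    """One pulse-injection cycle: image a stationary sample, then replace it."""
    pump.stop()                          # hold the cells still to avoid motion blur
    frame = sensor.capture(exposure_s)   # long exposure over the whole S-channel FOV
    pump.start()                         # inject fresh sample while the frame is analysed
    n = count_cells(frame)               # reconstruction + segmentation + counting
    time.sleep(flush_s)                  # let all imaged cells leave the channel
    return n

def concentration_per_litre(total_count, frames, dilution=400,
                            height_m=30e-6, width_m=0.8e-3, length_m=40e-3):
    """Counting-chamber-style estimate; width and length here are assumed values."""
    volume_l = height_m * width_m * length_m * 1e3   # channel volume, m^3 -> L
    return total_count / (frames * volume_l) * dilution
```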
Next, we prepared an experimental platform to obtain the features and parameters of the proposed system. In addition, we found that the exposure time of the image sensor in this system was greater than 400 ms. Unfortunately, motion blur is caused by the movement of cells in the sample when the image sensor is operating during the exposure time. Therefore, we considered that, instead of the cells flowing through the detection area at high speed, a large number of cells should pass through the exposure region at one time. In other words, the system utilized the large FOV of the CIS to obtain a large number of images of cells from each frame. To avoid the motion blur caused by cell flow, we used a method of periodically controlling the flow velocity of the specimen. There was only one inlet and one outlet in the micro-channel, ensuring that the flow of all the tested cells out of the micro-channel and the flow of new cells into the micro-channel took a short time. To obtain sufficient processing time for the image processing algorithm, the new cells were injected into the micro-channel during the image processing period. Subsequently, all the tested cells flowed out of the micro-channel, and then the flow of the cells stopped and the cell images were captured by the image sensor. With several repetitions, the device was able to collect the maximum possible number of cell signatures to improve the accuracy of the analysis.

Sample Preparation

The flow cytometer is suitable for samples with a large number of cells, such as blood. Therefore, we performed an experiment with whole blood. The concentration range of red blood cells (RBCs) in whole blood is from ~4 × 10¹²/L to 5.5 × 10¹²/L. To ensure the reconstruction of the wavefront in the in-line holography system, we had to reduce the concentration of cells in the whole blood. According to the experiments, we found that 1:400 was a suitable volume dilution to count blood cells. When RBCs were tested, the dilution ratio was 1:400, corresponding to 10 µL of whole blood diluted with 4 mL of phosphate-buffered saline (PBS, 0.0067 M PO₄), and the resulting solution was pumped into the microfluidic chip for testing.
The concentration of white blood cells (WBCs) in whole blood is 4 × 10⁹/L–10 × 10⁹/L, and the ratio of WBCs to RBCs is close to 1:1000. When WBCs were tested, 200 μL of a whole blood sample was diluted with 400 μL of RBC lysis buffer, and this was then injected into the micro-channel after a one-minute delay. The study was approved by the School of Automation and Information Ethics Committee, Xi'an University of Technology.

Reconstruction of Lens-Less Holographic Images

The lens-less imaging technique utilizes the in-line holographic structure proposed by Gabor [33] to reconstruct the image of the cell plane. The lens-less holographic imaging system is mainly composed of a blue LED light source, a pinhole plate, a microfluidic chip, and a CIS, as shown in Figure 3. Because of the infinitesimal size of blood cells (~2–15 μm), the shadows of the blood cells on the surface of the CIS are diffraction images. Owing to diffraction, the shorter the wavelength of the light source, the higher the spatial resolution of the microscopic image. Among the most commonly used single-frequency LED light sources, a blue source has the shortest wavelength, so we chose a blue LED as the light source. For convenience, we assumed that the cell plane was the object plane and that the surface of the CIS was the image plane. The distance from the pinhole to the object plane was d1, and the distance from the object plane to the image plane was d (d1 >> d).
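Because the pinhole is far from the chip relative to the object-sensor gap, the hologram is recorded at essentially unit magnification, so the usable field of view is the full sensor area. Under the standard point-source in-line holography relation, and taking d ≈ 0.875 mm (the object-to-image distance quoted for the simulations below), the fringe magnification is

$$M = \frac{d_1 + d}{d_1} \approx \frac{50\,\text{mm} + 0.875\,\text{mm}}{50\,\text{mm}} \approx 1.02.$$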
According to the angular spectrum theory of diffraction, we can reconstruct an image of the object plane from a recording of the image plane. We assumed that the transmittance of the sample was O(x, y), which determines the complex amplitude of the wavefront just behind the object plane (Equation (1)). Here, the image plane is taken as the plane z = 0, and the object plane as the plane z = d. According to the Rayleigh-Sommerfeld diffraction theory, the propagation of light waves between two planes separated by a distance d is described by a transfer function H_d(ε, η) (Equation (2)), where ε and η denote the frequency-domain coordinates obtained from the spatial coordinates x and y, and λ is the wavelength of the light source. Applying the transfer function yields the complex amplitude of the image plane (Equation (3)), where H₊d and H₋d represent the optical forward and backward propagation operators; each carries out a fast Fourier transform, a multiplication by the transfer function, and an inverse fast Fourier transform, i.e., a convolution operation. Here, d denotes the propagation distance, that is, the distance between the object plane and the image plane, and + and − denote forward and backward propagation along the z-axis, respectively. The light intensity in the holographic plane recorded by the image sensor is the squared amplitude of the light wave (Equation (4)). In Equation (4), U₀(x, y) is the complex amplitude of the actual light wave in the image plane, but the image sensor receives only the light intensity, I₀, and the phase is discarded. Normally, the image sensor's acquisition of the holographic-plane light intensity is a linear process, so the intensity information collected by the image sensor can be expressed as a scaled version of I₀ (Equation (5)). The amplitude of the light wave in the object plane can then be estimated by back-propagating the recorded image over the distance d (Equation (6)). Combining Equations (1)-(6) yields Equation (7), in which the first term is the direct-current (DC) component; the second term is the focused image; the third term is the holographic (twin) image, i.e., the focused image propagated backward by a distance of 2d; and the fourth term is the intermodulation. The second and third terms constitute the twin image, which still appears after the forward-transfer reconstruction of the diffraction plane and is difficult to separate. In fact, the twin-image phenomenon, which is caused by the absence of the light phase, is a major problem in in-line holographic systems. To illustrate it, we used micro-bead images obtained with a 10× objective lens to simulate the twin-image problem (Figure 4).
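In standard angular-spectrum notation, the transfer function of Equation (2) and the recorded intensity of Equation (4) presumably take the form

$$H_d(\varepsilon, \eta) = \exp\!\left[\,i\,\frac{2\pi d}{\lambda}\sqrt{1 - (\lambda\varepsilon)^2 - (\lambda\eta)^2}\,\right], \qquad I_0(x, y) = \left|U_0(x, y)\right|^2.$$

A minimal NumPy sketch of the H₊d/H₋d operators follows; the function name and signature are our own, and evanescent frequency components are simply zeroed:

```python
import numpy as np

def angular_spectrum_propagate(u0, d, wavelength, pixel_size):
    """Propagate the complex field u0 by a distance d (use d < 0 for H_-d)."""
    ny, nx = u0.shape
    # Frequency-domain coordinates (the epsilon/eta of Equation (2)).
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    fxx, fyy = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    # Transfer function; evanescent components (arg <= 0) are dropped.
    h = np.where(arg > 0,
                 np.exp(2j * np.pi * d / wavelength * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    # FFT -> multiply -> inverse FFT, i.e., a convolution.
    return np.fft.ifft2(np.fft.fft2(u0) * h)

# Toy example with the system's parameters: a weakly absorbing "cell"
# 0.875 mm upstream of the sensor produces a diffraction pattern.
obj = np.ones((512, 512), dtype=complex)
obj[250:262, 250:262] = 0.3
u_img = angular_spectrum_propagate(obj, 0.875e-3, 465e-9, 2.2e-6)
hologram = np.abs(u_img) ** 2   # what the CIS records (phase discarded)
```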
According to Gabriel Koren's research [32], a single diffraction pattern can be used to reconstruct a focused image of the object plane while suppressing the twin-image phenomenon. In our proposed algorithm, only a holographic diffraction image and a cell-free background image are needed to reconstruct the phase and obtain a focused image of the object plane. The general steps are as follows, and a code sketch of the full loop appears after Step 4.

Step 1: Using the square root of the recorded light intensity and an initial phase value (generally 0), transfer the diffraction pattern of the image plane back to the object plane with the transfer function to obtain a first estimate of the focused image (Equation (8)). This initial estimate of the object plane suffers severely from the twin-image phenomenon, so the following steps are needed to suppress the twin image. The main operation of the reconstruction process is similar to a frequency-domain filter in digital image processing, with the filter transfer function given by Equation (2).

Step 2: Extract the region information of the object from the preliminary estimated object image; this serves as the object-plane constraint. Classic image segmentation algorithms, such as gradient boundary extraction and threshold segmentation, can be used to find the object-plane constraint. Because of the low signal-to-noise ratio (SNR) of the images acquired by the CIS, threshold segmentation is the more reliable choice. The threshold is 0.34 in this manuscript; in other words, the grey value of cell regions on the object plane is usually less than 0.34.

Step 3: The cell region is the C region, and the background is the non-C region. Through an iterative algorithm (Equation (9)), the cell regions converge toward the real image, and the twin-image phenomenon is weakened on the object plane. In the object-plane constraint, D(x, y) is the background image, obtained by the image sensor without cells, and the scaling factor is m = mean(U_r^i(x, y))/mean(D(x, y)).

Step 4: Obtain the new complex amplitude of the image plane by the forward-transfer operation. The phase of the newly calculated complex amplitude is retained, and the amplitude is replaced by the original, known image-plane amplitude (Equation (11)). This process is called the image-plane constraint.
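A compact sketch of Steps 1-4 under our reading of the constraints above; it reuses angular_spectrum_propagate from the previous sketch, and the exact form of the object-plane update (Equation (9)) is an assumption:

```python
import numpy as np

def reconstruct(hologram, background, d, wavelength, pixel_size,
                threshold=0.34, n_iter=6):
    """Iterative twin-image suppression (Steps 1-4).

    hologram:   intensity frame recorded with cells present.
    background: intensity frame D(x, y) recorded without cells (array).
    Returns the estimated complex field on the object plane.
    """
    amp = np.sqrt(hologram)            # measured image-plane amplitude
    u_img = amp.astype(complex)        # Step 1: initial phase = 0
    for _ in range(n_iter):
        # Back-propagate to the object plane.
        u_obj = angular_spectrum_propagate(u_img, -d, wavelength, pixel_size)
        mag = np.abs(u_obj)
        # Step 2: threshold segmentation -> cell support (the C region).
        cells = mag / mag.max() < threshold
        # Step 3: outside C, replace the field with the scaled background,
        # with m = mean(U_r^i)/mean(D) as in the text.
        m = mag.mean() / background.mean()
        u_obj = np.where(cells, u_obj, m * background)
        # Step 4: forward-propagate, keep the new phase, and restore the
        # measured amplitude (image-plane constraint).
        u_img = angular_spectrum_propagate(u_obj, d, wavelength, pixel_size)
        u_img = amp * np.exp(1j * np.angle(u_img))
    return angular_spectrum_propagate(u_img, -d, wavelength, pixel_size)
```

The initial-phase-constraint variant described next would replace the zero-phase initialization in Step 1 with a transmittance-based phase estimate.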
The iteration is completed by repeating the third and fourth steps and converges after 5-6 iterations. To recover the missing phase, the algorithm iterates between the two planes (object plane and image plane) through the amplitude and is made to converge using the object-plane constraint (Equation (9)) and the image-plane constraint (Equation (11)). However, the algorithm converges rapidly only during the first several iterations, after which convergence almost stagnates. Furthermore, the initial phase estimate carries a large error when the distance between the object plane and the image plane deviates in an actual system. Therefore, the classic phase recovery algorithm must be improved for a practical system. This manuscript proposes an initial-phase-constraint algorithm based on the classic algorithm, in which Equation (8) is replaced by Equation (12). In general, there is no linear relationship between the amplitude and phase of a complex number. However, the phase change of near-coherent light passing through a cell is related to the cell's transmittance, and the transmittance is also expressed in the amplitude; therefore, there is a weak correlation between amplitude and phase. Using this property, we can estimate the initial phase of the iteration from the transmittance. With the initial phase constraint, the iteration converges faster, the reconstruction precision is higher, and the anti-jamming ability is stronger.

To test the performance of the algorithm, we used a dyed leucocyte captured with a 20× objective-lens microscope to perform a simulation. Using Equations (1)-(5) to establish a diffractive degradation model, we obtained the diffraction pattern of the leucocyte. To replicate our flow cytometer, we chose the same parameters as the actual system for the simulation: the central wavelength of the light source was 465 nm, the distance between the object plane and the image plane was 0.875 mm, and the pixel size was 2.2 μm. The iterative algorithm without the initial phase constraint was compared to the iterative algorithm with the initial phase constraint, and the result is shown in Figure 5. To quantify the performance of the two methods, we calculated the root-mean-square error (RMSE) between the reconstructed image of the object plane and the original image.
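Equation (13) is the usual definition; for an M × N image it presumably reads

$$\mathrm{RMSE} = \sqrt{\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\bigl[I_{\mathrm{rec}}(x, y) - I_{\mathrm{orig}}(x, y)\bigr]^{2}},$$

where I_rec is the reconstructed object-plane image and I_orig is the original image.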
Finally, the proposed algorithm was used to reconstruct the cell image on the object plane, and the reconstruction was compared with the original image to calculate the RMSE (Equation (13)). Based on the distance between the object plane and the image plane, we conducted two groups of comparative experiments: the first without deviation, and the second with a 20% deviation. The RMSEs of the two methods were calculated in Matlab (Version 2016b, MathWorks, Natick, MA, USA) and are shown in Figure 6. In Figure 6, 'phase constraint' denotes our proposed method, and 'non-phase constraint' denotes the classic method. The proposed method has a faster convergence rate and a lower error, making it more suitable for counting and analyzing cells. As shown in Figures 5 and 6, comparing the two groups across the two methods, the iteration with the initial phase constraint converged faster. In the case of a 20% distance deviation, the proposed method was still able to restore the cell image, whereas the original method could not restore the image effectively, which matters greatly in an actual system. Moreover, when all the parameters were accurate, the proposed method converged faster and the RMSE of the image reconstruction was smaller. The results in Figure 6 show that our proposed method can greatly reduce the time consumed by the image processing algorithm and thus helps enable a real-time implementation of the system.
Finally, we used a frame image of whole blood cells captured by the lens-less flow cytometer to test the computation time. We used Matlab to reconstruct the holographic image on a graphics workstation (Xeon E5-2600, 16 GB DIMM DDR4, Intel, Santa Clara, CA, USA). One iteration took about 2 s with the classic method of phase-iterative reconstruction, and the proposed method took ~0.1 s longer per iteration than the classic method. However, the proposed method needed only 5 iterations, whereas the classic method needed 10 iterations to achieve the same reconstruction quality. As a result, the proposed method took ~12.82 s and the classic one ~24.42 s; in other words, the proposed method reduced the computational time by 48%. In general, the two algorithms have almost the same computational complexity; our algorithm only adds one phase constraint to the first image reconstruction, which costs ~0.19 s.

Blood Cell Analysis Method

In the on-chip flow cytometer, the blood cells flow through the micro-channel above the image sensor, and their holographic diffraction image is projected onto the sensor surface by the near-coherent light source. To reduce the cost and volume of the device, an ordinary blue LED with limited light intensity was used. The light at the plane of the cells is further weakened because it is delivered through a pinhole. Therefore, the exposure time of the image sensor needs to be longer than 400 ms to capture a sufficiently bright hologram. If the blood cells move during the exposure time of the CIS, there is motion blur, as shown in Figure 7a. To solve this problem, we used a pulse injection method, shown in Figure 7b. In Figure 7, t1 is the exposure time and t2 is the injection time. This process can be controlled by a micro-pump. Owing to the high precision of micro-pump control, the injection time and the stationary time of the blood cells can be fixed, so the algorithm can be run with fixed parameters. Our experiments confirmed that accurate cell-image collection and injection of new samples can be ensured in the micro-pump mode. Alternatively, instead of the micro-pump, a hand-push model can be used to further reduce the cost and volume of the device. In the hand-push model, the motion state of the cells in the microfluidic chip is detected by the image processing algorithm. The system acquisition accuracy can generally be guaranteed with t1 > 5 s and t2 > 20 s. The state of cell motion in the microfluidic chip is detected from the RMSE between two successive frames, as sketched below.
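The frame-to-frame RMSE check is straightforward to implement. In the sketch below, get_frame is a hypothetical callable returning short preview exposures, and the tolerance must be tuned to the sensor's noise floor:

```python
import numpy as np

def frame_rmse(a, b):
    """RMSE between two frames; it drops to the noise floor once flow stops."""
    return np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2))

def wait_until_static(get_frame, tol=2.0):
    """Poll preview frames until the channel is static.

    tol is in grey levels and is an assumed value to be calibrated
    against the sensor noise of a cell-free, static channel.
    """
    prev = get_frame()
    while True:
        cur = get_frame()
        if frame_rmse(prev, cur) < tol:
            return
        prev = cur
```

Once wait_until_static returns, the long (>400 ms) exposure t1 can be taken without motion blur.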
In addition, there are two important problems related to cell overlap. The first is cell overlap in the holographic image: as mentioned earlier, the raw image captured by the lens-less platform is a holographic image, so the diffraction image of a cell is ~4 times bigger than its focused image. This inevitable overlap of cell images is resolved by the phase-iterative reconstruction algorithm. The other problem is that overlapping cell positions and the 3D structure of the micro-channel lead to overlapping shadow images, which is difficult to solve with digital image processing algorithms. Therefore, we used a diluted cell sample. According to our experiments, a 1:400 dilution ratio is acceptable for blood cells, and cell overlap is almost impossible at a 1:1000 dilution. Considering the speed of counting, we chose a 1:400 dilution. The microfluidic chip was mounted above the CIS so that we could easily obtain the background image without cells.
We then injected a fluid sample of cells into the micro-channel to record the holograms and reconstruct the focused images of the cells. The location and size of the cells were determined by threshold segmentation of images from which the background interference had been removed. In our experiments, the flow velocity was 100 μL/min. Because of the infinitesimal volume of the micro-channel (0.246 μL), the digital injection pump was able to replace all the cells in the channel in less than 1 s. However, it took ~15 s to 20 s for the cells in the fluid to become static. Fortunately, we were able to use this time to process the cell image: since the CIS has about 5.04 million pixels, the computer took ~16 s to process a full-resolution image. The on-chip flow cytometry capability of this method, together with its ease of use, may offer a highly precise and lower-cost alternative to existing whole-blood, urine, and plankton analysis tools, especially for point-of-care biological and medical tests.

Results and Discussion

To test the performance of our proposed system, we performed an experiment with whole blood cells. In this section, we show the results of focused-image reconstruction for cell counting in the proposed system. Figure 8 shows that the images of the cells were obtained by removing the background from the reconstructed image and that the quantity and size of the sample were determined by the image threshold segmentation algorithm. The system obtained an accurate background image, effectively removed the background effect, accurately located the sample, and greatly improved the counting accuracy. Finally, the cell concentration was calculated from the number of cells and the volume of the micro-channel. The micro-channel was 30 μm in height, 150 μm in width, and ~54.6 mm in length; thus, its volume is easily calculated as ~0.246 μL and its projected area as 8.19 mm².
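The channel volume and the concentration computation work out as follows; the count N = 1,000 in the example is hypothetical:

$$V = 30\,\mu\text{m} \times 150\,\mu\text{m} \times 54.6\,\text{mm} \approx 0.246\,\mu\text{L}, \qquad c = \frac{N}{V} \times k_{\mathrm{dil}},$$

so a frame containing N = 1,000 cells at the 1:400 dilution (k_dil = 400) gives c ≈ (1,000/0.246 μL) × 400 ≈ 1.6 × 10¹² cells/L, within the normal RBC range quoted above.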
Then, we used different concentrations of whole blood samples to test the linearity and accuracy of the proposed method: we diluted seven groups of blood cells to different concentrations and performed the experiment. The results are shown in Figure 9.

Ultimately, we used whole blood to count RBCs and WBCs in order to verify the effectiveness of the cell counting procedure. Following the preparation method described above, diluted whole blood and lysed whole blood samples were each divided twice. In addition, we needed to subtract the concentration of WBCs from the concentration of whole blood cells to obtain the concentration of RBCs. The test results of the proposed system were then compared with those of an automatic blood cell analyser (BC-5180, Mindray, Shenzhen, China), and the results are shown in Table 1. Table 1 shows that the average fractional errors of RBC and WBC counting were 2% and 1.6%, respectively; in other words, the relative error between the proposed system and the whole-blood cell counter was less than 2%. This accuracy indicates the potential for applying the proposed system to the early testing of liquid samples, such as blood, urine, semen, and microbiological samples. In places where large-scale, high-precision instruments are impractical, such as outdoors, on the battlefield, and in underdeveloped areas, the tool proposed here can enable early and real-time detection.

Conclusions

This paper improved the twin-image recovery algorithm in a coaxial (in-line) holography system and increased the convergence speed of the iterative algorithm. In addition, it proposed a flow cytometer based on lens-less holographic microscopy that improved the counting accuracy to 2.3%.
The on-chip flow cytometer based on the pulse injection method can continuously count cells and continuously collect a large number of cell images for subsequent cell analysis. Ultimately, this on-chip flow cytometer is well suited for use in underdeveloped areas and areas far from a laboratory because of its low price and tiny size, and it is fully in line with the current trend toward point-of-care testing (POCT).

Author Contributions: Yuan Fang conceived and designed the experiments; Yuan Fang and Yuquan Jiang performed the experiments; Yuan Fang and Chaoliang Dang analysed the data; Ningmei Yu contributed reagents/materials/analysis tools; Yuan Fang wrote the paper.
Effects of photon irradiation in the presence and absence of hindlimb unloading on the behavioral performance and metabolic pathways in the plasma of Fischer rats

Introduction: The space environment astronauts experience during space missions consists of multiple environmental challenges, including microgravity. In this study, we assessed the behavioral and cognitive performance of male Fischer rats 2 months after sham irradiation or total body irradiation with photons in the absence or presence of simulated microgravity. We analyzed the plasma collected 9 months after sham irradiation or total body irradiation for distinct alterations in metabolic pathways and to determine whether changes to metabolic measures were associated with specific behavioral and cognitive measures.

Methods: A total of 344 male Fischer rats were irradiated with photons (6 MeV; 3, 8, or 10 Gy) in the absence or presence of simulated weightlessness achieved using hindlimb unloading (HU). To identify potential plasma biomarkers of photon radiation exposure or the HU condition for behavioral or cognitive performance, we performed regression analyses.

Results: The behavioral effects of HU on activity levels in an open field, measures of anxiety in an elevated plus maze, and anhedonia in the M&M consumption test were more pronounced than those of photon irradiation. Phenylalanine, tyrosine, and tryptophan metabolism, and phenylalanine metabolism and biosynthesis, showed very strong pathway changes following photon irradiation and HU in animals irradiated with 3 Gy. Here, 29 out of 101 plasma metabolites were associated with 1 out of 13 behavioral measures. In the absence of HU, 22 metabolites were related to behavioral and cognitive measures. In HU animals that were sham-irradiated or irradiated with 8 Gy, one metabolite was related to behavioral and cognitive measures. In HU animals irradiated with 3 Gy, six metabolites were related to behavioral and cognitive measures.

Discussion: These data suggest that it will be possible to develop stable plasma biomarkers of behavioral and cognitive performance following environmental challenges like HU and radiation exposure.
Introduction

Clinical and environmental irradiation can impact brain function (Raber et al., 2004a; Raber et al., 2004b; Rola et al., 2004; Liu et al., 2010; Braby et al., 2019). There are also concerns that other potential exposures to ionizing radiation, via a nuclear accident or terrorist attack, could result in behavioral or cognitive changes in people (Braby et al., 2019). Humans are also exposed to substantially lower doses of different types of ionizing radiation in the space environment, and there are concerns that such exposures to the charged-particle environments of space may impact brain function (Braby et al., 2019). Both galactic cosmic radiation and solar particle events comprise primarily moderate-to-high-energy protons, which are largely sparsely ionizing (Townsend et al., 2006). Terrestrial radiation sources that are sparsely ionizing include higher-energy photons produced by X-ray generators and linear accelerators, as well as gamma rays produced by sources such as ¹³⁷Cs or ⁶⁰Co. Whole-body photon irradiation at moderate doses and a high dose rate (1 and 3 Gy; dose rate of 0.69 Gy/min) induces behavioral alterations, including increased measures of anxiety and cognitive injury, and reduces dopamine and GABA levels in 6-8-week-old C57BL6/J male mice 7 days after exposure (Bekal et al., 2021). Post-fear-learning photon irradiation (4 Gy; dose rate: 1.25 Gy/min) impairs the extinction of contextual and cued fear memory in 1-month-old C57BL/6J male mice 2 weeks after exposure (Olsen et al., 2014) and enhances cued fear memory in 3-month-old C57BL/6J mice (Olsen et al., 2017). Head-only irradiation of 1.5-month-old Long-Evans male rats with photons (2.3 Gy at a dose rate of 1.9 Gy/min) also causes cognitive injury (Davis et al., 2014). Similarly, whole-brain photon irradiation of 1-month-old Sprague-Dawley rats (20 or 40 Gy; 300 cGy/min) causes cognitive injury at 7 and 20 days after exposure and, following exposure to 40 Gy, cognitive injury 2 months after exposure (Liu et al., 2010). Head-only irradiation of 2.5-month-old Long-Evans rats with a 27-Gy fractionated exposure also impaired cognitive function 5 weeks after radiation exposure (Allen et al., 2018). Whole-brain irradiation of 6-month-old Fischer rats with 25 Gy (single dose) also impairs cognitive function (Akiyama et al., 2001). At doses higher than 5 Gy of photons and 0.5 Gy of ⁵⁶Fe (600 MeV/n), the detrimental effects of irradiation on hippocampal function might involve reduced neurogenesis (Parent et al., 1999; Raber et al., 2004a; Raber et al., 2004b; Rola et al., 2008; Ko et al., 2009; Rosi et al., 2012; Roughton et al., 2012; Rivera et al., 2013; Cacao and Cucinotta, 2016; Sweet et al., 2016; Begolly et al., 2017; Whoolery et al., 2017).
Microgravity is another environmental stressor astronauts experience during space missions. Microgravity affects brain structure and causes brain dysfunction (Roberts et al., 2019; Hupfeld et al., 2022). Methods devised to simulate microgravity on Earth include hindlimb unloading (HU) (Ray et al., 2001), and brief periods of microgravity have been achieved using parabolic flights (Oman, 2007). Simulated microgravity affects high-level cognition [for a review, see McNally et al. (2022)]. Simulated microgravity reduces hippocampal levels of pyruvate dehydrogenase, a component of glucose metabolism associated with oxidative stress and brain ischemia, and of the structural protein tubulin (Sarkar et al., 2006), and it impairs the three-dimensional visuospatial tuning and orientation of mice during parabolic flights (Oman, 2007). Irradiation and microgravity might interact in how they affect DNA damage (Moreno-Villanueva et al., 2017) and other injury-related cellular pathways, including oxidative stress, mitochondrial function (Yatagai et al., 2019), cardiovascular health (Patel, 2020), bone function (Krause et al., 2017), and the brain (Raber et al., 2021). Recently, we reported on the complex interaction of simulated microgravity achieved by HU and a simplified field of simulated space radiation (GCRSim) on the behavioral and cognitive performance and metabolic pathways in the plasma and brain of WAG/Rij rats (Raber et al., 2021). Sham-irradiated WAG/Rij rats exposed to simulated microgravity by HU showed impaired hippocampus-dependent spatial habituation learning, but irradiated WAG/Rij rats (1.5 Gy) did not. In addition, rats exposed to 1.5 Gy of GCRSim showed increased depression-like behaviors in the absence, but not in the presence, of simulated microgravity. Specific behavioral measures, such as measures of activity, anxiety, and spatial habituation in the open field and depression-like behavior in the forced swim test, were associated with plasma levels of distinct metabolites 10 months after behavioral testing. The phenylalanine, tyrosine, and tryptophan metabolism pathway was most profoundly affected; this pathway was affected by radiation in both the absence and presence of microgravity in the plasma, and by microgravity by itself.
To determine whether these effects of irradiation and simulated microgravity are specific to WAG/Rij rats, and whether these effects are specific to simulated space radiation, we characterized the behavioral and cognitive performance of male Fischer rats 2 months after sham irradiation or total body irradiation with photons (3, 8, or 10 Gy) in the absence or presence of simulated microgravity delivered by HU. These radiation doses were based on values used in previous rat studies that determined a relative biological effectiveness for perivascular cardiac fibrosis, defined as an increase in the perivascular cardiac collagen content, in the setting of deep space exploration. The relative biological effectiveness represents the ratio of doses required by two different irradiation types (e.g., photons and galactic cosmic radiation) to cause the same level of biological effect. In a previous study in WAG/RijCmcr rats, the increase in perivascular collagen occurred after a much lower total dose of 1.5 Gy from a three-ion beam grouping representative of galactic cosmic radiation, as compared with 10 Gy of photon irradiation (Lenarczyk et al., 2023). The range of radiation doses in the present study was expanded to include 3, 8, and 10 Gy; the 3-Gy dose was selected because previous studies have shown that behavioral performance can be affected at this low dose (Bekal et al., 2021). In addition, we analyzed the plasma collected 9 months after sham irradiation or total body irradiation for distinct alterations in metabolic pathways and to determine whether changes to metabolic measures were associated with specific behavioral and cognitive measures. The results from the present study may be used in future studies to determine the relative biological effectiveness for behavioral performance and metabolic pathways in Fischer rats, comparing our findings with those from behaviorally tested WAG/RijCmcr rats.
Animals and radiation exposures

In this study, 7-8-month-old male Fischer 344 rats (RRID: RGD_734478; n = 97) were shipped to the Wake Forest University School of Medicine, Winston-Salem, NC (see Figure 1 for the experimental design of this study). These rats were selected because previous studies have shown that whole-body exposure to low-linear energy transfer radiation results in the development of cardiovascular disease in a time- and dose-dependent manner. In previous studies, WAG/RijCmcr rats were used, as they are an established model of sensitivity to radiation injury in the heart following exposure to photons. To investigate whether another strain of rats would be either sensitive or resistant to ionizing radiation and, thus, suitable for future flight studies aboard the International Space Station, we selected the Fischer rat. This strain, unlike the WAG/RijCmcr rat, does not continue to gain weight as an adult and so is better suited to be housed in the Rodent Research Hardware System in use aboard the International Space Station. At 9 months of age, the rats were sham-irradiated or irradiated with photons (3, 8, or 10 Gy). Experimental groups of simulated weightlessness using HU alone and HU in combination with photon irradiation were included as part of the paradigm. The HU procedure was performed 5 days prior to sham irradiation or irradiation, as described below, and the rats remained under the HU condition until 25 days after sham irradiation or irradiation. One week after the HU period, the animals were shipped to the Medical College of Wisconsin (MCW). The animals were maintained on a Teklad 8904 diet (Indianapolis, IN) and fed ad libitum during this study. The animals were housed in a reverse-light-cycle room with lights off from 07.30 to 19.30. All behavioral and cognitive testing was performed during the dark period, starting 2 months after irradiation or sham irradiation and over 1 month after the HU procedure. During deep-space exploratory missions, crew members are exposed to space radiation and microgravity for prolonged periods. In a previous study, we characterized the behavioral and cognitive performance of WAG/Rij rats 3 months after sham irradiation or total body irradiation with a simplified 5-ion beam representative of GCR in the absence or presence of HU (Raber et al., 2021). To determine whether behavioral performance would be impacted by these spaceflight stressors at an earlier time point, we selected a 2-month interval after total body irradiation with photons, in the absence or presence of HU, before measurements in Fischer rats were made. As the HU period ended 25 days after sham irradiation or irradiation, the animals had time to recover from HU before testing. All animal procedures were consistent with ARRIVE guidelines and were reviewed and approved by the Institutional Animal Care and Use Committees (IACUC) at the Wake Forest University School of Medicine and MCW. All analyses were performed blinded to treatment; the code was broken once the data were analyzed.

HU

On day 1, the rats were randomly assigned either to serve as full weight-bearing controls (non-HU) or to be hindlimb-unloaded via tail suspension. The HU procedure was performed as previously published (Willey et al., 2016; Raber et al., 2021). Briefly, the rats were lightly anesthetized with isoflurane (2.5%-3.0%; 5% oxygen flow rate). Benzoin tincture was applied to the cleaned lateral surfaces of the tail before placing a free end of adhesive medical traction tape (1-cm width, 30-cm length, 3M, St.
Paul, MN, United States) approximately 1 cm from the base of the tail and then adhering the tape along three-quarters of the length of the tail. The free end of the tape was looped through a catch on a custom plastic ball-bearing swivel and symmetrically adhered to the other lateral side of the tail. Half-inch micropore tape (3M, St. Paul, MN, United States) strips were then secured perpendicularly over the traction tape. A 1.56-inch carabiner clip (Nite Ize, Boulder, CO, United States) was attached to both the swivel and a zipper hook that could be adjusted vertically along an attached perlon cord (STAS Group, BA, Eindhoven, Netherlands). The cord was attached to a pulley (Blue Hawk, Gilbert, AZ, United States) that permitted the entire setup to slide securely over a 5/16-inch-diameter round, solid steel rod (Hillman, Cincinnati, OH, United States). Adjusting the zipper lock along the perlon cord lifted the hind limbs fully off the substrate, with the abdomen and thorax at an approximately 30-degree angle to the horizontal. Sham HU rats received isoflurane but were not tail-suspended. All rats were singly housed and assigned to receive one of the three radiation doses (3, 8, or 10 Gy) or sham irradiation (0 Gy). Wet chow was provided to ensure hydration.

Photon irradiation

All radiation dosimetry procedures were performed and validated as per previous studies (Walb et al., 2015; Willey et al., 2016). The animals were placed on the treatment couch of an Elekta 6-MV photon linear accelerator (Elekta Synergy, Elekta AB, Stockholm, Sweden). Consistent with our earlier radiation studies (Willey et al., 2016; Raber et al., 2021), doses were delivered in two fractions using laterally opposed beams at 0.5 Gy/min. For both the HU and sham HU groups, 4 cm of homogeneous brass shielding was placed between the beam and the hind limbs to permit the repopulation of hematopoietic stem cells; dosimetry was performed with the shielding in place. Following the irradiation procedures, the rats were immediately returned to their normal housing conditions (HU or full weight-bearing).

Performance in the open field in the absence and presence of objects (week 1)

Exploratory behavior, measures of anxiety, and hippocampus-dependent spatial habituation learning were assessed in a black open field (90.49 cm × 90.49 cm × 45.72 cm) on 2 subsequent days, as in an earlier study with middle-aged male WAG/Rij rats (Raber et al., 2021). For open-field testing, each experimental group had 10 rats/group, with the exception of the non-HU 8-Gy and HU 8-Gy groups, which had 9 rats/group, for a total of 78 rats. The animal cage was brought into the testing room. The rat was picked up by gently placing the thumb and forefinger behind the forelegs and wrapping the hand around the stomach; the researcher held the animal against the stomach while transporting it to or from the enclosure. The animal was placed in the middle of the open field for 5 min per day. The researcher left the testing room during behavioral testing. After testing of each rat, the enclosure was cleaned with a 70% isopropyl alcohol solution. The outcome measures analyzed using video tracking (ANY-maze software, Stoelting Co., Wood Dale, IL, United States) were the total distance moved, the time spent freezing, and the percentage of time spent in the more anxiety-provoking center zone (45.25 × 45.25 cm). Freezing was defined as the lack of movement besides respiration.
On day 3, the rats were tested in the open field containing objects, based on a previously reported protocol using 3D-printed objects (Aguilar et al., 2018), with the following modifications. The objects were placed 45.72 cm from the top side of the enclosure and 30.48 cm from the left and right sides of the enclosure, with the distance between the two objects also being 30.48 cm. The rats were placed back in the enclosure containing 2 identical blue objects (squares) or 2 identical red objects (cylinders) for an acquisition trial of 5 min. The objects were counterbalanced for this task: half the rats started with the blue squares, and the other half started with the red cylinders. Then, 60 min following the acquisition trial (A), the rats were placed back in the enclosure with one of the familiar objects replaced by a novel red or blue object for a test trial (T). The outcome measures analyzed using video tracking were the total distance moved and the percentage of time spent in the center zone (45.25 × 45.25 cm) containing the objects. In addition, the discrimination index, defined as the time spent exploring the novel object minus the time spent with the familiar object, divided by the time spent exploring both objects, was analyzed by manual scoring of the digital videos. The light intensity during open-field and object recognition testing was 50 lux. A white noise generator (setting II) and overhead lights were used during the testing.

Anhedonia test (week 2)

In the week following open-field and object recognition testing, anhedonia of the rats was tested using the M&M test (Bolton et al., 2018). Chocolate is safe for rats in small amounts (Carpanini et al., 1978; Tarka Jr et al., 1991). The main ingredient in chocolate that causes issues in cats and dogs is theobromine (Carpanini et al., 1978), but rats can process this chemical as humans do. We recognize that it is possible to administer too much chocolate to a rat, but this would require the equivalent of a 150-g chocolate bar.
In our study, we provided only 10 g of M&Ms for 1 h at a time, and there were perhaps one or two instances where we provided an additional 2 g. The symptoms of theobromine poisoning include diarrhea and increased urination; we did not observe any of these symptoms in the rats. Beyond theobromine poisoning, long-term chocolate consumption also affects mortality (Sun et al., 2022); considering that we provided chocolate access for only 1 h every 3 days, this cannot be considered long-term consumption. On day 1, each rat received 2 M&Ms (multi-colored button-shaped chocolates, Mars Inc., McLean, VA, United States) in the home cage. For anhedonia testing, all experimental groups had 12 rats/group, with the exception of the HU 0-Gy group (n = 11 rats) and the non-HU and HU 3-Gy groups (n = 13 rats each), for a total of 97 rats. Consumption was assessed after 10 min by weighing the remaining M&Ms. On days 2-4, the rats were moved into a testing room separate from the housing room. Each rat was placed in a clean cage without bedding (no available food or water) and allowed to habituate for 1 h. A pre-weighed weighing boat containing 10 g of M&Ms was placed in the center of the cage and attached to the bottom of the cage with tape. The M&M-filled weighing boat was left in the cage for up to 1 h; if the weighing boat was emptied before the end of the hour, a new weighing boat with M&Ms was placed in the cage. After 1 h, the animal was placed back in its home cage. The weighing boat was removed from the testing cage, and any M&M crumbs were removed from the cage and placed in the weighing boat if they had not been soiled by rat feces. The weighing boat containing the M&Ms was weighed again to calculate M&M consumption for that day. The average consumption of M&Ms over the 3 days of testing and the percentage of M&Ms consumed on days 2 and 3 compared to day 1 were calculated.
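One plausible formalization of these measures, with c_k denoting the mass of M&Ms consumed on day k, is

$$\%\,\text{consumption}_k = 100 \times \frac{c_k}{c_1}\ (k = 2, 3), \qquad \text{anhedonia score} = \frac{1}{2}\bigl(\%\,\text{consumption}_2 + \%\,\text{consumption}_3\bigr).$$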
Elevated plus maze (week 3)

Measures of anxiety-like behavior were assessed in the elevated plus maze. The arms were 50 cm long, 10 cm wide, and 40 cm high, and the maze was positioned 50 cm above the floor. The duration of the test was 5 min. For elevated plus maze testing, as for open-field testing, all experimental groups had 10 rats/group, with the exception of the non-HU 8-Gy and HU 8-Gy groups, which had 9 rats/group, for a total of 78 rats. The following distinct outcome measures were obtained using ANY-maze video tracking software: ratios of time spent in, and entries into, the open versus closed arms; activity (time mobile); and time spent in the center of the maze.

Metabolomics analysis of plasma

Nine months following sham irradiation or irradiation, and 7 months after the behavioral and cognitive testing, blood from 72 rats (9 rats/experimental group) was collected in EDTA-containing tubes and centrifuged at 2,000 g for 10 min; the plasma supernatant was collected and stored at −80 °C for plasma metabolomics. Degenerative, radiation-induced cardiovascular disease can follow exposure to conventional photon irradiation (e.g., following breast radiotherapy) and takes many years to develop. Thus, the increased cardiac risk to astronauts would be expected to remain well after they return to Earth, and this is reflected in the design of rat studies of radiation-induced cardiovascular disease. We used a similar experimental timeline for the metabolic pathway studies as for rat studies of radiation-induced cardiovascular disease. These longer experimental timelines are used to determine whether humans exposed to lower, mission-relevant doses of galactic cosmic radiation develop radiation-induced heart disease and changes in metabolic pathways. Metabolites were extracted from 100 μL of plasma. Untargeted metabolomics analysis was performed as described (Kirkwood et al., 2013). Liquid chromatography (LC) was performed using a Shimadzu Nexera system with an Inertsil Phenyl-3 column (4.6 × 150 mm, 100 Å, 5 μm; GL Sciences, Rolling Hills Estates, CA, United States) coupled to a quadrupole time-of-flight (Q-TOF) mass spectrometer (AB SCIEX TripleTOF 5600) operated in the information-dependent MS/MS acquisition mode. The samples were run in random order, and multiple quality control (QC) samples were included; QC samples were generated by pooling 10-μL aliquots from the plasma sample extracts and were analyzed along with the samples. All samples were run in the positive and negative ion modes; in case metabolites were present in both ion modes, the mode with the higher peak value was selected for further analysis. The column temperature was held at 50 °C, and the samples were maintained at 10 °C. The metabolomics data were processed using the MarkerView (SCIEX, Framingham, MA) and PeakView (SCIEX, Framingham, MA) software programs for integrated pathway and statistical analyses, respectively. The identification of metabolites was based on mass error (<30 ppm) and MS/MS fragment ions. The metabolites were also confirmed using the retention time and mass-to-charge (m/z) ratio, and by comparison to authentic standards (±1 min) from an in-house library (IROA Technologies, Bolton, MA), allowing for the streamlined identification of metabolites. LIPID MAPS (Wellcome Trust, United Kingdom), METLIN (Scripps, La Jolla, CA), and the Human Metabolome Database (HMDB) (University of Alberta, Edmonton, Canada) were used for MS and MS/MS matching. MetaboAnalyst pathway analysis (Montreal, Quebec, Canada) was performed as described by Xia and Wishart (2010) and Kirkwood et al.
(2013). Raw metabolite peak values in the plasma were analyzed without log transformation or Pareto scaling. Four distinct comparative analyses were performed: 1) effects of radiation, for each dose compared to sham irradiation, in rats without HU; 2) effects of radiation, for each dose compared to sham irradiation, in rats with HU; 3) effects of HU in sham-irradiated rats; and 4) effects of HU in irradiated rats, for each dose compared to sham irradiation. The pathways were visualized using scatter plots (testing significant features) in MetaboAnalyst, with the "global test" and "relative-betweenness centrality" as parameters for the enrichment method and topological analysis, respectively. In case pathways were revealed to be significantly affected, we used box-and-whisker plots to indicate the direction of change of the affected metabolites in those pathways.

Statistical analyses

All data are presented as the mean ± standard error of the mean (SEM). Behavioral and cognitive data were analyzed using SPSS v.25 software (IBM, Armonk, NY, United States) and the R statistical programming language v.4.3.1. To analyze the behavioral and cognitive performance after sham irradiation or photon irradiation (0, 3, 8, or 10 Gy), we performed analysis of variance (ANOVA) with Dunnett's post hoc tests comparing to one control group. To assess the role of microgravity, we performed ANOVA with radiation (0, 3, 8, or 10 Gy) and HU condition (control or HU) as between-group factors, with repeated measures when appropriate. For some analyses, as indicated and appropriate, two-sided t-tests were used. We set statistical significance at p < 0.05. Greenhouse-Geisser corrections were used if sphericity was shown to be violated (Mauchly's test).

Metabolomics data in plasma were analyzed for the comparisons described above. To identify potential plasma biomarkers of radiation exposure or the HU condition for behavioral or cognitive performance, we used univariate linear regression analyses stratified by radiation exposure and HU condition, with Z-scores for behavioral measures as the dependent variable and metabolite values as the independent variable. We selected those metabolites that were most consistently included in the models and most consistently associated with an outcome (p < 0.05) in the models (at least four times, as there were four activity measures and four anxiety measures: activity in the open field with and without objects in 4 subsequent trials, time spent in the center of the open field containing objects, and time spent exploring the objects in two trials); we limited the analysis to 13 behavioral measures to reduce the risk of a type I error. In the current study, we added an anxiety measure in the elevated plus maze and a depression-like measure (anhedonia); neither test was included in the previous study. Table 1 lists all behavioral measures used for the regression analysis in the current study, and its footnote describes how this list compares with that of our earlier study. MetaboAnalyst software was used to generate impact plots. Graphs were generated using GraphPad software v.8.2.0 (La Jolla, CA, United States). As MetaboAnalyst is not ideal for analyzing amino acid-related pathways, we analyzed them separately, as described under Metabolomics analysis of plasma above.
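A minimal sketch of the stratified univariate screen described above, assuming a tidy per-rat table with dose and hu grouping columns (the column names and the helper are our own):

```python
import pandas as pd
import statsmodels.api as sm

def metabolite_screen(df, behavior_cols, metabolite_cols):
    """Regress behavioral Z-scores on each metabolite, separately within
    each radiation x HU stratum; returns slopes and p-values."""
    rows = []
    for (dose, hu), grp in df.groupby(["dose", "hu"]):
        for beh in behavior_cols:
            z = (grp[beh] - grp[beh].mean()) / grp[beh].std()
            for met in metabolite_cols:
                fit = sm.OLS(z, sm.add_constant(grp[met]),
                             missing="drop").fit()
                rows.append({"dose": dose, "hu": hu, "behavior": beh,
                             "metabolite": met, "slope": fit.params[met],
                             "p_value": fit.pvalues[met]})
    return pd.DataFrame(rows)

# Metabolites associated with an outcome (p < 0.05) in at least four
# models would then be retained, mirroring the selection rule above.
```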
Effects of HU and radiation on the performance in the open field without and with objects

When activity levels were analyzed for 2 subsequent days in the open field, an effect of HU was observed [F(1, 89) = 14.476, p < 0.001], with HU-treated animals moving less than non-HU animals (Figure 2A). An effect of day was also observed, with all groups showing spatial habituation learning and moving less on day 2 than on day 1 [F(1, 89) = 174.910, p < 0.001]. In addition, HU affected the freezing levels in the open field [F(1, 89) = 5.106, p = 0.026], with higher freezing levels observed in HU animals than in non-HU animals (Figure 2B). An effect of day was also observed, with all groups freezing more on day 2 than on day 1 [F(1, 89) = 313.051, p < 0.001]. No effect of radiation was observed on the activity or freezing levels in the open field.

Next, we assessed the time spent in the more anxiety-provoking center of the open field. Neither HU [F(1, 89) = 0.656, p = 0.420] nor radiation [F(3, 89) = 0.418, p = 0.741] affected the time spent in the center of the open field (Supplementary Figure S1).

Next, the performance in the open field containing objects was assessed. No significant effect of HU on activity levels [F(1, 70) = 3.335, p = 0.072] (Figure 3A) and no significant radiation × trial interaction [F(3, 70) = 2.434, p = 0.072] were observed. When the non-HU group was analyzed, a radiation × trial interaction [F(3, 70) = 5.991, p = 0.002] and an effect of trial [F(1, 35) = 15.818, p < 0.001] were observed. Although there was no overall effect of radiation under the non-HU condition, the animals irradiated with 8 Gy moved less than the sham-irradiated animals (p = 0.0255, Dunnett's test). In addition, under the non-HU condition, only sham-irradiated rats moved more in trial 2 than in trial 1 (p = 0.001). In contrast, under the HU condition, only an effect of trial [F(1, 35) = 14.006, p = 0.001] was observed, and animals irradiated with 3 Gy (p = 0.044) or 8 Gy (p = 0.038) moved more in trial 2 than in trial 1. No significant effect of HU was observed on the time spent exploring the objects, although HU animals tended to spend less time exploring the objects than non-HU animals [F(1, 70) = 3.412, p = 0.069] (Figure 3B).

We also analyzed the time spent in the center of the open field containing the objects. No significant effect of HU [F(1, 69) = 3.160, p = 0.080] or radiation [F(3, 69) = 1.118, p = 0.348] was observed.

Finally, we analyzed the percentage of time spent with the familiar and novel objects. As the animals spent very little time exploring the objects and 38% of the animals did not explore the objects at all in the test with the novel object, object recognition could not be reliably assessed. None of the groups had a positive discrimination index significantly different from 0.
Anhedonia test

First, the M&M consumption over the 3 days of the test was analyzed. An HU × day interaction [F(2, 178) = 3.944, p = 0.021] and an effect of day [F(2, 178) = 55.933, p < 0.001] were observed (Figure 4A). No significant effect of HU [F(1, 89) = 2.777, p = 0.099] was observed. Under the non-HU condition, sham-irradiated rats consumed more M&Ms on day 3 than on day 1 (p = 0.0367); no significant change in M&M consumption was observed on day 2 compared to day 1 (p = 0.057). Under the non-HU condition, the rats irradiated with 8 Gy consumed more M&Ms on day 3 than on day 1 (p = 0.0336). No difference in M&M consumption between days 1 and 3 was observed in rats irradiated with 3 or 10 Gy. Under the HU condition, sham-irradiated rats (p = 0.0396) and those irradiated with 8 Gy (p = 0.0421) consumed more M&Ms on day 3 than on day 1. As in the non-HU condition, no difference in M&M consumption between days 1 and 3 was observed in rats irradiated with 3 or 10 Gy under the HU condition.

Next, the percentage of consumption on days 2 and 3 compared to day 1 was analyzed. An effect of radiation [F(3, 86) = 2.779, p = 0.046] was observed, with a lower percentage of M&M consumption in irradiated rats than in sham-irradiated rats (Figure 4B). An effect of day [F(1, 86) = 14.041, p < 0.001] and an HU × day interaction [F(1, 86) = 5.133, p = 0.026] were also observed. Under the non-HU condition, a trend toward a lower percentage of consumption in animals irradiated with 3 Gy than in sham-irradiated rats did not reach significance (p = 0.0521). Under the HU condition, in rats irradiated with 3 Gy, the percentage of M&M consumption on day 3 was higher than that on day 2 (p = 0.006), and there was a trend toward a higher percentage of M&M consumption on day 3 than on day 2 in rats irradiated with 8 Gy, but it did not reach significance (p = 0.059).

Elevated plus maze

An effect of HU on the measures of anxiety in the elevated plus maze was observed. The ratio of the time spent in the open arms was lower in HU rats than in non-HU animals [F(1, 89) = 6.811, p = 0.011] (Figure 5A). The ratio entry measure was not affected by HU or radiation (Supplementary Figure S2).

No significant effect of HU was observed on activity levels in the elevated plus maze [F(1, 89) = 3.824, p = 0.054] (Figure 5B). Under the non-HU condition, the rats irradiated with 8 Gy did not spend significantly less time mobile than sham-irradiated rats (p = 0.093). An effect of HU on the time spent in the center of the elevated plus maze [F(1, 89) = 7.228, p = 0.009] was observed, with HU animals spending less time in the center than non-HU animals (Figure 5C).

Metabolomics analysis of plasma

Analysis of the effects of radiation on the plasma in the absence of HU showed that the phenylalanine, tyrosine, and tryptophan biosynthesis pathway was affected, comparing sham irradiation versus irradiation with 10 Gy (Figure 6A). With the exception of phosphatidate, the plasma metabolite levels were increased by 10-Gy radiation (Figure 6B).
Analysis of the effects of radiation on the plasma in the presence of HU revealed several pathways. Within the HU groups, comparing sham irradiation versus 3-Gy irradiation, the phenylalanine, tyrosine, and tryptophan biosynthesis pathways were affected (Figure 7A). The individual metabolites in the pathway, which were increased following irradiation, are shown in Figure 7B. The phenylalanine, tyrosine, and tryptophan biosynthesis pathways were also affected comparing sham irradiation versus 8-Gy irradiation (Figure 7C). The individual metabolites in the pathway were also increased following irradiation (Figure 7D). The arginine biosynthesis and glutamine and glutamate metabolism pathways were also affected but with a much lower impact. The phenylalanine, tyrosine, and tryptophan biosynthesis pathways were also most clearly affected comparing sham irradiation versus 10-Gy irradiation (Figure 7E). The individual metabolites in the pathway were also increased following irradiation (Figure 7F). The alanine, aspartate, and glutamate metabolism pathway was also affected. Similar to sham irradiation versus 8-Gy irradiation, the arginine biosynthesis and glutamine and glutamate metabolism pathways were also affected.

FIGURE 4 (A) M&M consumption over the 3 days of the test. An HU × day interaction [F(2, 178) = 3.944, p = 0.021] and an effect of day [F(2, 178) = 55.933, p < 0.001] were observed. A trend toward an effect of HU was observed, but it did not reach significance (p = 0.099). Under the non-HU condition, sham-irradiated rats showed more M&M consumption on day 3 than on day 1. A trend toward more M&M consumption on day 2 than on day 1 was observed, but it did not reach significance (p = 0.057). Under the HU condition, sham-irradiated rats and those irradiated with 8 Gy showed more M&M consumption on day 3 than on day 1. *p < 0.05. (B) Percentage of consumption on days 2 and 3 compared to day 1. An effect of radiation was observed, with a lower percentage of M&M consumption in irradiated rats than in sham-irradiated rats. *p < 0.05 versus sham irradiation. An effect of day [F(1, 86) = 14.041, p < 0.001] and an HU × day interaction [F(1, 86) = 5.133, p = 0.026] were observed. Under the non-HU condition, a trend toward a lower percentage of consumption was observed in animals irradiated with 3 Gy than in sham-irradiated rats, but it did not reach significance (p = 0.0521). Under the HU condition, in rats irradiated with 3 Gy, the percentage of M&M consumption on day 3 was higher than that on day 2. **p = 0.006. A trend toward a higher percentage of M&M consumption was observed on day 3 than on day 2 in rats irradiated with 8 Gy, but it did not reach significance (p = 0.059).
We next assessed the effects of HU within each radiation dose. In sham-irradiated animals, no pathway was affected, but plasma levels of leucine and isoleucine were increased under the HU condition (Figure 8A). In animals irradiated with 3 Gy, the phenylalanine, tyrosine, and tryptophan biosynthesis pathways were affected by HU (Figure 8B). Except for glucose-6-phosphate and glucosamine, the metabolites were decreased under the HU condition (Figure 8C). In animals irradiated with 8 Gy, the pentose and glucuronate interconversion pathway was affected to a certain extent (Figure 8D), with reduced plasma levels of glucuronate and glutamate (Figure 8E). In animals irradiated with 10 Gy, the alanine, aspartate, and glutamate metabolism pathway was affected (Figure 8F), and increased plasma levels of 3-methylhistidine and citrate were observed under the HU condition (Figure 8G).

Regression analysis of individual metabolites and select behavioral or cognitive measures

We used univariate linear regression analyses stratified by radiation exposure and HU condition, with Z-scores for behavioral measures as the dependent variable and the metabolite value as the independent variable, to identify potential plasma biomarkers of radiation exposure or the HU condition on behavioral or cognitive performance. We selected those metabolites that were most consistently included in the models. Table 1 lists all behavioral measures used for the regression analysis in the current study, and the footnote describes the comparison of this list with that of our earlier study.

Table 2 shows the 27 plasma metabolites that were most consistently included in the models (at least four times) and which of the 13 behavioral measures they were related to. Most associations between metabolites and behavioral measures were positive, as indicated in green in Table 2. Two metabolites were negatively associated with behavioral measures (indole-3-acetate and trimethyllysine), as indicated in red in Table 2. Two metabolites, indoxyl sulfate and lauric acid, were both positively and negatively associated with behavioral measures, as indicated in blue in Table 2. For indoxyl sulfate, only one negative association was observed (distance moved on day 2 of open-field testing), while for lauric acid, only one positive association was observed (ratio of time spent in the open arms in the elevated plus maze).

In animals that received HU, eight metabolites were related to behavioral and cognitive measures. In HU animals that were sham-irradiated, only one metabolite, indole-3-acetate, was related to behavioral and cognitive measures. In HU animals irradiated with 3 Gy, six metabolites were related to behavioral and cognitive measures, namely, erucic acid, gluconic acid, heptadecanoate, lysoPA [18-1 (9z)-0-0], palmitate, and trimethyl lysine. In the absence of HU, 18 metabolites were related to behavioral and cognitive measures. Remarkably, no overlap in metabolites related to behavioral and cognitive measures was observed between animals that received HU and those that did not. In sham-irradiated animals that did not receive HU, 11 metabolites were related to behavioral and cognitive measures: asparagine, cis-4-hydroxy-d-proline, galactarate, phosphorylcholine, proline, saccharic acid (glucaric acid), serine, succinate, 3-phosphoglyceric acid, 4-hydroxy-l-proline, and 5-aminolevulinic acid. All metabolites that were related to behavioral measures were observed only under one HU condition and in animals that received one radiation dose.
In the absence of HU, in animals irradiated with 3 Gy, three metabolites were related to behavioral and cognitive measures: homoserine, methionine, and 1-aminocyclopropanecarboxylic acid. In the absence of HU, in animals irradiated with 8 Gy, the metabolites related to behavioral and cognitive measures were indoxyl sulfate, lauric acid, malate, and 5-methylcytosine.

Discussion

In this study, we characterized the behavioral and cognitive performance of male Fischer rats after sham irradiation or total body irradiation with photons, in the absence and presence of HU. In general, the effects of HU were more pronounced than those of photon irradiation. In the open-field test, HU-treated animals moved less and froze more than non-HU animals. In the open field containing objects, animals irradiated with 8 Gy moved less than the sham-irradiated animals, and only sham-irradiated rats moved more on day 2 than on day 1. In contrast, under the HU condition, animals irradiated with 3 or 8 Gy moved more on day 2 than on day 1. The analysis of anhedonia in the M&M test showed that non-HU sham-irradiated rats and rats irradiated with 8 Gy consumed more M&Ms on day 3 than on day 1. The analysis of the percentage of consumption on days 2 and 3 compared to day 1 showed an effect of radiation, with a lower percentage of M&M consumption in irradiated rats than in sham-irradiated rats. In addition, HU rats irradiated with 3 Gy showed a higher percentage of M&M consumption on day 3 than on day 2. An effect of HU on the measures of anxiety in the elevated plus maze was observed, with the ratio of time spent in the open arms and the time spent in the center of the elevated plus maze being lower in HU animals than in non-HU animals. Analysis of the effects of radiation on the plasma in the absence of HU showed that the phenylalanine, tyrosine, and tryptophan biosynthesis pathway was affected comparing sham irradiation versus 10-Gy irradiation; most plasma metabolite levels (except phosphatidate) were increased by 10-Gy radiation. The analysis of the effects of radiation on the plasma in the presence of HU showed that, comparing sham irradiation versus 3-, 8-, or 10-Gy irradiation, the phenylalanine, tyrosine, and tryptophan biosynthesis pathways were most affected, with an increase in metabolites in this pathway. The arginine biosynthesis and glutamine and glutamate metabolism pathways were also affected but with a much lower impact. The analysis of the effects of HU within each
radiation dose showed that in animals irradiated with 3 Gy, the phenylalanine, tyrosine, and tryptophan biosynthesis pathways were affected, with lower metabolite levels under the HU condition (except for glucose-6-phosphate and glucosamine). In animals irradiated with 8 Gy, the pentose and glucuronate interconversion pathway was affected by HU to a certain extent; plasma levels of glucuronate and glutamate were reduced. In animals irradiated with 10 Gy, the alanine, aspartate, and glutamate metabolism pathway was affected; plasma levels of 3-methylhistidine and citrate were increased under the HU condition. These data show that pathways of phenylalanine, tyrosine, and tryptophan metabolism and biosynthesis are substantially changed following photon irradiation, and following HU in animals irradiated with 3 Gy. Remarkably, the phenylalanine, tyrosine, and tryptophan metabolism pathway was also the pathway most affected by simulated space radiation and by HU in the plasma, hippocampus, and cortex of WAG/Rij rats (Raber et al., 2021). Tryptophan is a precursor of the indolamine neurotransmitter serotonin, and tyrosine is a precursor of the catecholamine neurotransmitters dopamine, norepinephrine, and epinephrine. Tryptophan and tyrosine play a role in executive function (Aquili, 2020), which is negatively affected by X-ray irradiation (Hienz et al., 2008; Davis et al., 2014; Tang et al., 2022).

In HU animals, plasma tryptophan levels were increased following 3- and 8-Gy irradiation. A functional deficit of tryptophan and disorders involving the hypothalamic-pituitary-adrenal axis are linked to the pathophysiology of severe depression (Begolly et al., 2017). Consistent with this notion, dexamethasone suppressed the availability of tryptophan, and major depressed patients with melancholia have lower tryptophan levels than those with minor depression (Begolly et al., 2017). In addition, tryptophan is the precursor of serotonin, and increased dietary tryptophan levels suppressed post-stress plasma glucocorticoid levels (Hoglund et al., 2019). Therefore, it is possible that the increased plasma tryptophan levels in HU animals following 3- and 8-Gy irradiation are a compensatory response and contribute to the increased activity of these two groups on day 2 in the open field compared to day 1, as observed in non-HU sham-irradiated animals. As HU was started 5 days prior to the radiation exposure, the HU challenge might have served as a preconditioning challenge, mitigating the effects of these two radiation doses. As this was not observed in HU animals irradiated with 10 Gy, this dose might be too high to be compensated for by HU.

In the current study, plasma was analyzed 9 months following sham irradiation or irradiation in the absence or presence of HU and 7 months after the behavioral and cognitive testing. Together with our earlier study in WAG/Rij rats following simulated space radiation in the absence and presence of HU, the metabolic pathways affected by HU and, to a lesser extent, by photon irradiation, and the relationships between the behavioral and cognitive measures and individual metabolite levels in plasma, support that it is feasible to develop stable long-term biomarkers of the response to HU and of behavioral and cognitive performance.
HU had detrimental effects on the activity and freezing levels in the open field and on the measures of anxiety in the elevated plus maze. These data are consistent with detrimental effects of simulated microgravity on the 3-dimensional visuospatial tuning and orientation of mice (Oman, 2007). In the current study, HU sham-irradiated Fischer rats showed spatial habituation learning in the open field and moved less on day 2 than on day 1. In contrast, HU sham-irradiated WAG/Rij rats showed impaired spatial habituation learning (Raber et al., 2021). These data suggest that Fischer rats might be less susceptible to HU than WAG/Rij rats. However, we recognize that, as the Fischer rats received HU at Wake Forest University while the WAG/Rij rats received HU at the Brookhaven National Laboratory, we cannot exclude that environmental differences in the housing of the animals at the two institutions, or differences in shipping the animals to and from these institutions, might have contributed to these divergent findings.

In general, the measures of anxiety of the rats in the elevated plus maze were high (ratio times between 0.01 and 0.08). Similarly, the rats spent little time in the center of the open field during the second open-field test (between 10 and 52 s). These relatively high measures of anxiety might have contributed to only 38% of the rats exploring the objects. As the objects used in this study are routinely used successfully for testing object recognition in mice, it seems unlikely that the objects were anxiety-provoking and more likely that the rats had, in general, elevated anxiety levels.

Indole acetic acid is an intestinal bacterium-derived tryptophan metabolite. Injecting indole into the cecum of rats decreased activity levels, and increasing indole in germ-free rats by colonizing them with the indole-producing Escherichia coli enhanced anxiety levels in the open field and elevated plus maze (Jaglin et al., 2018). Consistent with these data, in sham-irradiated rats exposed to the HU condition in the current study, indole-3-acetate levels were negatively associated with activity measures (activity levels during the second open-field test and the first test in the open field containing objects) and with reduced anxiety measures (time spent in the center of the first open-field test containing objects and time spent exploring objects in that test). However, in a mouse model of unpredictable chronic mild stress (UCMS), administration of indole acetic acid (50 mg/kg for 5 weeks) reduced anxiety-like behavior, increased the expression of brain-derived neurotrophic factor, and reversed the UCMS-induced imbalance of microbial indole metabolites in the colon (Chen et al., 2022). These data suggest that while HU is an environmental stressor, it was a transient one, which may be why the associations of these metabolites with behavioral measures were negative.
In humans, trimethyl lysine was identified as a strong predictor of the risk of developing cardiovascular disease (Li et al., 2018). Trimethyl lysine is a precursor of carnitine; a buildup of this metabolite could indicate reduced beta-oxidation and oxidative phosphorylation and could lead to fatigue and reduced activity. Consistent with this, the levels of trimethyl lysine in HU animals irradiated with 3 Gy were negatively associated with measures of activity (first open-field test without objects and first open-field test with objects) and of reduced anxiety (time spent in the center of the first and second open-field tests containing objects, and time spent exploring objects in these two tests) in this study.

The mono-unsaturated omega-9 fatty acid erucic acid (3 mg/kg) enhanced cognitive performance in drug-naïve mice and ameliorated scopolamine-induced memory impairments, associated with increased phosphorylation of polyphosphatidylinositide 3-kinase (PI3K), protein kinase C-zeta, extracellular signal-regulated kinase, cAMP-response element-binding protein, and protein kinase B in the hippocampus (Kim and Al, 2016). Erucic acid showed protective effects in the brain and the intestines of a rotenone-treated zebrafish model of Parkinson's disease and improved activity levels (Ünal and Al, 2022). Other beneficial effects of erucic acid include antibacterial and antiviral activities, as well as cytotoxic anticancer activity (Galanthy et al., 2023). In addition, levels of erucic acid in HU animals irradiated with 3 Gy were positively associated with the change in anhedonia compared to day 1, the activity levels during the second open-field test without objects and the time spent in the center, and the activity levels and time spent exploring objects in the first open-field test containing objects. However, animal studies indicate that erucic acid might also cause cardiotoxicity [for a review, see Galanthy et al. (2023)].

Gluconic acid, which, once fermented to butyrate in the gut, can improve gut function, increased feed intake and improved feed utilization in weaned piglets (Michiels et al., 2023). Considering the gut-brain axis (Willyard, 2021), it is remarkable that gluconic acid levels in HU animals irradiated with 3 Gy were positively associated with activity (activity levels in the first open-field test without objects and first open-field test with objects) and reduced anxiety-like behavior (time spent in the center and time spent exploring objects in the first and second open-field tests containing objects) measures in the current study.

LysoPA 18:1 reduced activity levels in the open field, increased anxiety levels in the elevated plus maze, and increased depression-like behavior in the forced swim test (Castilla-Ortega et al., 2014). In contrast to this result, lysoPA 18:1 levels in HU animals irradiated with 3 Gy were positively associated with activity (distance moved in the first and second open-field tests containing objects) and reduced anxiety measures (time spent in the center of the first and second open-field tests containing objects and time spent exploring the objects in the second open-field test containing objects) in the current study.

In the current study, palmitate levels in HU animals irradiated with 3 Gy were positively associated with activity (activity levels in the first and second open-field tests containing objects) and reduced anxiety-like behavior (time spent in the center of the open field and exploring objects in the first and second open-field tests containing objects) measures. Consistent with this result, palmitate improved cognitive performance in the T-maze and sensorimotor function in the rotarod test (Moazedi et al., 2007). However, palmitate reduced activity levels, increased anxiety levels in the elevated zero maze, and reduced the time spent exploring a novel object 24 h after treatment (Moon et al., 2014). In addition, in the 6-hydroxydopamine model of PD, palmitate levels in plasma were increased and correlated with motor dysfunction (Shah et al., 2019). Furthermore, increased serum levels of palmitate promote insulin resistance and inhibit the clock function in hepatocytes by inhibiting the nicotinamide adenine dinucleotide (NAD)-dependent deacetylase sirtuin 1 (SIRT1) (Tong et al., 2015).

In sham-irradiated animals without HU, plasma proline levels were positively associated with activity (activity levels during the second open-field test without and with objects) and reduced anxiety measures (time spent in the center of the first and second open-field tests without objects and the second open-field test with objects, and time spent exploring the objects in the second open-field test with objects). However, plasma proline levels in humans and preclinical models have been linked to depression, and proline supplementation in mice exacerbated depression (Mayneris-Perxachs et al., 2022).
L-serine reduced the measures of anxiety in the open field in wild-type and growth hormone-releasing hormone-knockout mice and mitigated cognitive impairments in the novel object-recognition and sociability tests in growth hormone-releasing hormone-knockout mice (Zhang et al., 2023). L-serine was also associated with improved EEGs and reduced seizures in patients with GRIN-related disorders (mutations in genes encoding N-methyl-D-aspartate (NMDA) receptor subunits) (Krey et al., 2022). In PD patients, D-serine adjuvant treatment reduced behavioral and motor symptoms (Gelfin et al., 2012). Consistent with these beneficial effects, serine levels in sham-irradiated animals without HU were positively associated with the activity levels and time spent in the center of the second open-field test without objects, and with the activity levels and time spent exploring objects in the first open-field test containing objects.

Chronic succinate feeding enhanced the endurance exercise of mice (Xu et al., 2022). Consistent with this result, succinate levels in sham-irradiated animals without HU were positively associated with activity measures.

5-Aminolevulinic acid inhibited oxidative stress and reduced autistic-like behaviors in prenatal valproic acid-exposed rats (Matsuo et al., 2020). Consistent with this result, levels of 5-aminolevulinic acid were positively associated with activity (activity levels during the second open-field test without objects and the first open-field test with objects) and reduced anxiety (time spent in the center and time spent exploring objects in the first open-field test with objects) measures.

1-Aminocyclopropanecarboxylic acid inhibited memory fading in naïve rats and prevented amnesia induced by ketamine or phencyclidine (Popik et al., 2014). 1-Aminocyclopropanecarboxylic acid also reduced the measures of anxiety in the elevated plus maze (Trullas et al., 1989). Consistent with these results, levels of 1-aminocyclopropanecarboxylic acid in animals without HU irradiated with 3 Gy were positively associated with activity (activity levels during the first open-field test containing objects) and reduced anxiety (time spent in the center of the second open-field test without objects and the first open-field test with objects, and time spent exploring objects during the first open-field test with objects) measures. We recognize that 1-aminocyclopropanecarboxylic acid is a plant or microbial metabolite and might be related to the rodent diet.

The intraperitoneal administration of indoxyl sulfate in mice with unilateral nephrectomy resulted in increased depression-like behavior in the forced swim and tail suspension tests, increased measures of anxiety in the open field, and cognitive impairments in the water maze (Sun et al., 2021). Chronic administration of indoxyl sulfate for 24 days in drinking water (200 mg/kg) reduced activity levels in the open field, elevated plus maze, and chimney and splash tests, and reduced spatial memory in the T-maze test (Karbowska et al., 2020). Consistent with these animal studies, indoxyl sulfate levels were increased in patients with early-stage chronic kidney disease and were associated with poor executive function (Yeh et al., 2016). In contrast to these results, in animals without HU irradiated with 8 Gy, indoxyl sulfate levels were positively associated with the change in anhedonia compared to day 1, the time spent in the center of the second open-field test containing objects, the time spent exploring objects in that test, and the discrimination index, a measure of cognitive performance.
Lauric acid improved behavioral performance, motor function, food intake, and weight in the haloperidol-induced PD rat model (Zaidi et al., 2020). In the current study, lauric acid levels were positively associated with a reduced anxiety measure in the elevated plus maze (ratio of time spent in the open arms) but negatively associated with the change in anhedonia compared to day 1 and with the activity levels in the first and second open-field tests containing objects.

This study had several limitations: 1) brain tissues could not be collected and analyzed because of COVID-19-related modified operations and travel restrictions; 2) plasma was not analyzed at earlier time points closer to the behavioral and cognitive testing; 3) the forced swim test could not be used to assess depression-like behavior in rats, making it harder to compare depression-like behavior between this study and our previous study; and 4) no female rats were included in this study.

In summary, the detrimental effects of HU on the behavioral and cognitive performance and on metabolic pathways illustrate the importance of developing mitigators to reduce the effects of microgravity on the brain function of astronauts during and following space missions. The metabolomics data, including the associations of specific plasma metabolites, collected 7 months after behavioral testing, with behavioral measures, suggest that it will be possible to develop stable plasma biomarkers of HU and of behavioral and cognitive performance that could be used for developing and testing such mitigators. Five metabolites were positively related to behavioral and cognitive measures in HU animals irradiated with 3 Gy: erucic acid, gluconic acid, heptadecanoate, lysoPA [18-1 (9z)-0-0], and palmitate. Trimethyl lysine was the only metabolite negatively related to behavioral and cognitive performance in the HU animals irradiated with 3 Gy. These data suggest that these six metabolites may be of use in mitigating HU- and radiation-induced behavioral alterations and cognitive injury.

The study was conducted in accordance with the local legislation and institutional requirements at the Medical College of Wisconsin (MCW), Wake Forest University, and OHSU.

TABLE 1 Behavioral measures included for the regression analysis with individual metabolites.a

Anhedonia: Mean percentage change in consumption on days 2 and 3 compared to day 1.
Activity: Distance moved on day 1 of the open-field test.
Activity: Distance moved on day 2 of the open-field test.
Activity: Distance moved in the object-recognition training trial.
Activity: Distance moved in the object-recognition test trial.
Anxiety: Percentage of time spent in the center of the open field on day 1 of open-field testing.
Anxiety: Percentage of time spent in the center of the open field on day 2 of open-field testing.
Anxiety: Ratio of time spent in the open arms/(open + closed arms).
Cognition: Total duration spent exploring the objects during the training trial.
Cognition: Total duration spent exploring the objects during the test trial.
Cognition: Percentage of time spent in the center of the open field during the object-recognition training trial.
Cognition: Percentage of time spent in the center of the open field during the object-recognition test trial.
Cognition: Discrimination index in the object-recognition test.

a The same measures were used in the current study and our earlier study in WAG/Rij rats, with the following exceptions: 1) in the current study, a measure of anxiety in the elevated plus maze was included; the elevated plus maze was not included in the earlier study; 2) in the current study, an anhedonia test was used instead of the forced swim test that was included in the earlier study; and 3) in the current study, there were two trials in the object recognition test (a training trial and a testing trial), whereas in the previous study there were three trials (habituation, acquisition, and testing trials).
FIGURE 2 Performance in the open field in the absence of objects. The rats were tested for exploratory behavior in the open field on 2 subsequent days. The trials lasted 5 min and were conducted 24 h apart. (A) HU-treated animals moved less than non-HU animals. ***p < 0.001. An effect of day was observed, with all groups showing spatial habituation learning and moving less on day 2 than on day 1. (B) Freezing levels were higher in HU animals than in non-HU animals. *p < 0.05. An effect of day was observed, with all groups freezing more on day 2 than on day 1. No effect of radiation was observed on activity or freezing levels in the open field.

FIGURE 3 (A) When the performance in the open field containing objects was assessed, a trend toward an effect of HU on activity levels was observed, with lower activity levels in HU animals than in non-HU animals. #p = 0.072. In the non-HU group, a radiation × trial interaction [F(3, 70) = 5.991, p = 0.002] and an effect of trial [F(1, 35) = 15.818, p < 0.001] were observed. The animals irradiated with 8 Gy moved less than the sham-irradiated animals (°p < 0.05, Dunnett's test). In addition, under the non-HU condition, sham-irradiated rats moved more in trial 2 than in trial 1. **p = 0.001. Under the HU condition, animals irradiated with 3 or 8 Gy moved more in trial 2 than in trial 1. *p < 0.05. (B) A trend toward an effect of HU on the time spent exploring the objects was observed, with HU animals spending less time exploring the objects than non-HU animals. #p = 0.069.

FIGURE 5 Measures of anxiety in the elevated plus maze. (A) The ratio of time spent in the open arms was lower in HU animals than in non-HU animals. *p < 0.05. (B) A trend toward HU animals spending less time mobile than non-HU animals was observed. #p = 0.05. Under the non-HU condition, a trend toward rats irradiated with 8 Gy spending less time mobile than sham-irradiated rats was observed. @p = 0.093. (C) HU animals spent less time in the center of the elevated plus maze than non-HU animals. **p = 0.009.

FIGURE 6 Effects of radiation on metabolic pathways in the plasma in the absence of HU. (A) Analysis of the effects of 10-Gy radiation on the plasma in the absence of HU showed that the phenylalanine, tyrosine, and tryptophan biosynthesis pathway was affected. Affected pathways with a lower pathway impact were B) glycerolipid metabolism, p-value 0.010842; C) glycerophospholipid metabolism, p-value 0.012451; D) pantothenate and CoA biosynthesis, p-value 0.019314; E) beta-alanine metabolism, p-value 0.02063; F) glycine, serine, and threonine metabolism, p-value 0.022268; and two pathways with borderline significance: H) tyrosine metabolism, p-value 0.051856; and K) cysteine and methionine metabolism, p-value 0.06513. (B) With the exception of phosphatidate, the plasma metabolite levels were increased by 10-Gy radiation. The metabolome view figures contain all the matched pathways (the metabolome) arranged by p-values (generated as part of the pathway enrichment analysis) on the Y-axis and pathway impact values (generated as part of the pathway topology analysis) on the X-axis. The node color is based on the p-value, and the node radius is determined by the pathway impact value.
FIGURE 7 Effects of radiation on metabolic pathways in the plasma in the presence of HU. (A) Within the HU groups, comparing sham irradiation versus 3-Gy irradiation, the phenylalanine, tyrosine, and tryptophan biosynthesis pathways were affected. Affected pathways with a lower pathway impact were A) porphyrin and chlorophyll metabolism, p-value 0.011605; D) tyrosine metabolism, p-value 0.020236; E) tryptophan metabolism, p-value 0.020423; F) glutathione metabolism, p-value 0.024727; G) cysteine and methionine metabolism, p-value 0.027695; and H) pyruvate metabolism, p-value 0.049762. (B) Plasma levels of the individual metabolites were increased following irradiation. (C) Within the HU groups, comparing sham irradiation versus 8-Gy irradiation, the phenylalanine, tyrosine, and tryptophan biosynthesis pathways were also affected. The arginine biosynthesis and glutamine and glutamate metabolism pathways were also affected but with a much lower impact. Affected pathways with a lower pathway impact were A) porphyrin and chlorophyll metabolism, p-value 0.0032677; B) valine, leucine, and isoleucine biosynthesis, p-value 0.021871; C) cysteine and methionine metabolism, p-value 0.023914; D) pantothenate and CoA biosynthesis, p-value 0.024702; E) glutathione metabolism, p-value 0.029465; F) ubiquinone and other terpenoid-quinone biosynthesis, p-value 0.02985; G) tyrosine metabolism, p-value 0.02985; H) aminoacyl-tRNA biosynthesis, p-value 0.030508; I) steroid biosynthesis, p-value 0.037586; L) purine metabolism, p-value 0.042312; M) beta-alanine metabolism, p-value 0.044738; and N) glycine, serine, and threonine metabolism, p-value 0.049054. (D) The individual metabolites in the pathway were also increased following irradiation. (E) Within the HU groups, comparing sham irradiation versus 10-Gy irradiation, the phenylalanine, tyrosine, and tryptophan biosynthesis pathways were again most clearly affected. The glutamine and glutamate metabolism, arginine biosynthesis, phenylalanine metabolism, glutathione metabolism, and alanine, aspartate, and glutamate metabolism pathways were also affected but with a lower pathway impact. Affected pathways with a lower pathway impact were A) purine metabolism, p-value 9.9526E-4; B) pyrimidine metabolism, p-value 0.0016094; G) tyrosine metabolism, p-value 0.0077996; H) cysteine and methionine metabolism, p-value 0.0084818; I) porphyrin and chlorophyll metabolism, p-value 0.025717; and K) glyoxylate and dicarboxylate metabolism, p-value 0.029754. (F) The individual metabolites in the pathway were also increased following irradiation. The alanine, aspartate, and glutamate metabolism pathway was also affected. Similar to the sham irradiation versus 8-Gy comparison, the arginine biosynthesis and glutamine and glutamate metabolism pathways were also affected.

FIGURE 8 Effects of HU on metabolic pathways in plasma at each radiation dose. (A) In sham-irradiated animals, plasma levels of leucine and isoleucine were increased under the HU condition. (B) In animals irradiated with 3 Gy, the phenylalanine, tyrosine, and tryptophan biosynthesis pathways were affected by HU.
(C) Except for glucose-6-phosphate and glucosamine, the metabolites were decreased under the HU condition. (D) In animals irradiated with 8 Gy, the pentose and glucuronate interconversions pathway was affected. B: butanoate metabolism pathway. (E) Plasma levels of glucuronate and glutamate were reduced. (F) In animals irradiated with 10 Gy, the alanine, aspartate, and glutamate metabolism pathway was affected. The citrate cycle (TCA cycle) (A; p-value 0.009424) and glyoxylate and dicarboxylate metabolism (C; p-value 0.0099177) pathways were also affected but with a lower impact. (G) Plasma levels of 3-methylhistidine and citrate were increased under the HU condition.

TABLE 2 Association of specific metabolites with behavioral measures.a Abbreviations: of, open field; no, open field with objects; epm, elevated plus maze; anh, anhedonia test; tot dist, total distance moved; cent time, time spent in the center of the enclosure; tot dur obj, total time spent exploring the objects; anh percent change d1 ave, the percentage of M&Ms consumed on days 2 and 3 compared to day 1. Metabolites positively associated with behavioral measures are indicated in green; metabolites negatively associated with behavioral measures are indicated in red; and metabolites both positively and negatively associated with behavioral measures are indicated in blue. c N of significant associations.
On the Worldsheet Derivation of Large N Dualities for the Superstring

Large N topological string dualities have led to a class of proposed open/closed dualities for superstrings. In the topological string context, the worldsheet derivation of these dualities has already been given. In this paper we take the first step in deriving the full ten-dimensional superstring dualities by showing how the dualities arise on the superstring worldsheet at the level of F terms. As part of this derivation, we show for F-term computations that the hybrid formalism for the superstring is equivalent to a $\hat c=5$ topological string in ten-dimensional spacetime. Using the $\hat c=5$ description, we then show that the D brane boundary state for the ten-dimensional open superstring naturally emerges on the worldsheet of the closed superstring dual.

Introduction

There is by now a large class of examples in string theory that realizes the idea of 't Hooft of large N dualities for gauge theories. Most of the arguments for the existence of such dualities derive from the target space perspective: the back-reaction on the gravity modes by the D-branes. However, the original motivation of 't Hooft was a statement visible at the level of the worldsheet; namely, he conjectured that somehow the holes in the large N expansion of Feynman diagrams close up and lead to a closed string expansion. Thus these dualities are expected to be visible genus by genus on the worldsheet. Understanding the large N dualities from this viewpoint is crucial because it also teaches us how the large N dualities, unlike U-dualities, are derivable from perturbative considerations of closed string theory.

A simple example of large N duality was proposed in [1], which relates large N Chern-Simons theory on S^3, which is equivalent to open topological strings [2], with topological closed strings on the resolved conifold, where the size of the blown-up P^1 is given by the 't Hooft parameter. This duality has been derived from a worldsheet perspective in [3]: starting from the closed string side and using the linear sigma model description of the worldsheet theory [4], one discovers that in the limit of small 't Hooft parameter, holes corresponding to the boundaries of a dual open string worldsheet open up on the closed string worldsheet.

On the other hand, motivated by the interpretation of topological string computations as F-term computations in an associated superstring [5], this topological string duality was embedded in superstrings [6] and extended to a relatively large class of superstring dualities (see e.g. [7]), leading to a link between N = 1 supersymmetric gauge theories and matrix models [8]. Even though the worldsheet derivation of the topological string duality would lead, by a chain of arguments, to the F-term dualities in the superstring context, a direct worldsheet derivation of these dualities was missing for the superstring. In this paper we aim to fill this hole, at least at the level of F terms. A d = 4 spacetime-supersymmetric description of the superstring on Calabi-Yau threefolds is given by the hybrid formalism [9,10,11], which is related to the RNS formalism by a field redefinition. We will show that the computation of F terms using the hybrid formalism is equivalent to the computation of F terms using a ten-dimensional topological string with ĉ = 5. We will then use the ĉ = 5 topological string to establish the worldsheet equivalence of F terms between the open and closed sides.
In particular, we will find using the ĉ = 5 description that the D brane boundary state for the ten-dimensional open superstring naturally emerges on the worldsheet of the closed superstring dual. The topological string method has been used to motivate some of the results on superpotential terms in gauge theories, for example in [12,13,14], which have then been verified by field theory methods. This paper provides a precise justification of these results from the string theory perspective. While we establish the equivalence of closed and open strings only at the level of F terms, the setup we present should be viewed as the first step in the derivation of the full duality.

The organization of this paper is as follows. In section 2 we review the worldsheet derivation of the large N topological string duality [3]. In section 3 we formulate topological strings directly in ten dimensions, with ĉ = 5, and show their equivalence to the hybrid formalism [9,10,11] when evaluating F terms for superstring compactifications. In section 4 we use this ĉ = 5 topological formulation of the superstring to establish the worldsheet equivalence of F terms between the open and closed sides.

Review of Topological String Duality

In this section, we will briefly review the worldsheet derivation [3] of the duality between the A-type topological closed string on the resolved conifold and the open topological string on the deformed conifold with N A-branes wrapping the S^3 of the conifold. The topological string coupling constants are the same on both sides of the duality and are denoted by λ. The Kähler modulus t of the resolved conifold (the "size" of the P^1) on the closed string side is mapped to the number N of the A-branes on the open string side by the relation

t = i N λ .    (2.1)

In this sense, this is an example of the 't Hooft duality. This duality was conjectured in [1], and various pieces of evidence for the duality have been found in [15,16,17,18,19,20].

To derive the duality, we start with the closed string side and expand string amplitudes in powers of t. What is expected to emerge from the duality is a sum over open string worldsheets with each boundary weighted by the factor of Nλ = −it. The target space becomes singular in the limit t → 0, and the worldsheet in this limit is best described by using the linear sigma model [4]. For the resolved conifold, the linear sigma model consists of four chiral multiplets, whose scalar fields are denoted by a_1, a_2 and b_1, b_2, and one vector multiplet, whose scalar field is denoted by σ. The chiral multiplet fields a_1, a_2 carry charge +e with respect to the gauge field A in the vector multiplet, and b_1, b_2 carry −e. After integrating out the auxiliary fields, the potential U for the bosonic fields is given by

U = |σ|² ( |a_1|² + |a_2|² + |b_1|² + |b_2|² ) + (e²/2) ( |a_1|² + |a_2|² − |b_1|² − |b_2|² )² .    (2.2)

According to the duality relation (2.1), the Kähler modulus is pure imaginary. In this case, t appears as the theta term for the gauge field, ∼ t ∫ dA. If we introduce a twisted chiral superfield Σ defined from the vector superfield V as Σ = D̄_+ D_− V = σ + ⋯, the theta term can also be written as an F term with the twisted superpotential

W = t Σ .    (2.3)

We will find this description in terms of Σ to be useful in the following discussion.

When t ≠ 0, the linear sigma model flows in the infrared limit to the non-linear sigma model for the conifold.¹ The theta term lifts the Coulomb branch and constrains σ = 0. When we expand around t = 0, however, we need to take into account a new flat direction where σ can be non-zero.

¹ The gauge-invariant combinations z_ij = a_i b_j obey the relation z_11 z_22 − z_12 z_21 = 0 defining the conifold geometry. For a given set of z_ij, the original fields a_i and b_i are determined modulo (a_1,2, b_1,2) → (e^ρ a_1,2, e^{−ρ} b_1,2), which is taken into account by the gauge symmetry and the constraint |a_1|² + |a_2|² = |b_1|² + |b_2|².
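To make the branch structure explicit, the following worked equations record the two flat directions of the potential (2.2) as reconstructed above (the overall normalization of (2.2) is an assumption; the branch structure does not depend on it):

% Vacuum branches of U, using (2.2) as reconstructed above.
% H branch: \sigma = 0, with the D-term constraint and the gauge
%           quotient giving the conifold geometry of footnote 1:
\sigma = 0, \qquad
|a_1|^2 + |a_2|^2 - |b_1|^2 - |b_2|^2 = 0, \qquad
(a_i, b_j) \sim (e^{i\varphi} a_i,\, e^{-i\varphi} b_j).
% C branch: the chiral fields are forced to vanish by the |\sigma|^2
%           mass term, leaving \sigma itself as the flat direction:
a_1 = a_2 = b_1 = b_2 = 0, \qquad \sigma \ \text{arbitrary}.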
Due to the potential (2.2), the chiral multiplet fields are then constrained to vanish, a_1,2 = b_1,2 = 0. We call this flat direction the C branch. In comparison, the branch where σ = 0 is called the H branch. When we quantize the linear sigma model, we need to integrate over both the C and H branches. It is useful to think of the worldsheet as divided into C and H domains, where the fields take values in the C and H branches, respectively. Performing the functional integral involves summing over all possible configurations of these two domains. We expect that quantization of the H branch still leads to the sigma model on the conifold away from the conifold point. How the conifold point is removed depends on how we divide the integral over σ between the two branches. On the other hand, the C branch is non-geometric, since a_1,2, b_1,2, which are coordinates for the conifold, become massive. We regard C domains as holes on the worldsheet and claim that this is how open strings emerge from the closed string theory. For this interpretation to work, we need that: (1) every C domain has the topology of the disk, with contributions from all other topologies vanishing in string amplitudes; and (2) each disk in the C branch contributes the factor of −it = Nλ.

It was shown in [3] that both of these statements are true. To show (1), it was noted that each C domain contributes to a topological string amplitude through the disk amplitude (2.4), evaluated at the boundary value σ_0 of σ. To evaluate (2.4), we note that the C domain has a description as a Landau-Ginzburg model with the superpotential W given by (2.3). The disk amplitude is then given by an integral of exp(−W). The only subtlety is the measure factor of σ^{−2}, which arises from the integral over a_1,2, b_1,2, which are massive in this domain. Taking this into account, we find

⟨1⟩_disk ∼ ∫^{σ_0} dσ σ^{−2} e^{−tσ} = −t log σ_0 + (terms single-valued in σ_0) .    (2.5)

This shows that the disk amplitude is indeed multivalued around σ_0 = 0:

⟨1⟩_disk → ⟨1⟩_disk − 2πi t   under σ_0 → e^{2πi} σ_0 .    (2.6)

Therefore, the contribution of the C domain of the disk topology is proportional to

−it = Nλ .    (2.7)

This shows that (1) and (2) are indeed true for the closed string theory. We have found that the closed string amplitude, when expanded in powers of t, can be expressed as a sum over holes on the worldsheet, with the power of t keeping track of the number of holes. Namely, the closed string theory is indeed equivalent to an open string theory with some boundary condition. Is the boundary condition exactly what we expect from the large N duality? Since the worldsheet variables a_1,2, b_1,2 become massive in the C domain, near the interface of the C and H domains they stay near the tip of the conifold. Their precise behavior depends on how we divide the σ integral between the two branches. On the other hand, the A-brane for the open string is supposed to wrap the S^3 of the deformed conifold. Its size is undetermined since changing the radius is a BRST-trivial deformation. When the radius is small, the S^3 is near the tip of the conifold. Therefore, modulo the ambiguities that exist on both sides of the duality, the boundary of the C domain correctly reproduces the A-brane boundary condition in the open string dual.
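As a worked check of the multivaluedness used in (2.5)-(2.7), here is a sketch of the computation, assuming the Landau-Ginzburg weight e^{−tσ} of (2.3) and the σ^{−2} measure described above:

% Sketch: multivalued part of the C-domain disk amplitude.
\int^{\sigma_0}\! d\sigma\, \sigma^{-2} e^{-t\sigma}
  \;=\; -\,\sigma_0^{-1} e^{-t\sigma_0}
        \;-\; t \int^{\sigma_0}\! \frac{d\sigma}{\sigma}\, e^{-t\sigma}
  \;=\; -\,t \log\sigma_0 \;+\; (\text{single-valued in } \sigma_0),
% so that under \sigma_0 \to e^{2\pi i}\sigma_0, using t = iN\lambda,
\langle 1 \rangle_{\rm disk} \;\to\;
\langle 1 \rangle_{\rm disk} - 2\pi i\, t
\;=\; \langle 1 \rangle_{\rm disk} + 2\pi\, N\lambda ,
% reproducing, up to normalization, the factor of -it = N\lambda per hole.

The single-valued terms depend on the regularization of the σ integral, which is the same ambiguity as the division of the σ integral between the two branches discussed above.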
Equivalence of ĉ = 5 and Hybrid Computation of F Terms

In this section we introduce the concept of topological strings in ten dimensions with ĉ = 5, generalizing the topological strings often used in the context of Calabi-Yau threefolds, and establish their direct equivalence to the hybrid formalism for certain F-term computations in type II superstrings. In the first subsection, we will show that states in the G^+ cohomology in the ĉ = 5 topological string include supersymmetry multiplets containing massless compactification moduli as well as the multiplet containing the self-dual graviphoton field strength. In the second subsection, we will give a ĉ = 5 topological prescription for computing tree and loop scattering amplitudes involving these states, which will contribute only to F terms in the low-energy effective action. And in the third subsection, we will show how to describe these states using the hybrid formalism and will prove that the hybrid prescription for their scattering amplitudes agrees with the ĉ = 5 topological string prescription.

Chiral states using the ĉ = 5 description

The worldsheet fields in the ĉ = 5 formalism include the d = 4 variable x^m for m = 0 to 3, the left-moving chiral superspace variable θ^α and its conjugate momentum p_α for α = 1 to 2, and an N = 2 ĉ = 3 superconformal field theory for the internal compactification manifold. Unlike the superstring in the hybrid formalism, the ĉ = 5 formalism does not involve the dotted superspace variables θ*^α or their conjugate momenta p*_α, and also does not contain the chiral boson ρ. For the type II superstring, the ĉ = 5 formalism also includes the right-moving fermionic variables θ̄^α and their conjugate momenta p̄_α, but does not involve θ̄*^α or p̄*_α. (We will reserve barred notation throughout this paper to denote right-moving variables, and will use the * superscript to denote dotted spinor variables.) For the formalism to be Hermitian, one therefore needs to Wick-rotate to either signature (4, 0) or (2, 2) so that θ^α is real. Although the reality conditions for spacetime fields in these signatures are not the standard ones, it is straightforward to Wick-rotate back to the standard Minkowski reality conditions after computing scattering amplitudes and determining the corresponding F terms in the effective action.

In the N = 2 ĉ = 5 formalism, the worldsheet action is

S = ∫ d²z ( ½ ∂x^{αα̇} ∂̄x_{αα̇} + p_α ∂̄θ^α + p̄_α ∂θ̄^α ) + S_CY ,

and the left-moving twisted N = 2 generators are

T = ½ ∂x^{α+} ∂x_α{}^− + p_α ∂θ^α + T_CY ,   G^+ = θ_α ∂x^{α+} + G^+_CY ,   G^− = p_α ∂x^{α−} + G^−_CY ,   J = θ^α p_α + J_CY ,    (3.1)

with similar expressions for the right-movers, where x^{αα̇} = x^m σ_m^{αα̇} and α̇ = (+, −); S_CY and {T_CY, G^±_CY, J_CY} are the worldsheet action and twisted N = 2 ĉ = 3 generators for the internal compactification manifold; and G^+ and G^− carry conformal weight +1 and +2, respectively. In the traditional description of the topological string, one treats (x^{α+}, θ^α, θ̄^α) as holomorphic coordinates on C² = R⁴ and their superpartners, and (x^{α−}, p_α, p̄_α) as anti-holomorphic coordinates and their partners. The four-dimensional part of the twisted N = 2 theory is then the topological B model whose target space is C². Note that the N = 2 generators of (3.1) only preserve a U(1) × SU(2) (or GL(1) × SL(2)) subgroup of the SO(4) (or SO(2, 2)) Lorentz invariance in signature (4, 0) (or (2, 2)). For simplicity, we will usually restrict our attention to the left-moving sector. Since G^+ = θ_α ∂x^{α+} + G^+_CY plays the role of a BRST operator in the topological N = 2 string, it is natural to compute its cohomology.
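As a quick consistency check before computing the cohomology (a sketch, assuming free-field OPEs for (x, p, θ) and nilpotency of G^+_CY), the four-dimensional part of G^+ squares to zero:

% Sketch: nilpotency of the four-dimensional part of G^+.
% \theta_\alpha \theta_\beta is antisymmetric in (\alpha,\beta), while
% \partial x^{\alpha+} \partial x^{\beta+} is symmetric, so
\oint dy\, \theta_\alpha \partial x^{\alpha+}(y)\;
\oint dz\, \theta_\beta \partial x^{\beta+}(z)
\;\propto\; \theta_\alpha \theta_\beta\,
\partial x^{\alpha+} \partial x^{\beta+} \;=\; 0 .
% There are also no singular contractions: \theta has no OPE with x,
% and x^{\alpha+}(y)\, x^{\beta+}(z) is non-singular because the dotted
% epsilon tensor vanishes when both dotted indices equal +.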
Since $\theta_\alpha \partial x^{\alpha+}$ and $G^+_{CY}$ involve different worldsheet fields, states $V$ in the cohomology of $G^+$ can be written as $V = \sum_i \Phi_i \sigma_i$, where $\Phi_i$ is constructed from the four-dimensional fields $\{x^m, \theta^\alpha, p_\alpha\}$ and is in the cohomology of $\theta_\alpha \partial x^{\alpha+}$, and $\sigma_i$ is constructed from compactification-dependent fields and is in the cohomology of $G^+_{CY}$. Using the standard quartet argument, states in the cohomology of $\theta_\alpha \partial x^{\alpha+}$ can depend only on the zero modes of $\theta^\alpha$ and $x^{\alpha+}$. So the most general state in the cohomology of $G^+$ is $V = \sum_i \Phi_i(x^{\alpha+}, \theta^\alpha, \bar\theta^\alpha)\, \sigma_i$, where $\sigma_i$ is in the cohomology of $G^+_{CY}$. Such states will be called "chiral" states. In this paper, we shall only consider chiral states where $\sigma_i$ carries either +1 or zero internal $U(1)$ charge. In the first case, (3.3), $\sigma_i$ is a chiral primary of (left, right)-moving charge $(+1, +1)$ associated with the internal $N = 2$, $\hat{c} = 3$ superconformal field theory. The $\theta = \bar\theta = 0$ component of $\Phi_i$ is the chiral modulus field, and the $\theta = \bar\theta = 0$ component of $D_\alpha \bar D_\beta \Phi_i$ is the self-dual Ramond-Ramond (R-R) flux associated with this modulus. For both the Type IIA and IIB superstring, chiral states carrying zero $U(1)$ charge in the internal sector correspond to a multiplet containing the self-dual graviphoton. The associated self-dual graviphoton vertex operator is given in (3.4). Although the chiral states of (3.3) and (3.4) do not have fixed charge with respect to the $U(1)$ charges $\oint dz\, J$ and $\oint d\bar z\, \bar J$ of (3.1), they can be defined to have fixed charge with respect to the combination (3.5). Note that $\oint dz\, K + \oint d\bar z\, \bar K$ is a conserved charge which commutes with the $N = 2$ generators of (3.1). When (3.3) is independent of $x^{\alpha+}$ and (3.4) is quadratic in $x^{\alpha+}$ (i.e. when $F_{\alpha\beta}$ and $R_{\alpha\beta\gamma\delta}$ are constants), these chiral states all have charge +2 with respect to (3.5). Scattering amplitudes using the $\hat{c} = 5$ formalism To compute scattering amplitudes of chiral states using the $\hat{c} = 5$ formalism, we shall use the topological $N = 2$ prescription where $G^+$ is treated as the BRST charge and $G^-$ is treated as the $b$ ghost. For $M$-point $g$-loop Type II scattering amplitudes, the $N = 2$ topological prescription is given by (3.6), where $\mu_j$ denotes the $(3g - 3 + M)$ Beltrami differentials associated with the worldsheet moduli $m_j$, and $|\cdots|^2$ signifies the product of left- and right-moving terms. Since $\hat{c} = 5$, this amplitude vanishes by charge conservation unless the condition (3.7) is satisfied. In computing these special scattering amplitudes, it will be convenient to choose $2g$ of the $(3g-3+M)$ Beltrami differentials to be associated with the locations of the graviphoton vertex operators. So the formula of (3.6) becomes (3.8), where $G^- \bar G^- R$ signifies the single pole of $G^-$ and $\bar G^-$ with $R$. It will be useful to note that since $(3 - 3g)$ units of $U(1)$ charge are needed from the internal sector, only the $G^-_{CY}$ term in $G^-$ contributes in $\int \mu_j G^-$. In order that the $\int \mu_j G^-$ integrals in (3.6) reproduce the correct Faddeev-Popov measure for integration over worldsheet metrics, it is usually required that the vertex operators $V_r$ have no double (or higher-order) poles with $G^-$. This condition guarantees that $G^- V$ has no singularities with $G^-$ which, together with $G^+ V = 0$, implies that $V$ is an $N = 2$ chiral primary. For chiral states of the two types considered here, this would impose the condition (3.10). However, since these vertex operators have no singularities with the $G^-_{CY}$ part of $G^-$, there is therefore no need to impose (3.10) for consistency of these scattering amplitudes. Furthermore, the fact that only $G^-_{CY}$ contributes to $\int \mu_j G^-$ implies that the amplitude is spacetime supersymmetric.
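The selection rule (3.7) itself did not survive extraction, but the surrounding statements fix the bookkeeping. The LaTeX fragment below is a hedged reconstruction of the charge count: with $M - 2g$ moduli insertions of internal charge $+1$ and $(3g - 3 + M) - 2g$ measure insertions $\int \mu_j G^-_{CY}$ of charge $-1$, the internal sector supplies the required $3 - 3g$ units.

```latex
% Hedged reconstruction of the internal U(1) charge count behind (3.7):
\underbrace{(M - 2g)}_{\text{moduli insertions, charge } +1}
\;-\;
\underbrace{(3g - 3 + M - 2g)}_{\int \mu_j G^{-}_{CY}\ \text{insertions, charge } -1}
\;=\; 3 - 3g
```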
To show this, define the spacetime supersymmetry generators $q_\alpha$ and $q^*_\alpha$ in the $\hat{c} = 5$ formalism, which anticommute to the usual supersymmetry algebra. Note that these supersymmetries preserve the $G^+$ cohomology when acting on states that carry no $P^{\alpha+}$ momentum, since $\{q^*_\alpha, G^+\} = 0$ and $\{q_\alpha, G^+\}$ is proportional to $\partial x^{\alpha+}$; here $G^-_{4d}$ and $T_{4d}$ denote the four-dimensional contributions to $G^-$ and $T$. Since $G^-_{4d}$ appears only in the integrated graviphoton vertex operator of (3.9), the anticommutator $\{q^*_\alpha, G^-_{4d}\} = \delta^+_\alpha T_{4d}$ can be ignored since it only shifts the graviphoton vertex operator by a surface term. To obtain the supersymmetric F term associated with the amplitude of (3.8), integrate over the zero modes of $(x^m, \theta^\alpha, \bar\theta^\alpha)$ and use the graviphoton vertex operator of (3.9) to absorb the zero modes of $p_\alpha$. In terms of the self-dual graviphoton superfield $F_{\alpha\beta} = \partial_{\alpha+} \partial_{\beta+} R$, one finds an expression in which $\langle \cdots \rangle_{CY}$ denotes a functional integral over the internal compactification-dependent fields, and the $2g$ $\alpha$ indices and $2g$ $\beta$ indices in $\prod_{s=1}^{2g} F^s_{\alpha\beta}$ are contracted with each other in all possible combinations. So the F term associated with this scattering amplitude is (3.12), where the coefficient $f_{i_1 \ldots i_{M-2g}}$ is defined by the $N = 2$, $\hat{c} = 3$ topological amplitude. If we denote the Kähler (complex) moduli by $t^i$ and denote the topological string amplitude at genus $g$ by $F_g(t^i)$, then the coefficients $f_{i_1 \ldots i_{M-2g}}$ are determined by $F_g(t^i)$, as sketched below. Hybrid description of chiral states It will be shown here that the scattering amplitudes of chiral moduli states and self-dual graviphoton states computed in (3.12) using the $\hat{c} = 5$ formalism agree with those computed using the hybrid formalism. Note that hybrid scattering amplitudes involving only self-dual graviphoton states were computed previously in [10]. As discussed in [9,10,11], the hybrid formalism is related to the RNS formalism by a field redefinition. In the hybrid formalism, physical superstring states are described by chiral primary fields of +1 $U(1)$ charge with respect to the twisted $N = 2$, $\hat{c} = 2$ generators, where $\{T_{CY}, G^+_{CY}, G^-_{CY}, J_{CY}\}$ are the same twisted $N = 2$, $\hat{c} = 3$ generators as before. Note that $\rho$ is a negative-energy chiral boson satisfying the OPE $\rho(y)\rho(z) \sim -\log(y - z)$, and $d_\alpha$ and $d^*_\alpha$ are defined such that they anticommute with the supersymmetry generators and satisfy the OPE's (3.15). To compare scattering amplitudes using the hybrid formalism with those of (3.12), one first needs the hybrid version of the vertex operators for the chiral moduli and graviphoton multiplets. The superstring states corresponding to compactification moduli multiplets are described in the hybrid formalism by vertex operators of the form (3.16), where $\sigma_i$ is the same compactification-dependent field as in the $\hat{c} = 5$ description and carries +1 left- and right-moving $U(1)$ charge. One can easily check that $V$ is chiral (i.e. is annihilated by $G^+$ and $\bar G^+$) if $D^*_\alpha \Phi_i = \bar D^*_\alpha \Phi_i = 0$, and is a chiral primary (i.e. has no higher-order poles with $G^-$ and $\bar G^-$) if in addition $D_\alpha D^\alpha \Phi_i = \bar D_\alpha \bar D^\alpha \Phi_i = 0$. Because of the additional condition $D_\alpha D^\alpha \Phi_i = \bar D_\alpha \bar D^\alpha \Phi_i = 0$, the $\hat{c} = 5$ vertex operator $V = \sum_i \Phi_i \sigma_i$ is not necessarily a chiral primary vertex operator in the hybrid formalism. However, as will be seen later in this subsection, the condition $D_\alpha D^\alpha \Phi_i = \bar D_\alpha \bar D^\alpha \Phi_i = 0$ will not be necessary for consistency of hybrid scattering amplitudes involving only chiral states. This is because, just as in the $\hat{c} = 5$ formalism, only the $G^-_{CY}$ term will contribute in $G^-$ for these scattering amplitudes in the hybrid formalism. So there is no problem if the vertex operators have singularities with the four-dimensional $d^2 e^\rho$ term in $G^-$.
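The moduli/genus relation deferred above is, in the standard topological-string setting, the statement that the amplitude coefficients are moduli derivatives of the genus-$g$ free energy. The LaTeX fragment below is a hedged reconstruction along those lines, together with the schematic form of the induced F term; the precise index contractions are as described in the text.

```latex
% Hedged reconstruction of the truncated relations (not verbatim from the paper):
f_{i_1 \ldots i_{M-2g}} \;=\; \partial_{t^{i_1}} \cdots \partial_{t^{i_{M-2g}}}\, F_g(t^i),
\qquad
S_{F} \;\sim\; \int d^4x\, d^2\theta\, d^2\bar{\theta}\;\;
  F_g(t^i)\,\big(F_{\alpha\beta} F^{\alpha\beta}\big)^{g} \;+\; \ldots
```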
This implies that one can prove equivalence of scattering amplitudes even for chiral states such as $V = (\theta - \bar\theta)^\alpha (\theta - \bar\theta)_\alpha\, \sigma$ which are not $N = 2$ primary fields in the hybrid formalism and therefore do not correspond to on-shell superstring states. This vertex operator $V$, which corresponds to a supersymmetric combination of the R-R and NS-NS fluxes associated to the modulus $\sigma$, will play an important role in the next section. The superstring state corresponding to the self-dual graviphoton multiplet will be described in the hybrid formalism by the vertex operator $V = e^{-\rho} p^*_+ \, e^{-\bar\rho} \bar p^*_+ \, R(x, \theta, \bar\theta)$. (3.17) This vertex operator is chiral if $D^*_\alpha R = \bar D^*_\alpha R = 0$ and is primary if $D^\alpha \partial_{\alpha+} R = \bar D^\alpha \partial_{\alpha+} R = 0$. Although this vertex operator carries zero $U(1)$ charge in the internal sector, it carries +1 left- and right-moving $U(1)$ charge in the four-dimensional sector because of its $\rho$ dependence. Using the OPE's of (3.15), one finds that the integrated form of the graviphoton vertex operator is given in (3.18). So if one sets $\theta^{*\alpha} = \bar\theta^{*\alpha} = 0$, this expression coincides with the $\hat{c} = 5$ expression of (3.9). To compute scattering amplitudes in the hybrid formalism, one first extends the $\hat{c} = 2$, $N = 2$ algebra to a small $N = 4$ algebra; the additional generators $\tilde G^\pm$ transform together with $G^+$ and $G^-$ as two doublets under its $SU(2)$. As discussed in [10], the $M$-point $g$-loop amplitude is defined by the formula (3.19). Note that R-charge in the hybrid formalism is equivalent to picture in the RNS formalism. To compute the $A_{g,M,g-1,g-1}$ component of $A_{g,M}$ using the formula of (3.19), first note that all terms in this component contain an equal number of $\tilde G^-$ and $\tilde G^+$ operators. To compare with the $\hat{c} = 5$ prescription of (3.8), it will be useful to first turn all pairs of $(\tilde G^+, \tilde G^-)$ operators into pairs of $(G^+, G^-)$ operators by performing the appropriate contour deformations. For example, suppose one has a pair of $\tilde G^+(y_1)\, \tilde G^-(y_2)$ operators at $y_1$ and $y_2$. First write $\tilde G^-(y_2) = [\oint G^+, J^{--}(y_2)]$ and deform the $G^+$ contour off of $J^{--}(y_2)$ until it hits the $J(v_g)$ operator, turning it into $G^+(v_g)$. Secondly, write $\tilde G^+(y_1) = [\oint G^+, J(y_1)]$ and deform the $G^+$ contour off of $J(y_1)$ until it hits the $J^{--}(y_2)$ operator, turning it into $G^-(y_2)$. Finally, write $G^+(v_g) = [\oint G^+, J(v_g)]$ and deform the $G^+$ contour off of $J(v_g)$ until it hits the $J(y_1)$ operator, turning it into $G^+(y_1)$. So this procedure has turned the pair $\tilde G^+(y_1)\, \tilde G^-(y_2)$ into the pair $G^+(y_1)\, G^-(y_2)$, as sketched below. In performing these contour deformations, we have ignored possible surface terms on the moduli space of the worldsheet coming from the commutator $[\oint G^+, \int \mu_j G^-] = \int \mu_j T$, where $\int \mu_j T$ produces a total derivative on the moduli space. However, for the scattering amplitudes discussed here, one can show that internal $U(1)$ charge conservation implies that these surface terms do not contribute. As in the $\hat{c} = 5$ computation, internal $U(1)$ conservation implies that the $d = 4$ part of $G^-$ only contributes to the scattering amplitude when it acts on the graviphoton vertex operator. Also, one can argue by internal $U(1)$ conservation that only the $d = 4$ part of $G^+$ contributes. So the only possibility of producing a surface term comes from $[\oint G^+_{4d}, \int \mu_j G^-_{4d}] = \int \mu_j T_{4d}$, where the subscript $4d$ denotes the four-dimensional contribution to these generators and $\mu_j$ is associated with the location of the graviphoton vertex operator. But this type of surface term is harmless since it does not involve the $(3g - 3)$ worldsheet moduli whose boundary describes degeneration of the genus $g$ surface.
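Because the tildes marking the extra $N = 4$ generators were partly lost in extraction, the following LaTeX fragment is a hedged transcription of the contour gymnastics described above; the tilde placements follow the fused markers in the text and should be checked against the original.

```latex
% Hedged transcription of the contour-deformation identities and their net effect:
\tilde{G}^{-}(y_2) = \Big[\oint G^{+},\, J^{--}(y_2)\Big],
\qquad
\tilde{G}^{+}(y_1) = \Big[\oint G^{+},\, J(y_1)\Big],
\qquad
\tilde{G}^{+}(y_1)\,\tilde{G}^{-}(y_2) \;\longrightarrow\; G^{+}(y_1)\,G^{-}(y_2)
```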
After replacing all $(\tilde G^+, \tilde G^-)$ pairs with $(G^+, G^-)$ pairs and choosing $2g$ of the Beltrami differentials to be associated with the locations of the graviphoton vertex operators, one obtains the formula (3.20), where $W_s$ is defined in (3.18) and $\langle \cdots \rangle_H$ denotes the functional integral using the hybrid formalism, which includes the $(\theta^{*\alpha}, p^*_\alpha)$ and $\rho$ fields. To compare this formula with the $\hat{c} = 5$ formula of (3.8), insert the identity operator $1 = [\oint G^+, \theta^*_\alpha \theta^{*\alpha} e^\rho(w)]$ in (3.20) and pull the $G^+$ contour off of $\theta^*_\alpha \theta^{*\alpha} e^\rho(w)$ until it hits $J(v_g)$, to give the formula (3.21). To derive (3.21), we have used that $U(1)$ charge conservation implies that only $G^-_{CY}$ contributes in the $\int \mu_j G^-$ terms and that only $G^+_{4d}$ contributes to $G^+(v_i)$. Finally, one needs to do the functional integral over the worldsheet fields $(\theta^{*\alpha}, p^*_\alpha, \rho)$ which are present in the hybrid formalism but not in the $\hat{c} = 5$ formalism. Since all $p^*_\alpha$ variables in $G^+(v_i)$ must be used to soak up the $2g$ zero modes of $p^*_\alpha$, none of the $\theta^{*\alpha}$ variables in the vertex operators can contribute, and the $\theta^*_\alpha \theta^{*\alpha}(w)$ soaks up the zero modes of $\theta^{*\alpha}$. Because the $\rho$ chiral boson has negative energy (like the $\phi$ chiral boson in the RNS formalism which comes from fermionizing the $(\beta, \gamma)$ ghosts), it is subtle to define its functional integral. However, for the amplitudes being considered here, the $\rho$ field always appears together with the $(\theta^{*+}, p^*_+)$ fields in the combination $\theta^{*+} e^\rho$ or $p^*_+ e^{-\rho}$. For this reason, the functional integral over the $\rho$ chiral boson precisely cancels the functional integral over $(\theta^{*+}, p^*_+)$, even for the zero modes. So after performing the functional integral over the $(\theta^{*\alpha}, p^*_\alpha, \rho)$ fields, one obtains an amplitude which agrees with the $\hat{c} = 5$ formula of (3.8). Large N Duality in the Superstring It was pointed out in [6] that the duality between the open and closed topological string theories can be uplifted to the type IIA superstring, relating the conifold times $R^4$ with $N$ D5 branes wrapping the $P^1$ of the conifold and extended in the $R^4$ direction to another compactification with $N$ units of R-R flux and without D branes. As far as the F terms are concerned, this superstring duality is inferred from the topological string duality combined with the relation between the superpotential terms and the topological string amplitudes [5,15]. This duality is supposed to hold beyond the superpotential computation, along the lines of the construction described in the closely related papers [21,22]. A derivation of the full duality would require controlling back-reactions of the R-R fluxes on the metric and understanding worldsheet dynamics in such a background, and it would be tantamount to proving the AdS/CFT correspondence. In this section, we will make the first step in this direction by giving a direct worldsheet derivation of the duality restricted to the superpotential computation, where the back-reaction on the metric can be ignored as being a BRST trivial deformation of the background. As we saw in the last section, the $\hat{c} = 5$ formalism allows us to compute superpotential terms as topological string amplitudes. In this formalism, in addition to the $\hat{c} = 3$ model discussed in section 2, we have four bosons $x^{\alpha\dot\alpha}$ and four pairs of fermions $(p_\alpha, \theta^\alpha)$ and $(\bar p_\alpha, \bar\theta^\alpha)$. In the $\hat{c} = 3$ model on the Calabi-Yau space, basic observables are associated to cohomology elements of the Calabi-Yau space. For example, for $\omega \in H^{1,1}$, we have $\sigma = \omega_{i\bar\jmath}\, \psi^i_L \bar\psi^{\bar\jmath}_R$.
In the $\hat{c} = 5$ formalism, it can be multiplied by any function of $\theta, \bar\theta$ as $\Phi(\theta, \bar\theta)\, \sigma$, giving rise to a vertex operator for the $N = 2$ vector multiplet in four dimensions associated to $\sigma$. We can turn on the auxiliary fields in this multiplet to break the $N = 2$ supersymmetry to $N = 1$. (In a non-trivial background such as that of [21] or [22], the reality conditions together with supersymmetry may imply a non-zero value for $N$.) For example, we can turn on the perturbation (4.1); a hedged sketch appears after this passage. This corresponds to turning on R-R flux through the cycle dual to $\omega$, represented by $\epsilon^{\alpha\beta} \theta_\alpha \bar\theta_\beta\, \sigma$ [6], combined with an appropriate amount of NS-NS flux, represented by $\epsilon^{\alpha\beta} (\theta_\alpha \theta_\beta + \bar\theta_\alpha \bar\theta_\beta)\, \sigma$, through the dual cycle. The strength of the NS-NS flux (related to the coupling constant $\tau$ of the dual gauge theory) is dictated by the condition of extremization of the glueball superpotential [6], leading to preservation of $N = 1$ supersymmetry. Note that this term reduces the supersymmetry to the $N = 1$ given by a simultaneous shift of $\theta, \bar\theta$. With these fluxes turned on and the supersymmetry reduced, the $N = 2$ vector multiplet is decomposed into an $N = 1$ vector multiplet $v_\alpha$ and the chiral multiplet $t$. These couple to the worldsheet as in (4.1), where we included the effect of the fluxes. In section 2, we saw that, in the $\hat{c} = 3$ model, the Kähler modulus appears as the coefficient of the linear superpotential (2.3). The coupling (4.1) in the $\hat{c} = 5$ model can also be written in terms of a superpotential (4.2), where $\Sigma$ is the superfield in the $\hat{c} = 3$ model with $\sigma$ as its lowest component, and $\Theta, \bar\Theta$ are fermionic superfields whose lowest components are $\theta$ and $\bar\theta$. The contribution of $W$ to the worldsheet action is given by (4.3). Noting that $G^-$ is a linear combination of operators acting on the 4d part and the Calabi-Yau part, we can express this contribution accordingly. Since $W$ is annihilated by $G^+$ and $\bar G^+$ of (3.1), $W$ is a chiral superpotential, which implies that (4.3) is in the BRST cohomology. Actually, annihilation by $G^+$ and $\bar G^+$ played a similar role in [12,13]; thus it is reasonable to expect a phenomenon dual to it on the closed string side. In the following, we will consider the large N duality in the absence of the gravity field strengths. As in the $\hat{c} = 3$ model for the conifold, the $\hat{c} = 5$ model has two branches, the H branch with $\sigma = 0$ and the C branch with $\sigma \neq 0$. We identify each C domain as a hole on the worldsheet. Whereas the C branch of the $\hat{c} = 3$ model is described as the Landau-Ginzburg model with the superpotential (2.3) (and with the path integral measure $d\sigma/\sigma^2$), the C branch in the $\hat{c} = 5$ model is the Landau-Ginzburg model with (4.2). In particular, its target space is the supermanifold with coordinates $(\Sigma, \Theta^\alpha, \bar\Theta^\alpha)$. As in the $\hat{c} = 3$ case, the C branch does not contribute to a string amplitude unless its domain has the topology of the disk. This statement just follows from the functional integral over $\Sigma$ and the operation of $\oint d\sigma\, \partial/\partial\sigma$, and is independent of whether there are extra degrees of freedom. The functional integral over the disk C domain indeed gives the correct boundary condition for the $N$ D branes extended in the $R^4$ direction with the gluino field $W_\alpha$ turned on. To see this, let us integrate over $\Sigma$ first. As in the case of the $\hat{c} = 3$ model [3], this gives the expression (4.4). Using this, the right-hand side of (4.4) can be rewritten so that we can identify $(\theta - \bar\theta)^2$ as the boundary state for the D brane extended in the $R^4$ direction. As for any state which is invariant under the topological BRST symmetry, the boundary state can be decomposed into a chiral primary state and a BRST trivial part.
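Assembling the pieces named in the passage above (R-R flux represented by $\epsilon^{\alpha\beta}\theta_\alpha\bar\theta_\beta\,\sigma$ and NS-NS flux by $\epsilon^{\alpha\beta}(\theta_\alpha\theta_\beta + \bar\theta_\alpha\bar\theta_\beta)\,\sigma$), a hedged LaTeX sketch of the perturbation (4.1) is:

```latex
% Hedged reconstruction of the flux perturbation (4.1); the relative
% normalization between N and the NS-NS coupling tau is an assumption,
% fixed in the text by extremizing the glueball superpotential.
\delta S \;=\; \int d^2 z\;\epsilon^{\alpha\beta}
  \Big[\, N\,\theta_{\alpha}\bar{\theta}_{\beta}
   \;+\; \tau\,\big(\theta_{\alpha}\theta_{\beta}
        + \bar{\theta}_{\alpha}\bar{\theta}_{\beta}\big) \Big]\,\sigma
  \;+\; \mathrm{c.c.}
```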
It was shown in [23] that the chiral primary part is determined by the (quantum) period of the cycle on which the D brane is wrapped. For the D brane extended in the $R^4$ direction, the chiral primary part is $(\theta - \bar\theta)^2$; indeed, it imposes the correct boundary condition $\theta^\alpha = \bar\theta^\alpha$, which is associated with Neumann boundary conditions for $x^m$. We can then identify the action of the differential operator $\exp(W^\alpha\, \partial/\partial\theta^\alpha)$ as an insertion of $W^\alpha (p_\alpha + \bar p_\alpha)$ on the boundary of the disk, giving rise to the correct coupling of the gluino on the boundary. This shows that the superpotential for $t$ and the kinetic term for $v_\alpha$ computed in the closed string theory agree with those for the glueball superfield and the $U(1)$ part of $W_\alpha$ in the open string theory, according to the correspondence (4.5). This is what we wanted to show. We note that one can start with a different combination of fluxes, for example the one in (4.7), and repeat the derivation. (We can also consider more general quadratic combinations of $\theta$ and $\bar\theta$ that preserve 4 supercharges. Here we are presenting simple ones for illustration.) One will then find a boundary state whose chiral primary part is represented by $(\theta^1 \pm \bar\theta^1)(\theta^2 \pm \bar\theta^2)$. We can interpret it as the boundary state for a D$(2n+2)$ brane wrapping the $S^3$ of the deformed conifold and extending in a $2n$-dimensional plane in $R^4$, where $n$ is the number of minus signs in (4.7). This is consistent with what one expects from the T-dual of the open/closed string duality that we discussed in this paper. The original argument [24] for the existence of the large N dualities of the type discussed in this paper starts with the conjectured equivalence of the D brane description involving open strings and the closed string description motivated by the computation of the R-R charges [25]. The result of this paper provides the worldsheet explanation for the equivalence of the two descriptions, at the level of F terms. For the closed string, the vertex operator $N (\theta - \bar\theta)^2\, \sigma$ represents the closed string background with $N$ units of R-R flux turned on. We have found that turning on this worldsheet interaction generates the open string sector whose boundary state for the 4d part of the target space is represented by $N (\theta - \bar\theta)^2$. This boundary state indeed carries the correct amount of R-R charge expected from the duality. We hope that our result in this paper will turn out to be a useful step toward deriving the full large N duality in the superstring.
Expanding the Knowledge of KIF1A-Dependent Disorders to a Group of Polish Patients Background: KIF1A (kinesin family member 1A)-related disorders encompass a variety of diseases. KIF1A variants are responsible for autosomal recessive and dominant spastic paraplegia 30 (SPG30, OMIM 610357), autosomal recessive hereditary sensory and autonomic neuropathy type 2 (HSN2C, OMIM 614213), and autosomal dominant neurodegeneration and spasticity with or without cerebellar atrophy or cortical visual impairment (NESCAV syndrome), formerly named mental retardation type 9 (MRD9) (OMIM 614255). KIF1A variants have also been occasionally linked with progressive encephalopathy with brain atrophy, progressive neurodegeneration, PEHO-like syndrome (progressive encephalopathy with edema, hypsarrhythmia, optic atrophy), and Rett-like syndrome. Materials and Methods: The first Polish patients with confirmed heterozygous pathogenic and potentially pathogenic KIF1A variants were analyzed. All the patients were of Caucasian origin. Five patients were females, and four were males (female-to-male ratio = 1.25). The age of onset of the disease ranged from 6 weeks to 2 years. Results: Exome sequencing identified three novel variants. Variant c.442G>A was described in the ClinVar database as likely pathogenic. The other two novel variants, c.609G>C; p.(Arg203Ser) and c.218T>G, p.(Val73Gly), were not recorded in ClinVar. Conclusions: The authors underlined the difficulties in classifying particular syndromes due to non-specific and overlapping signs and symptoms, sometimes observed only temporarily. Introduction The KIF1A (kinesin family member 1A) gene is located on chromosome 2q37.3 and is expressed mainly in the brain and spinal cord. The gene encodes the KIF1A protein, one of the kinesin superfamily of microtubule-dependent molecular motors involved in anterograde axonal transport of dense-core vesicles [1,2]. One of the neuropeptides transported in those vesicles is the brain-derived neurotrophic factor (BDNF) [2]. Because of that, KIF1A may play a crucial role in neuronal development, synaptic maturation, and function [3]. Its mutations have been associated with three different disorders in OMIM (https://www.omim.org, accessed 25 March 2023), all of which include severe neurological symptoms. Pathogenic variants in the KIF1A gene are responsible mainly for three phenotypes: autosomal recessive and dominant spastic paraplegia 30 (SPG30, OMIM 610357), autosomal recessive hereditary sensory and autonomic neuropathy type 2 (HSN2C, OMIM 614213), and autosomal dominant neurodegeneration and spasticity with or without cerebellar atrophy or cortical visual impairment syndrome, NESCAVS (OMIM 614255). Occasionally, KIF1A variants have also been linked with progressive encephalopathy with brain atrophy, progressive neurodegeneration, PEHO-like syndrome (progressive encephalopathy with edema, hypsarrhythmia, optic atrophy), and Rett-like syndrome. Both the autosomal dominant and recessive forms of SPG30 and autosomal recessive HSN2C are relatively milder forms with onset in the first decade of life. NESCAV syndrome (OMIM 614255), formerly known as autosomal dominant intellectual disability 9 (MRD9), is a severe neurodegenerative disorder. Its clinical presentation may vary, but it is characterized by cognitive impairment and progressive spasticity, predominantly in the lower limbs. Global development is usually profoundly delayed, with intellectual disability, speech delay or absence, and behavioral problems.
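As a practical aside, the cDNA variant names quoted above follow the HGVS convention. The following is a minimal, hypothetical Python sketch for parsing simple substitution-type HGVS strings such as c.442G>A; it is illustrative only, and production work should use a dedicated validator (e.g. the hgvs package or Mutalyzer).

```python
import re

# Minimal sketch: parse simple HGVS cDNA substitution notation such as
# "c.442G>A" into its components. Real-world HGVS parsing (indels,
# splice-site offsets, protein notation) needs a dedicated library;
# this regex only handles the substitution pattern used in this paper.
HGVS_SUB = re.compile(r"^c\.(?P<pos>\d+)(?P<ref>[ACGT])>(?P<alt>[ACGT])$")

def parse_cdna_substitution(variant: str) -> dict:
    m = HGVS_SUB.match(variant.strip())
    if m is None:
        raise ValueError(f"Not a simple cDNA substitution: {variant!r}")
    return {"position": int(m["pos"]), "ref": m["ref"], "alt": m["alt"]}

for v in ["c.442G>A", "c.609G>C", "c.218T>G"]:
    print(v, "->", parse_cdna_substitution(v))
```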
Cerebellar atrophy is frequently present in NESCAV patients, usually manifesting with ataxia. Ophthalmologic assessment may reveal optic nerve atrophy, cortical visual impairment, and nystagmus. Patients may also be diagnosed with peripheral sensorimotor axonal neuropathy [4]. Most identified KIF1A variants associated with NESCAV syndrome were heterozygous and occurred de novo [5][6][7]. The authors present the first group of Polish patients diagnosed with known and novel KIF1A heterozygous mutations and analyze their genotypes and clinical phenotypes in comparison with previous reports. Materials and Methods Patients The authors present the first nine Polish patients with confirmed heterozygous pathogenic and potentially pathogenic KIF1A variants. All the patients were of Caucasian origin. A consanguineous background was absent. Five patients were females, and four were males (female-to-male ratio = 1.25). The age of onset of the disease was set at the appearance of the first symptoms and ranged from 6 weeks to 2 years. The spectrum of clinical features presented by the patients is summarized in Tables 1 and 2. Signed written informed consent for genetic analysis was obtained from the parents of all individuals enrolled in the study. Molecular Study Whole-exome sequencing (WES) was performed in all nine Polish patients to search for the cause of complex neurodevelopmental symptoms. The SureSelectXT Human All Exon kit (Agilent Technologies, Santa Clara, CA, USA) was used according to the manufacturer's instructions. The enriched library was paired-end sequenced (2 × 100 bp) on a HiSeq 1500 (Illumina, San Diego, CA, USA) to a mean depth of 85×. Raw data analysis and variant prioritization were performed as previously described [8,9]. Variants considered causative were validated using DNA samples from the proband and the proband's parents by amplicon deep sequencing (ADS) performed with a Nextera XT Kit (Illumina) and paired-end sequenced (2 × 100 bp) on a HiSeq 1500 (Illumina, San Diego, CA, USA). Table 2 shows the variants of the KIF1A gene (LRG_367; NM_001244008.1) detected in the WES tests. Exome sequencing identified seven different de novo variants and one variant which appeared to be inherited from a mosaic parent (paternal origin) in nine independent patients. All the identified variants were absent from the gnomAD database. The mosaic father of patient 5 was unaffected. In addition, we tested semen collected from the father. Variant c.946C>T was detected, using the ADS method, in patient 5's father's blood at a VAF (variant allele frequency) of 21% (coverage: 8788×) and in semen at 22% (coverage: 2966×), as illustrated in the sketch below. Every variant found in our patients causes a change in one amino acid (missense type). Among the nine patients, one previously described variant was detected in two patients. KIF1A-Related Phenotypes The KIF1A-related recessive disease variants of the motor domain [6,7] may have different consequences on the protein's function than the dominant variants, which could explain why they do not cause disease in heterozygous carriers. Structural modeling suggests that the dominant disease variants affect ATP binding, γ-phosphate release, or microtubule binding. Based on structural analyses, the recessive variants were predicted to disrupt the back door structure or the neck linker between the motor domain and cargo-binding regions, respectively [1,2,14,15].
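The mosaicism assessment above rests on simple variant-allele-frequency arithmetic. The Python sketch below illustrates it with hypothetical read counts chosen to reproduce the reported 21% (blood, coverage 8788×) and 22% (semen, coverage 2966×); the counts themselves are our assumptions, not data from the study.

```python
# Illustrative sketch of the variant-allele-frequency (VAF) arithmetic used
# to assess parental mosaicism; the read counts are hypothetical, chosen so
# the VAFs match the reported 21% (blood, 8788x) and 22% (semen, 2966x).
def vaf(alt_reads: int, total_reads: int) -> float:
    """Variant allele frequency = alternate reads / total reads at the site."""
    if total_reads <= 0:
        raise ValueError("coverage must be positive")
    return alt_reads / total_reads

blood = vaf(alt_reads=1845, total_reads=8788)   # ~0.21
semen = vaf(alt_reads=653, total_reads=2966)    # ~0.22

# A heterozygous constitutional variant is expected near 0.5; values far
# below that in multiple tissues are consistent with parental mosaicism.
for tissue, f in [("blood", blood), ("semen", semen)]:
    print(f"{tissue}: VAF = {f:.2%}", "(suggests mosaicism)" if f < 0.35 else "")
```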
Functional experimental studies suggest that recessive variants impair motor function to a lesser extent than dominant ones [1,2,14,15]. Thus, it is usually challenging to predict the disease progression, even within the same family, and KIF1A-related disorders are best thought of as a spectrum of diseases ranging from mild symptoms to severe, life-threatening complications. The main clinical entities are neurodegeneration and spasticity with or without cerebellar atrophy or cortical visual impairment (NESCAV syndrome; OMIM 614255), formerly mental retardation, autosomal dominant 9 (MRD9), hereditary sensory neuropathy type IIC (HSN2C; MIM 614213), and autosomal recessive and dominant spastic paraplegia 30 (SPG30; OMIM 610357). The last of the KIF1A-related syndromes is spastic paraplegia 30 (SPG30) (OMIM 610357). It has been associated with both autosomal dominant and autosomal recessive transmission patterns. Some patients have also been identified as having de novo heterozygous mutations [10][11][12][13][14][16][17][18][19]. SPG30 is characterized by slowly progressive spastic paraplegia with onset in adolescence or adulthood. Although in some cases SPG30 affects only the locomotor system, with lower limb spasticity and pyramidal signs, other neurological problems may occur [21]. The main, practically omnipresent additional sign in complicated SPG30 is intellectual disability (ID), ranging from severe (more often) to mild. The reported clinical spectrum covers microcephaly, epilepsy, optic atrophy, blindness of central origin, ataxia, axonal neuropathy, axial hypotonia, athetosis, dystonia, brain MRI abnormalities (cerebral and/or cerebellar atrophy, hypogenesis/thinning of the corpus callosum, white matter lesions), HSP complicated by cerebellar atrophy, intellectual disability and/or axonal neuropathy, and severe neonatal presentation with progressive encephalopathy with brain atrophy [6,9,21,22]. Recently, Montenegro-Garreaud reported evidence supporting the association of hip subluxation, dystonia, and gelastic epilepsy with KIF1A dysfunction [23]. Klebe et al. reported frequent sphincter disturbances, mild ataxia, and sensory deficits in SPG30 patients [3], whereas other studies indicate that cognition is usually normal; still others show the presence of mild intellectual disability and learning difficulties [17,18]. Pure SPG30 resembles SPG3 or SPG4, with a slow course and an age of onset between one year and seventy years. Most of the described patients seem to be diagnosed with complicated HSP. With the clinical data of the Polish cohort presented here, we can support the importance of KIF1A variants in the development of spastic paraplegia. KIF1A mutations may also result in hereditary sensory neuropathy type IIC (HSN2C) (MIM #614213) [4,14,15]. Given the mentioned KIF1A function, the protein may play a critical role in the development of axonal neuropathies resulting from impaired axonal transport. HSN2C was described in 2011 by Rivière et al. as a progressive distal sensory loss leading to ulceration and amputation of fingers and toes [4]. Position and vibration senses were impaired the most, with accompanying distal motor deterioration. HSN2C is inherited in an autosomal recessive pattern, and its first symptoms present in the first decade [14,21,22].
Molecular Characteristics and Clinical Correlation of Polish Patients Analyzing the clinical symptoms of our patients, we noticed that the phenotypic variation in KIF1A mutations is much broader than previously described. We had difficulty classifying them among individual phenotypes due to overlapping clinical pictures and dynamically changing phenotypes. Our patients presented mainly a severe phenotype. The onset of symptoms was observed in early infancy. Six of them were classified as NESCAVS. Older patients had abnormal MRI with brain atrophy. The overlapping clinical picture between KIF1A-related disorders and Rett-like syndrome (psychomotor retardation/arrest, abnormal breathing pattern, stereotyped hand movements) may be due to the common target, the neurotrophin brain-derived neurotrophic factor (BDNF) [9] (Tables 3-5). KIF1A-related disorders probably remain underdiagnosed because of their predominant association with HSP and the less well-known coincidence of a multisystem, progressive course with upper motor neuron dysfunction, extrapyramidal signs, and neuropathy [24][25][26][27][28][29][30][31]. Nicita et al. performed genotype-phenotype correlations in 19 patients aged 3-65 years, including 14 children [14]. The patients were divided into 2 groups: group 1 with a complex phenotype (dominant pyramidal signs plus additional features: epilepsy, ataxia, peripheral neuropathy, and optic nerve atrophy), and group 2 with an early-onset or congenital ataxic phenotype. In our group, all patients presented at the beginning with psychomotor retardation [14]. In contrast to the group described by Nicita et al., most of our patients had normal MRI findings (7/9). Considering the age of the children, we can speculate that progression to cerebellar and cerebral atrophy may appear with time. According to the literature data, a pure cerebellar ataxia phenotype has been reported very rarely. Epilepsy was diagnosed in four patients. The semiology of seizures varied, resulting in differences in the antiepileptic drugs (AEDs) used. According to the observation of our patients with KIF1A mutations, epilepsy may present in age-dependent stages. In the onset phase, epilepsy manifests with very high activity, sometimes before the development of other recognizable clinical features. With age, as in many genetic syndromes, there is a tendency for seizures to decrease, up to cessation. The variants detected in the KIF1A gene in our group of patients are consistent with an autosomal dominant mode of inheritance. Genotype-phenotype correlation is difficult in this case, as previously reported; dominant forms of KIF1A-related disorders are characterized by a more complex phenotype, a wider spectrum of symptoms, higher severity, and earlier age of onset [14]. Tables 3-6 compare the phenotypes of our patients with those previously reported. It is worth highlighting how variable the symptoms are. Interestingly, contrary to the literature, ataxia was not observed at the last assessment of our patients [24,25]. In one individual (patient 1), cerebellar atrophy was noted in neuroimaging, and he presented with dystonia. It is very important to note that the diagnosis in our patients may change with age as symptoms of HSP increase, and the clinical picture may come to correspond more to HSP with neuropathy than to NESCAV. Therefore, it is important to differentiate between clinical forms of KIF1A-related disorders.
Moreover, further detailed clinical and genetic characterization of patients should be performed with broad international cooperation. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical reasons. Conflicts of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Representative sample survey on factors determining the Czech physicians' awareness of generic drugs and substitution Background Generic drugs and generic substitution belong to the tools by which healthcare costs may be reduced. However, low awareness of and reluctance towards generic drugs among healthcare professionals may negatively affect the rational use of generic substitution. Methods The study aimed to analyze opinions and attitudes towards generic drugs and generic substitution among Czech physicians, including their understanding of generic substitution legislative rules and the physicians' previous experience in this field. Using random allocation, 1551 physicians practicing in the Czech Republic were asked to participate in a representative sociological survey conducted from November to December 2016 through face-to-face structured interviews comprising 19 items. Factor analysis, as well as reliability analysis of items focused on legal rules in the context of physicians' awareness, were applied, with a p-value of < 0.05 considered statistically significant. Results Of a total of 1237 (79.8%) physicians (43.7% males; mean age 47.5 ± 11.6 years; 46.3% general practitioners), 24.8% considered generic drugs to be less safe, especially those with specialized qualification (p < 0.01). However, only 4.4% of the physicians noticed any drug-related problems, including adverse drug reactions, associated with generic substitution. The majority of physicians felt neutral about performing generic substitution in pharmacies, nor did they express any opinion on the characteristics of generics, even though a better understanding of the legislation and a higher need for accordance of substituted drugs were associated with more positive attitudes towards generic substitution (p < 0.05). Physicians showed a low knowledge score for legislative rules (mean 3.9 ± 1.6 out of a maximum of 9); nevertheless, they overestimated the law, as they considered some rules valid even though the law does not require them. The Cronbach alpha of all legislative rules that regulate generic substitution increased from 0.318 to 0.553 when two optional rules (physician consent and strength equivalence) were taken into account. Conclusions There is insufficient awareness of generic drugs and generic substitution related issues among Czech physicians, although a deeper knowledge of the legislation improves their perception of providing generic substitution. Background Generic drugs are off-patent products containing the same active substances as the previously approved brand name drug, with the same bioequivalence, the same dosage form, the same route of administration, and the same therapeutic characteristics [1]. According to the European Medicines Agency, bioequivalence requires the 90% confidence interval for the ratio of pharmacokinetic parameters (maximum plasma concentration, Cmax, and area under the curve, AUC) to fall within 80-125% (an illustrative check of this rule is sketched below). Two drugs are assumed to be therapeutically equivalent if they are claimed bioequivalent [2,3]. One of the reasons for introducing generics to the market was to reduce healthcare costs [4,5], not only because the development of generic drugs is cheaper, but also because pharmaceutical companies are competing for their market place [6]. This creates room for pharmacists, for example, to substitute brand name drugs with less expensive generic alternatives.
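The 80-125% criterion quoted above reduces to an interval-inclusion check on the 90% confidence interval of the geometric mean ratio. The following Python sketch illustrates the decision rule with hypothetical numbers; a real analysis would derive the confidence interval from log-transformed crossover data rather than starting from a ready-made interval.

```python
# Sketch of the average-bioequivalence decision rule described above:
# the 90% confidence interval for the geometric mean ratio (GMR) of a
# PK parameter (Cmax or AUC) must lie entirely within 80-125%.
# The intervals below are hypothetical examples.
def within_be_limits(ci_low: float, ci_high: float,
                     lo: float = 0.80, hi: float = 1.25) -> bool:
    """True if the whole 90% CI of the GMR sits inside [lo, hi]."""
    return lo <= ci_low and ci_high <= hi

# Hypothetical example: GMR with 90% CI (0.89, 1.06) for AUC.
print(within_be_limits(0.89, 1.06))   # True  -> bioequivalent on this metric
print(within_be_limits(0.78, 1.02))   # False -> lower bound breaches 80%
```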
Moreover, since the introduction of generic drugs brought better access to medicines and healthcare for patients in general, cost savings could be redirected to the area of rare or costly diseases [7,8]. Generic drugs have been available in both acute and chronic disease therapies for many years. In 2013, the proportion of prescriptions filled with generic drugs ranged from 17% in Switzerland to 83% in the United Kingdom [9]. In the same year, generic drugs accounted for almost 40% of prescriptions in the Czech Republic (CR), and an increasing or sustained trend in prescribing generic drugs is apparent in other countries as well [10]. Generally, despite the increased acceptance of generics, some mistrust and lack of confidence among stakeholders still prevail [11]. Previously published literature has shown that negative opinions about generic drugs in terms of effectiveness, quality, or safety are apparent both among health professionals (physicians and pharmacists) [3,12,13] and among the lay public [14] across different geographical areas. There are several studies assessing patients' attitudes; however, the opinions of prescribing physicians are also crucial, since they often shape patients' behavior. Managing mutual partnerships between healthcare providers and patients can therefore decisively influence the use of generic drugs and generic substitution [11]. According to a recent systematic review, physician-related factors belong to the seven domains that play a significant role in the implementation and sustainability of generic substitution in healthcare. Consequently, physicians' knowledge is essential in establishing future policies, education, and interventions supporting accurate generic drug use in clinical practice [15]. In the CR, generic substitution was legalized in December 2007 [16]. Pharmacists may substitute the prescribed drug with its alternative upon the patient's consent, after considering all possible substitution-related risks, especially in order to reduce the patient's financial burden. In short, drugs in the CR are regulated by price (maximum price) and reimbursement. The maximum price is determined based on external and internal price referencing. The external approach uses the average of the three lowest prices in the reference countries. If not applicable, the maximum ex-factory price is determined based on the price of the closest therapeutically comparable drug in the CR or in the reference basket countries. Reimbursement is always determined identically for all interchangeable drugs listed in the same reference group. These drugs have similar efficacy, safety, and position in clinical practice. The reimbursement level is set according to the lowest price of a drug within the reference group in the European Union, and all drugs within the same reference group have the same reimbursement for the usual daily therapeutic dose. Currently, the price and reimbursement of the first generic drug have to be at least 40% lower than the price and reimbursement of the reference drug in the CR; in the case of subsequent generic drugs, only the price is decreased [17,18] (a numerical sketch of these two rules is given below). The first studies in the groups of Czech general practitioners (GPs) and pharmacists, conducted a year after the legislative change of 2007, reflected distrust on account of low awareness of the principles and results of bioequivalence, inadequate knowledge of the legislation, and negative personal experience [3,19].
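For concreteness, the two pricing rules just described can be illustrated numerically. The Python sketch below uses hypothetical basket prices; the figures are our assumptions, not data from the regulatory framework itself.

```python
# Sketch of the two pricing rules described above, with hypothetical figures.
# External referencing: maximum ex-factory price = average of the three
# lowest prices found in the reference-country basket. First generic:
# price (and reimbursement) at least 40% below the reference drug.
def external_reference_price(basket_prices: list[float]) -> float:
    lowest_three = sorted(basket_prices)[:3]
    return sum(lowest_three) / len(lowest_three)

def first_generic_price_cap(reference_price: float, cut: float = 0.40) -> float:
    return reference_price * (1.0 - cut)

brand_prices_eur = [14.2, 11.8, 12.5, 10.9, 13.7]   # hypothetical basket
max_price = external_reference_price(brand_prices_eur)
print(f"maximum brand price:    {max_price:.2f} EUR")
print(f"first-generic price cap: {first_generic_price_cap(max_price):.2f} EUR")
```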
Nevertheless, pharmacists apparently perceived generic drug aspects more appropriately [20]. The current study follows up on a previous survey (conducted in 2008-2009) of GPs' views and attitudes towards generic substitution. The aim was to analyze opinions and attitudes towards generic drugs and generic substitution among a representative sample of Czech physicians, including their understanding of the legislative requirements for providing generic substitution, and to inquire into the physicians' experience in this field, almost a decade after the introduction of generic substitution in the CR. Participants and setting The sociological cross-sectional survey was conducted during November and December 2016 through face-to-face structured interviews. Such surveys have been performed regularly since 1995; in them, trained interviewers ask both lay people and health professionals about different healthcare issues in the CR, including their opinions on, and level of awareness of, the development of health services, prevention, and therapeutic strategies. In our survey, Czech physicians' opinions and attitudes towards generic drugs and generic substitution were explored by structured questions. The local Ethical Committee determined that the current study did not need formal ethical approval, in conformity with national regulations. The study was conducted according to the principles stipulated by the Declaration of Helsinki and the ICC/ESOMAR International Code on Market, Opinion and Social Research and Data Analytics [21]. In this national sample survey, equal representation of gender, age characteristics, and regional distribution of clinical practice was ensured by the random allocation of the 1551 physicians practicing in the CR who were asked to participate in the study. Participation was anonymous and voluntary, and participants were fully aware of the purpose, nature, and potential benefits or risks of the study. The sample comprised GPs for adults, GPs for children and adolescents, and other medical specialists, excluding dentists. Parameters for addressing the selected participants came from the database of the Institute of Health Information and Statistics of the Ministry of Health [22]. In addition to data on socio-demographic characteristics (3 items) and medical specialties (3 items), the survey (Additional file 1) focused on opinions on 10 statements related to brand name drugs, generic drugs, and generic substitution. Responses were recorded on a five-point Likert scale (from strongly agree to strongly disagree). Further, previous experience with drug-related problems of generic drugs and with providing generic substitution to patients (1 item), understanding of the 9 legal rules for generic substitution in the CR (1 multiple-choice item), and attitudes towards performing generic substitution in pharmacies (1 item on a Likert scale from positive to negative) were also solicited. The questionnaire was developed at the Department of Social and Clinical Pharmacy, Faculty of Pharmacy in Hradec Kralove, Charles University, according to published material explained in detail elsewhere [19]. A pilot test was performed with 156 responding physicians to verify the understanding of the questions and the comprehensibility of the survey. Finally, the understanding of legal rules and the attitudes towards generic substitution of GPs for adult patients were compared with previously published results [19].
Statistical analysis For the characteristics of the tested cohort, descriptive statistics were expressed as absolute and relative frequencies; metric items were given as the median with lower (25%) and upper (75%) quartiles (IQR), or as mean ± standard deviation (SD). The Pearson chi-square test was used for association analysis in SPSS, version 20.0. Reliability analysis (Cronbach alpha) of the items focused on legal rules, as well as factor analysis for the reduction of the legal rules to independent factors, were also applied [23,24]. Kendall tau (τ) correlation and the Kruskal-Wallis test, as appropriate for the analysis of attitudes, as well as plot generation, were performed using Wolfram Mathematica, version 11.2 (Wolfram Research Inc.). In addition, a t-test was used for the comparison between the current and previously published results for the GP cohort. A p-value of < 0.05 was considered statistically significant. In addition to the statistical significance value, the corresponding effect size was calculated based on Cohen's convention (small-medium-large) [25]. Characteristics of participants A total of 1237 (79.8%) respondents agreed to participate, of whom 540 (43.7%) were male; the mean age was 47.5 ± 11.6 years (median 48; IQR: 38-58), and 255 (20.6%) respondents reported the capital city, Prague, as their place of practice. Gender, age categories, and locations of clinical practice were distributed with deviations of 0.1%, 0.3%, and 0.1% from the general sample in the CR, respectively. Physicians who refused to be involved in the survey (304; 20.2%) mainly reported lack of time (61.2%), no interest (22.4%), or distrust in any such research (6.9%). Opinions on brand name drugs, generic drugs and generic substitution Physicians' opinions on statements related to generic and brand name drugs as well as generic substitution are summarized in Table 1. Concerning the therapeutic equivalence between generic and brand name drugs, opinions among the respondents were mostly positive (749; 60.6%), more frequently so among GPs for children and adolescents (p < 0.001). Positive opinions also prevailed, especially in the male sample (p < 0.05), on the view that generic drugs are therapeutically equivalent to each other (697; 56.3%). A slightly lower agreement was reported in terms of bioequivalence. Generic drugs were considered bioequivalent to brand name drugs mostly by physicians in ambulatory care, more often than by physicians working in inpatient settings (p < 0.05). Interestingly, about one third of respondents (364; 29.4%) expressed no opinion at all on bioequivalence, and almost one quarter (276; 22.3%) did not even know whether the results of bioequivalence studies could be useful for their responsible decision-making. Generic drugs were regarded as equal to brand name drugs in terms of quality, effectiveness, and incidence of adverse drug reactions (ADRs) by most of the physicians. However, a relatively large proportion of respondents were not able to express any opinion. Concerning ADRs, no opinion was reported particularly among physicians preparing for specialized qualification, as opposed to physicians who had passed the qualification, who considered generic drugs to be less safe (p < 0.01). On the other hand, 1183 (95.6%) of the respondents had noticed no ADRs or other drug-related problems associated with generic substitution in their patients in the previous 3 months.
Where any negative reactions were reported at all, allergic reactions, ineffectiveness, and pronounced ADRs of generic drugs were mentioned most commonly, particularly by physicians aged 40-59 years (p < 0.05). Understanding the legislation for providing generic substitution in pharmacies Apart from the items based on incorrect answers (the false assumptions that physician's consent and strength equivalence are required), all 7 legal requirements for providing generic substitution in pharmacies must be respected, according to the Czech legislation in force (Table 2). Each correct response scored one point (out of a maximum of nine points), and the respondents gave correct answers to 3.9 ± 1.6 questions on average (median 4; IQR: 3-5). Respondents quite often marked both physician's consent and same strength as correct answers; these were among the 4 most frequent responses. None of the respondents reported correctly all the legal rules for generic substitution. Based on the respondents' answers, factor analysis identified two factors in all 9 items focused on the legal requirements for generic substitution (Fig. 1). The correlation of both factors with individual items showed that the physicians' interpretation of the legislative requirements is stricter than the actual legislation. This means that physicians overestimated the law, as they considered some rules valid even though the law does not require them. The first factor explained 24.5% of the questionnaire variability and mostly correlated with items concerning the required accordance in generic substitution, i.e. how closely the substituted drugs must be equivalent to each other ("factor of accordance"). For the item targeting the rule of strength equivalence, the factor of accordance correlated positively with the incorrect answer and negatively with the correct answer. Respondents with the maximum score in the factor of accordance answered this question (incorrectly) positively, thus showing stricter compliance with the rules of providing generic substitution than required by the law. The second factor explained 14.7% of the questionnaire variability and correlated mostly with questions focusing on the necessary consensus among the persons involved in generic substitution ("factor of consensus"). Indeed, for the item targeting physician's consent, the factor of consensus correlated negatively with the correct answer. Respondents with the maximum score on this factor considered, beyond the law, that generic substitution requires the physician's consent. The interpretation presented above was supported by reliability analysis (a computational sketch of these statistics follows below). The Cronbach alpha of all 9 questions was 0.318 when all correct and incorrect answers were analyzed. Both the rule of strength equivalence and that of physician's consent showed an increase in Cronbach alpha if those items were deleted (0.528 and 0.336, respectively). The rule of strength equivalence showed a significantly negative item-total correlation (−0.393), whereas that of physician's consent was close to zero (0.038). However, if the attitudes towards these two items were taken into account (i.e., how respondents actually answered), Cronbach alpha increased to 0.553. Overall, it can be said that the better the knowledge the respondents showed, the lower their factor of consensus. There were no statistically significant associations between the knowledge of legal rules, including both factors, and sociodemographic or medical specialty characteristics. The median of the knowledge score among different medical specialties was 4.
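The reliability statistics quoted above (overall alpha, alpha-if-item-deleted, item-total correlation) can be computed as in the following Python sketch on simulated binary answers. The data are randomly generated, so the printed values will not match the study's; the sketch only illustrates the computations.

```python
import numpy as np

# Sketch of the reliability statistics reported above: Cronbach's alpha,
# alpha-if-item-deleted, and corrected item-total correlation for a
# binary-scored 9-item knowledge questionnaire. The data are simulated.
def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
# 1237 simulated respondents; ~43% correct per item (mean score 3.9/9).
X = (rng.random((1237, 9)) < 0.43).astype(float)

print(f"alpha (all 9 items): {cronbach_alpha(X):.3f}")
for j in range(X.shape[1]):
    rest = np.delete(X, j, axis=1)
    r_it = np.corrcoef(X[:, j], rest.sum(axis=1))[0, 1]  # corrected item-total
    print(f"item {j+1}: alpha if deleted = {cronbach_alpha(rest):.3f}, "
          f"item-total r = {r_it:.3f}")
```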
Attitudes towards performing generic substitution in pharmacies The largest group of physicians (497; 40.2%) felt neutral about performing generic substitution in pharmacies, while positive and negative attitudes were reported by 434 (35.1%) and 306 (24.7%) respondents, respectively. Generic substitution was viewed very negatively by 88 (7.1%) physicians. Physicians with completed specialized qualification expressed negative attitudes more often (p < 0.01), albeit with a small effect size. A significant correlation between understanding the legislation for generic substitution and attitudes towards generic substitution was revealed (tau = 0.06; p < 0.05): the better the understanding of the legislation, the more positive the attitude towards generic substitution (Fig. 2). Respondents with a higher knowledge score and a higher factor of accordance perceived generic substitution more positively (p < 0.05). On the other hand, a higher factor of consensus correlated negatively, though non-significantly, with attitudes towards providing generic substitution at the pharmacy. Comparison of the knowledge and attitudes between the GP cohorts The knowledge of legislation in the group of GPs for adults was lower than in the previous study [19], with a mean score of 3.9 versus 4.7 (η² = 0.172; p < 0.001, large effect size). Similarly, both groups significantly differed in their attitudes towards performing generic substitution in pharmacies (η² = 0.172; p < 0.001, large effect size). Positive and rather positive attitudes (7.7% and 28.7%) were reported by GPs in the current study, compared to the previous one (5.3% and 16.0%, respectively). Neutral, rather negative, or negative attitudes were currently reported by 42.0%, 12.5%, and 9.1% of GPs, compared to 19.4%, 36.1%, and 23.2% reported previously. Discussion The findings of our study suggest that Czech physicians have rather positive attitudes towards generic drugs and generic substitution, considering generic drugs therapeutically equivalent and bioequivalent to brand name drugs, as well as generic drugs equivalent to each other. Compared to the previous survey of GPs [19], opinions on therapeutic equivalence and bioequivalence were more favorable; this improvement may be attributed, among other factors, to growing experience with generic substitution in the CR, more precise knowledge of the principles of bioequivalence studies, or greater interest in this issue. In recent years, for example, relatively extensive relevant studies have been published that have not refuted the therapeutic equivalence of brand name and generic cardiovascular drugs [26]. Since the opinions on the safety assurance and quality guarantees for generic drugs compared to their original counterparts were also very positive, higher confidence in the approval processes for medicinal products on the Czech market can be expected. Similarly positive opinions have been reported in studies conducted in developed countries where transparent, clear, and effective regulatory rules for generic substitution had been set up [27,28]. On the other hand, healthcare professionals from countries with a less mature healthcare system appeared to be more concerned about the manufacturing sources of generics [20,29,30].
Among the associations with socio-demographic characteristics, the positive opinions on bioequivalence among ambulatory physicians stand out; these physicians presumably prescribe drugs with greater latitude than physicians in the inpatient setting. The restrictions imposed by hospital positive lists and the influence of pharmaceutical companies may also lead to a lower willingness to prescribe generics [27]. However, associations of individual results with sociodemographic characteristics should generally be considered with caution and confirmed in future research. Such results differ and are occasionally contradictory in the literature. Therefore, it is not possible to summarize the influence of age, length of practice, or specialization on physicians' attitudes towards generic substitution. The results can be associated with the diversity of national healthcare policies, including various strategies of regulatory authorities, physicians' knowledge, and the general development of the given country, as well as with changes that may occur over time [11,28]. It is alarming that a relatively large group of respondents did not express any opinion, either on drug equivalence or on quality and safety guarantees. This fact, together with a lack of confidence in generic substitution, can prevent generic drugs and generic substitution from being fully adopted by physicians [11]. With regard to ADRs, no opinion was expressed by younger practitioners in specialized training, which may be explained by their low awareness of, or inexperience with, providing generic substitution in clinical practice [15]. Among older physicians, skepticism was more prevalent, as they viewed generics as less safe. However, the majority of respondents had not recently witnessed any ADRs or drug-related problems. Where they had, allergic reactions were predominant, which is in line with previous research results [19]. Some drugs (e.g. antiepileptics) have been flagged as increasing the risk of ADRs or lack of efficacy during generic substitution in particular patients [31]; however, data from large randomized controlled trials are either missing, or the findings of observational studies do not confirm these conclusions. Regardless, substitution cannot be performed for all patients and all drugs [32]. It is also possible that our respondents did not have frequent contact with these groups of drugs or patients, or that they failed to identify any ADRs. More than half of the respondents reported that the primary principle of generic substitution is reducing healthcare costs for patients, which may be debatable. As pharmaceutical companies compete for their position on the market, prices (including the patient co-payment) decrease rapidly after the original patent expires. Thereafter, generic drug prices may remain more or less equal, while brand name drug prices tend to move closer to the generic price [5]. In some countries, the reimbursement of generic and brand name drugs is the same, and financial relief for the patient is therefore not a motivating priority for generic substitution [33]. Consequently, the physician's choice of drug to prescribe may be affected by the specific brand to which the physician wishes to be loyal; marketing efforts by the pharmaceutical industry can also play a role [28]. The consumer (i.e., the patient) may not even see the difference in price and may choose the drug according to their own preferences.
It is therefore necessary to underline the importance of cooperation between patients and physicians in choosing a suitable therapy, as well as of the relationship with pharmacists, who should consider all the risks of the medication (e.g., contraindications, ADRs, drug interactions) and the patient's characteristics, in particular if any change has occurred in the pharmacotherapy. Inappropriate generic substitution may lead to the patient's medication non-adherence and early discontinuation of therapy, and thereby to a loss of confidence in healthcare professionals [34]. Conversely, a sufficient understanding of the treatment plan results in patients taking greater care of their own health, and hence in better adherence [35,36]. Moreover, regulatory authorities and professional societies provide healthcare professionals with guidelines or lists of drugs unsuitable for generic substitution, especially drugs with a narrow therapeutic window [20]. Compliance with these principles is important for safe medication practices; similarly, higher awareness of generic substitution among healthcare professionals helps it to be perceived as a positive drug policy tool and never as a possible means of harming the patient [37]. Controversies or negative attitudes often result from poor knowledge of the principles of generic substitution. As mentioned above, the understanding of therapeutic equivalence and bioequivalence seems to have improved among Czech physicians. Knowledge of the legal rules for generic substitution, however, remains inconsistent. None of the participants responded correctly to all the questions on knowledge of the legislation; moreover, they assumed stricter rules for generic substitution in the pharmacy (physician's consent and same strength) than actually apply, since both of those rules can be circumvented: generic substitution can be avoided by indicating "dispense as written" on the prescription, and a different strength can be dispensed with a modified dosing, respectively. Similar patterns can be seen in other studies. For example, up to 65% of physicians in Malaysia or Pakistan refused to give pharmacists the liberty to change a brand name drug for a generic medicine [29,38]. In the CR, therefore, patients have the right to decide for themselves whether the proposed generic substitution suits them, provided that they understand the principles of generic substitution as explained by the pharmacist. According to the published literature, pharmacists have shown deeper knowledge and more positive attitudes towards generic substitution than prescribing physicians, which may be due to their education being focused more on drugs, including generic substitution [3,20]. Pharmacists performing generic substitution in the CR play an important role in the partnership with patients, aiming both to reveal the risk factors of generic substitution for the patient and to identify the drugs and indications unsuitable for generic substitution. Interestingly, whereas the knowledge scores of GPs decreased compared to the previous survey [19], attitudes towards providing generic substitution in pharmacies improved. The lower knowledge can be attributed to the selection of respondents in the previous study (conference participants, i.e., probably better-educated participants); on the other hand, the improvement in attitudes can be explained by growing experience with generic substitution, as well as by improved interdisciplinary cooperation.
In a study by Lewek [39], the provision of information on generic substitution by pharmacists to patients was demonstrated to have a major impact on the perceptions of physicians.

Strengths and limitations

This study provides the results of a representative sample survey on physicians' attitudes towards generic drugs and generic substitution in the CR with a high response rate, which is rather unique, especially compared to similar studies published so far. We consider our study novel, as it reflects the current perception of generic drugs and generic substitution by prescribing physicians. Even though this was a questionnaire survey, the questions were asked face-to-face, the respondents were not remunerated, the scope of the questionnaire was not too large, and the clarity of the questions had been piloted and published beforehand [19]. On the other hand, it is still a cross-sectional study, which does not evaluate the situation over time and did not aim to identify the variable factors affecting physicians' attitudes. The latter could be facilitated in the future by qualitative research with open-ended questions, to understand the issues concerning generic drugs and generic substitution in more detail, although this may yield a smaller group of respondents [12,40].

Conclusion

The study showed insufficient awareness of generic drugs and generic substitution among Czech physicians. Higher knowledge of the Czech legislation seemed to improve their perception of providing generic substitution in pharmacies; however, physicians frequently overestimated the legislative requirements. Attitudes towards generic substitution are slightly more positive compared to the opinions observed at the time generic substitution was introduced onto the market in the CR; however, a quite large proportion of physicians were unable to express any opinion in terms of quality, effectiveness, or regulatory standards. The majority of physicians have experienced no drug-related problems; still, a better understanding of generic substitution by physicians can contribute to higher patient safety during pharmacotherapy.

Additional file 1. Questionnaire survey. Specific items included in the questionnaire concerning statements related to brand name drugs, generic drugs and generic substitution, previous experience with drug-related problems of generic drugs and generic substitution, understanding of legal rules for generic substitution in the Czech Republic, and attitudes towards performing generic substitution in pharmacies.
2019-10-31T19:07:37.225Z
2019-10-30T00:00:00.000
{ "year": 2019, "sha1": "7bb83948e6b4eb8ba48f71c856d644ee4e5cc244", "oa_license": "CCBY", "oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/s12913-019-4631-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "54906df617f1ee85807eebaac37ff6f166ddfde6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
56473389
pes2o/s2orc
v3-fos-license
An approach to emotion recognition in single-channel EEG signals: a mother-child interaction

In this work, we perform a first approach to emotion recognition from single-channel EEG signals extracted in a four (4) mother-child dyad experiment in developmental psychology. The single-channel EEG signals are analyzed and processed using several window sizes, performing a statistical analysis over features in the time and frequency domains. Finally, a neural network obtained an average classification accuracy of 99% for two emotional states, happiness and sadness.

Introduction

An exact definition of emotion is difficult to establish [1], [2], but for convenience we consider emotions as neuronal responses to external stimuli. The study of these responses is very important because it could help to build a psychological profile of a person. Such a profile can be useful for diagnosing possible psychopathologies, or can support prevention strategies with proper treatment by evaluating the individual's behavior in social or other specific situations in the environment. Different approaches have been reported for the determination of an emotional state, such as facial expression recognition, speech analysis, and the interpretation of body gestures [3][4][5][6]. A different approach to the detection of emotions is the analysis of electroencephalographic (EEG) signals, which measure the interaction between neurons while performing any action or task, in our case "creating emotions". The importance of emotion detection using these signals is that they are directly related to brain activity and can be mapped to several areas of the brain cortex, which allows the emotional state to be determined or validated physiologically. Moreover, it is not necessary to perform a phonetic activity or a gestural expression, which is quite useful for lie detection or for detecting emotions in infants. This study seeks to determine when the emotional states of happiness and sadness appear in a mother-child dyad through the analysis of descriptive characteristics in the time and frequency domains, after conducting an experiment to evoke emotions, using a single EEG channel located in the x position of the standard 10-20 configuration. The purpose of this work is to validate the psychological results addressed in [7][8][9] by a triangulation procedure with experts, to extrapolate methodological results previously obtained in emotion recognition from audio signals, and finally to establish a procedure for coping with the limited information in single-channel EEG signals in order to find the relevant features of the signals before proceeding to a multi-channel procedure. It is necessary to define what a dyad is: Kenny defines the dyad as "the fundamental unit of interpersonal interaction and interpersonal relations" [10], that is, a group of two persons, in this case mother and child. The importance of this dyad lies in the high impact of the mother's behavior on the child's social and emotional development [11]. The study of these dyadic interactions is important because it can help to create a complete picture of the mental development of infants. In this case, the mother-child dyad was analyzed to evaluate the mother-child interaction in a developmental psychology study. The study evaluated the mother's capability to induce emotions in her child without verbal communication.
It also addressed the detection of emotions evoked in the mother and their consequences for the evocation of emotions in her child [7][8][9]. Therefore, this research seeks a classification system to estimate the emotional states of sadness and happiness in this dyad, to validate the previous results, and to develop an analysis method for single-channel EEG signals. The raw EEG signal usually has an amplitude on the order of µV and contains frequency components of up to 300 Hz [12]. Several studies report optimal ranges for the detection of emotions in different frequency bands of the EEG signal. These bands are classified as: the Delta band (0.5 to 4 Hz), Theta band (4 to 8 Hz), Alpha band (8 to 13 Hz), Beta band (13 to 30 Hz), and Gamma band (30 to 70 Hz) [12], [13]. Detecting emotions from EEG signals has been an active field of study in recent years [14][22]. Petrantonakis used the Delta, Theta, Alpha, and Beta bands with an HOC analysis to determine six emotions: Happiness, Surprise, Anger, Fear, Disgust, and Sadness, with a minimum classification rate of 75% for Fear and a maximum of 99.373% for Happiness [14]. Lee used the Theta, Alpha, Beta, and Gamma bands to demonstrate different patterns for different emotional states with different combinations of these bands, performing a complete analysis of the correlation, coherence, and phase synchronization index of each band in different emotional states [15]. Varun performed a more conservative analysis with respect to the frequency range, taking all the previously mentioned frequency bands in a range of 0.5 Hz to 100 Hz, and implemented a classification between happiness, neutral, sadness, and fear based on the multiwavelet transform, with a mean accuracy of 80% [16]. Murugappan proposed a Surface Laplacian filter to remove noise and artifacts and used an extraction method based on multi-resolution analysis with the Wavelet Transform to discriminate between Disgust, Happiness, Surprise, Fear, and Neutral with different classifiers such as LDA and KNN, comparing EEG channel counts from 62 down to 8, with a mean accuracy of 85% [17]. Finally, Li proposed the classification of two emotions, sadness and happiness, using Common Spatial Patterns on the Gamma band, with a mean accuracy of 93.5% [18]. This paper relates a study carried out on a small cohort of mother-child "dyads" in order to detect a binary emotion of either happiness or sadness. The work revolves around standard statistical measures applied to a single-channel EEG signal obtained from the subjects, adult and child alike. Different window sizes were used to perform the statistical analyses, and these measures were then used as features in an MLP neural network to classify outcomes into either happiness or sadness. The classifier gave a 99% success rate. In this work, we propose an approach to a methodology for emotion recognition in EEG signals from single-channel information. It presents an easily understandable solution with preprocessing of the signal, windowing, classical time- and frequency-domain feature extraction, and finally a neural network classifier [23][24]. No specific EEG frequency bands were taken into account, and for simplicity, statistical measurements were performed to extract the relevant information from the features. The signals were sampled not only from adults but also from children, and the emotional states were confirmed by human experts. This work is organized as follows: Section 2 presents the methodology, the preprocessing, the feature extraction, and the proposed classifier.
Section 3 presents the results obtained, and finally the conclusions and future work are addressed.

Methodology

The proposed methodology looks for the extraction of simple features from the single-channel signal. This first approach to the analysis of the information relies on classical features extracted from EEG signals as descriptors of physical phenomena such as energy and frequency components. More sophisticated features will be considered in future work. The relevance of the proposed methodology is based on the use of time- and frequency-domain variables for feature extraction (typically used for voice features [25]) instead of the traditional features based on the frequency bands used in EEG processing. Moreover, the choice of these features, related to the physical phenomena, allows the classification task to be performed with a high accuracy rate. The classifiers were trained by separating the complete dataset into three subsets: training (60%), test (20%), and a final validation set (20%). Several MLP architectures were tested using the features provided by the different sets obtained from the three window sizes at the sampling stage, as detailed below. This section presents the basic scheme for filtering and describes the descriptive features in a simple way.

Pre-processing

The signal filtering process was implemented with three IIR filters: a 10th-order Butterworth high-pass filter at 0.5 Hz, a 10th-order Butterworth low-pass filter at 70 Hz, and a 10th-order Butterworth notch filter at 60 Hz. To perform the analysis of the signal, a windowing process is implemented with a Hamming window, which satisfies the following equation:

$w(n) = 0.54 - 0.46 \cos\!\left(\frac{2\pi n}{L-1}\right), \quad 0 \le n \le L-1,$

where L is the length of the window. An overlap of 50% was used, and three window sizes (100 ms, 40 ms, and 20 ms) were tried in order to extract features, as detailed below.

Descriptive Characteristics

To train a classifier system, it was decided to find descriptive characteristics of the EEG signal in both a temporal and a frequency approach. Given the random behavior of the signal [12], some type of statistical analysis must be performed [26]. In the definitions below, N is the number of samples, in this case the length of the window, and $x_i$ is the input signal. The employed features included:

RMS. The root mean square of the signal: $\mathrm{RMS} = \sqrt{\tfrac{1}{N}\sum_{i=1}^{N} x_i^2}$.

Mean. The mean of the signal: $\bar{x} = \tfrac{1}{N}\sum_{i=1}^{N} x_i$.

ZC. The number of zero crossings of the signal: $\mathrm{ZC} = \sum_{i=1}^{N-1} Y_i$, where $Y_i = 1$ if $x_i x_{i+1} < 0$ and $Y_i = 0$ otherwise.

MDF. The median frequency of the signal, i.e., the frequency that splits the power spectrum P into two halves of equal power: $\sum_{j=1}^{\mathrm{MDF}} P_j = \sum_{j=\mathrm{MDF}}^{M} P_j = \tfrac{1}{2}\sum_{j=1}^{M} P_j$.

MNF. The mean frequency of the signal: $\mathrm{MNF} = \sum_{j} f_j P_j \big/ \sum_{j} P_j$, where P is the power spectrum and f is the frequency.

WL. The waveform length of the signal: $\mathrm{WL} = \sum_{i=1}^{N-1} |x_{i+1} - x_i|$.

WL2. The waveform length of the standardized signal: $\mathrm{WL2} = \sum_{i=1}^{N-1} |z_{i+1} - z_i|$, with $z_i = (x_i - \bar{x})/\sigma$.

Classifier

Several training options, such as the training method, the activation functions, and the neural network architecture, were evaluated. The final classifier is a well-known multilayer perceptron (MLP) with 11 inputs, 3 hidden layers of 11, 11, and 10 neurons, respectively, and a single output. The activation functions were tan-sigmoidal in the hidden layers and pure-linear in the output layer. Because of the noisy behavior of the MLP's output, it is low-pass filtered and then binarized. The low-pass filter is a moving average filter, given by the following equation:

$x_f(i) = \frac{1}{M}\sum_{j=0}^{M-1} x(i-j),$

where $x_f(i)$ is the filtered version of the signal $x(i)$ and M is the filter length. As this filtered signal can take values other than 0 and 1, and our classifier has only two possible states, sadness and happiness, we use the binarization function

$x_b(i) = \begin{cases} 1, & x_f(i) \ge 0.5 \\ 0, & x_f(i) < 0.5. \end{cases}$
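To make the processing chain concrete, the following minimal Python sketch reproduces the steps just described. It is an illustration under stated assumptions rather than the authors' implementation: the paper does not publish code, and details such as the notch bandwidth, the smoothing length, and the use of scipy/scikit-learn are placeholder choices of ours.

```python
# Minimal sketch of the described pipeline; library choices and several
# parameters (e.g., notch bandwidth, smoothing length) are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, get_window, periodogram
from sklearn.neural_network import MLPClassifier

FS = 1000  # sampling rate reported in the experimental protocol (Hz)

def preprocess(x):
    """Cascade of 10th-order Butterworth filters: high-pass at 0.5 Hz,
    low-pass at 70 Hz, and a band-stop around 60 Hz (width assumed)."""
    for sos in (butter(10, 0.5, btype="highpass", fs=FS, output="sos"),
                butter(10, 70.0, btype="lowpass", fs=FS, output="sos"),
                butter(10, [59.0, 61.0], btype="bandstop", fs=FS, output="sos")):
        x = sosfiltfilt(sos, x)
    return x

def segments(x, size_ms=100, overlap=0.5):
    """Hamming-windowed segments with 50% overlap."""
    L = int(size_ms * FS / 1000)
    step = int(L * (1 - overlap))
    w = get_window("hamming", L)
    return [x[i:i + L] * w for i in range(0, len(x) - L + 1, step)]

def features(seg):
    """A subset of the descriptive characteristics defined above."""
    rms = np.sqrt(np.mean(seg ** 2))
    mean = np.mean(seg)
    zc = int(np.sum(seg[:-1] * seg[1:] < 0))               # zero crossings
    wl = np.sum(np.abs(np.diff(seg)))                      # waveform length
    f, p = periodogram(seg, fs=FS)                         # power spectrum
    mnf = np.sum(f * p) / np.sum(p)                        # mean frequency
    mdf = f[np.searchsorted(np.cumsum(p), np.sum(p) / 2)]  # median frequency
    return [rms, mean, zc, wl, mnf, mdf]

# MLP mirroring the reported topology (hidden layers of 11, 11, and 10 units).
# scikit-learn applies one activation to all hidden layers, so "tanh" stands
# in for the tan-sigmoidal units described in the text.
clf = MLPClassifier(hidden_layer_sizes=(11, 11, 10), activation="tanh",
                    max_iter=2000, random_state=0)

def smooth_and_binarize(y, m=5):
    """Moving-average filter over the raw outputs, then a 0.5 threshold."""
    y_f = np.convolve(y, np.ones(m) / m, mode="same")
    return (y_f >= 0.5).astype(int)
```

In the paper, one MLP was trained per individual feature and one on the full feature set (12 networks in total, repeated for the three window sizes); the sketch above covers only the combined-feature case.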
Experimental Protocol

The study was conducted with 8 subjects: 4 women (mothers) and 4 children (3 males and 1 female) with a mean age of 22 months. Each mother completed an informed consent form and was informed about the purpose of the experiment. The experiment was approved by the local Ethics Committee. The happiness stimulus for the mother consisted of listening to the happiest story of her own life, and the sadness stimulus consisted of listening to the saddest story of her own life; each story had been told in a previous session with the professional in charge of the experiment. The stimulus for the child in each emotional state was evoked by the face-to-face presence of the mother. The following protocol was used to perform the experiment: first, each mother was asked to record, in a room, the happiest moment of her life for the happiness stimulus and the saddest moment of her life for the sadness stimulus. Then, each dyad was placed face to face; the mother wore headphones and listened, in each case (happy or sad), to the story she had previously recorded, which induced the feeling of happiness or sadness, as the case may be, and evoked emotions in her child, who was staring at her. Each of these sessions was videotaped. The EEG signals were measured from a single channel, to minimize the level of invasiveness of the experiment, at a sampling rate of 1000 Hz. A team of psychologists verified the validity of this protocol, which meets the ethical principles of the Declaration of Helsinki [7][8][9].

Results and Discussion

The detection of the emotions of sadness and happiness was performed with 3 different window sizes. Table 1 presents the averages of each of the extracted features for happiness and sadness, distinguishing between mother and child, for the different window lengths. Each characteristic is affected by the window size, its magnitude increasing proportionally with the size of the window; however, these changes are acceptably small, so that, after averaging all the results, larger window sizes could be used for the sake of computational cost. With the matrix resulting from the extraction of the descriptive characteristics, we proceeded to train one multilayer perceptron for each characteristic individually and one for the full set of characteristics, giving a total of 12 MLPs; the same procedure was performed with the 3 different window sizes. After training each MLP, the percentage of correct classification was calculated; the results are given in Table 2. It may be noted that each feature individually does not provide enough information for the recognition of the happiness or sadness states; in contrast, all these characteristics together reach, on average, 99.7% correct classification. This allows us to infer that the window size only slightly affects the success rate of the classifier; to reduce computational costs, window sizes around 100 ms could be used at the price of a 0.4% decrease, which can be disregarded in this case.
The small remaining misclassification could be remedied by analyzing the emotional states in the previous and next segments, seeking to avoid irregularities and misclassification. Future work related to single-channel emotion detection will focus on multiresolution analysis with the Daubechies wavelet family for feature extraction, to improve accuracy and provide more phenomenon-related characteristics.

Conclusions

The discrimination between the happiness and sadness emotional states is possible through the analysis of a single EEG channel. Analyzing the signal using temporal and spectral features allows the classification of the emotional states at a high rate with a nonlinear classifier such as the MLP. Moreover, the effect of the window size on the results was also evaluated. Given the high classification rates, it can be concluded that the effect of the analysis window size is present but stable. This allows the selection of larger window sizes to reduce computational costs, while the small misclassification effect can be reduced by assessing the segments before and after the emotional states obtained by the classifier. This analysis can be used to study the mental mappings during mother-child dyad interactions. As future work, we will study the detection of a greater number of emotions and different numbers of channels.
2018-12-18T06:51:32.466Z
2016-04-01T00:00:00.000
{ "year": 2016, "sha1": "6d771f1cce6d42da44f0dab7398fc6c1e53b2bb7", "oa_license": "CCBY", "oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/705/1/012051/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8fd69a339acbc4f1525b55af78aa11b169ba93d5", "s2fieldsofstudy": [ "Computer Science", "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
248930410
pes2o/s2orc
v3-fos-license
Promotion of Cyst Formation from a Renal Stem Cell Line Using Organ-Specific Extracellular Matrix Gel Format Culture System

Researchers have long awaited the technology to develop an in vitro kidney model. Here, we establish a rapid fabrication technique for kidney-like tissues (cysts) using a combination of an organ-derived extracellular matrix (ECM) gel format culture system and a renal stem cell line (CHK-Q cells). CHK-Q cells, which are spontaneously immortalized from the renal stem cells of the Chinese hamster, formed renal cyst-like structures in a type-I collagen gel sandwich culture on day 1 of culture. The cysts fused together and expanded while maintaining three-dimensional structures. The expression of genes related to kidney development and maturation was increased compared with that in a traditional monolayer. Under the kidney-derived ECM (K-ECM) gel format culture system, cyst formation and maturation were induced rapidly. Gene expression involved in cell polarity, especially for important material transporters (typical markers Slc5a1 and Kcnj1), was restored. The K-ECM composition was an important trigger for CHK-Q cells to promote kidney-like tissue formation and maturation. We have established a renal cyst model that rapidly expresses mature kidney features via the combination of a K-ECM gel format culture system and CHK-Q cells.

Introduction

Polar epithelial cells are a cultured cell model useful for evaluating material transport in drug screening [1][2][3]. Additionally, many studies have reported that reconstructed in vitro tissue models, such as the kidney and liver, are promising for drug transport and metabolic testing [4][5][6][7]. Kidney-like tissue can be generated by differentiating stem cells based on embryology. For example, differentiation induction of human induced pluripotent stem cells (iPSCs) for 21-35 days formed kidney organoids [8]. Nephron organoids were formed in 3-4 weeks from human embryonic stem cells (ESCs) and iPSCs [9]. These organoids represent an advanced kidney model, but the combination of growth factors and cytokines added during the induction steps extends the culture period. Scaffold design is one of the key factors in accelerating cell differentiation and kidney-like tissue formation. Rat renal stem/progenitor cells cultured in Matrigel with the simultaneous addition of growth factors formed kidney-like tissues in 2 weeks [10]. Human primary renal cell aggregation promoted kidney-like cystic tissue formation after 10 days in collagen gel [11]. Although the cysts lack the complex structure of the kidney, they can be used as a simple kidney-like tissue model. Furthermore, the short culture period and simple preparation procedure are attractive for applications such as material transport tests. The organ-specific extracellular matrix (ECM) regulates the surrounding environment, affecting development and functional expression [12][13][14]. Human ESCs differentiate toward the kidney when cultured on ECM derived from kidney or lung [15]. Thus, a scaffold environment similar to the organ from which the cells are derived is an important factor affecting the phenotype of cultured cells for kidney-like tissue formation. Together, the combination of cell types and culture systems with special substrates is an important consideration for the rapid fabrication of a kidney-like tissue.
Per the existing literature, a long-term supply of multiple factors and/or feeder cells is required for the passage culture and differentiation of ESCs and renal progenitor cells [16,17]. This complex culture process increases culture costs and often hinders applications. Based on the above-mentioned issues, we focused on CHK-Q cells, which are spontaneously immortalized from the renal stem cells of the Chinese hamster [18]. CHK-Q cells grow actively without special culture medium or feeder cells. The growth limitation of CHK-Q cells can induce functional expression. Therefore, CHK-Q cells are a promising cell source for kidney-like tissue construction, although further research is needed for their widespread use. We developed an excellent culture system for rapidly forming a kidney-like tissue model using CHK-Q cells. We focused on the organ-derived ECM as a trigger to promote cell organization; in particular, we used a cell-cell contact environment and a three-dimensional (3D) gel format culture system with kidney-derived ECM (K-ECM), which accelerates cell aggregation and the construction of matured kidney-like tissues.

Promotion of Renal Cyst Formation in Collagen Gel Sandwich Culture

Rapid proliferation of CHK-Q cells as a monolayer was observed on a collagen-coated dish, under collagen gel after adhering to a collagen-coated dish, and on collagen gel (Figure 1A). In contrast, CHK-Q cells in a collagen gel culture formed 3D cyst-like tissues. More cysts were formed in a gel sandwich culture than in a gel-embedding culture, where the cells were dispersed (shown with * in Figure 1A).
Time-lapse observation revealed the history of cyst formation and fusion (Figure 1B). On the first day of culture, cells began to gather and 20 µm cysts were formed. Each cyst expanded until day 3. Then, the cyst walls were reconstructed (yellow arrowhead), and the cysts grew by fusion. On day 7 of culture, 3D observation of the cytoskeleton confirmed cyst fusion with maintenance of the 3D cyst structure (Figure 1C and Videos S1-S3).

Differentiation of Cysts of CHK-Q Cells to Mature Kidney-Like Cells

Gene expression profiles for mesoderm, ureteric bud, collecting duct, nephron progenitor cell and metanephric mesenchyme, renal corpuscle, and uriniferous tubule-related genes are shown as a heatmap (Figure 2). During cyst formation under the collagen gel sandwich culture, numerous genes associated with kidney development, such as mesoderm, ureteric bud, and nephron progenitor cell and metanephric mesenchyme markers, were highly expressed on day 3 of culture (Figure 2A-C). In particular, the cyst tissues showed higher expression of genes involved in transcription factors, Wnt signaling, and growth factors for kidney maturation compared with CHK-Q cell suspensions and monolayers (collagen-coated dish). The nephron progenitor marker was upregulated on later culture days. Kidney differentiation markers such as those of uriniferous tubules and collecting ducts were increased over the conventional 2D monolayer (Figure 2E,F). Genes related to transporters such as Aqp1 were notably upregulated.
CHK-Q cells formed cysts by day 3 of culture at the latest in the collagen gel sandwich culture, regardless of seeding density (Figure 3A). Cysts fused and expanded by culture day 7. Cyst fusion reduced the number of cysts (Figure 3B) and increased their area (Figure 3C). The highest seeding density demonstrated large cysts from the beginning. The confluency increased with the seeding density (Figure 3D). Additionally, the 3D thickness increased with the seeding density (Figure 3E). Under the K-ECM gel format culture system, cyst formation, rapid fusion, and expansion were observed compared with the collagen gel (Figure 4A-C). In L-ECM, small cysts were formed, and fusion was restricted. Culture confluency on day 7 was similar regardless of the gel components (Figure 4D). An increase in 3D thickness was observed in the collagen gel condition compared with the organ-derived ECM gel culture systems (Figure 4E).

Gene Expression Changes Related to Kidney Development by Kidney-Derived ECM

The expression of typical markers for the mesoderm, nephron progenitors, and mature renal cells was investigated using real-time RT-PCR. The CHK-Q cells in the collagen gels tended to express increased mesoderm (T) and renal progenitor cell markers (Osr1, Pax8) at lower seeding densities (Figure 5A-C); however, significant differences were observed only on culture day 7 (Figure 5B). The differences in mature kidney markers such as those of uriniferous tubules and collecting ducts were minimal (Figure 5D-H). Conversely, the K-ECM gel format culture system strongly promoted the expression of mature kidney markers, such as a transporter (Slc5a1), a potassium channel (Kcnj1), and a transcription factor (Gata3) (Figure 5E-G). In the L-ECM condition, these markers were less expressed or downregulated compared with the collagen gel conditions (Figure 5D-H).
Discussion

Polar epithelial cells have been used as an organ model for membrane transport tests of drugs [1][2][3][4][5][6][7]. Caco-2 cells, derived from human colorectal adenocarcinoma, are most commonly used for such tests. They form polar differentiated intestinal epithelium-like tissues after 3 weeks of over-confluent culture [1,19]. MDCK cells, an epithelial cell line derived from canine kidney, can be used after just a few days of culture [20]. MDCK cells autonomously form a domed liquid pool between the substrate and the cell monolayer. After a week, the lumen forms under the collagen gel. Creating a stem cell-based kidney model can be an attractive alternative tool. Supplementing cytokines and organ-specific factors based on embryology greatly contributes to stem cell maturation and the stable expression of organ-specific functions [8][9][10]. A 3D culture also promotes cell differentiation. The formed cellular organoids enable the reconstruction of 3D structures that mimic the living body and promote the restoration of cell polarity [21]. However, in either method, no combination of cell type and culture system immediately converts to functional differentiation with cell polarity. Generally, cell proliferation and functional expression are inversely correlated. CHK-Q cells can switch from cell growth to protein production via the addition of small molecular compounds (Kawabe, Y., et al., unpublished data). This feature is positive for the development of functional kidney models. However, it is unclear how scaffold dependence alters the cell phenotype.
Here, the characteristics of CHK-Q cells were investigated when they were cultured in organ-derived ECM gel format culture systems. CHK-Q cells grown as a monolayer on collagen-coated dishes and on collagen gel grew actively with a doubling time of approximately 15 h. CHK-Q cells did not show 3D morphology like MDCK cells even after reaching confluence (Figure 1A). In contrast, cystic tissues were formed immediately in the gel sandwich culture system (Figure 1B). Cell growth was dramatically restricted by both the 3D scaffold-embedding and cell-cell adhesion conditions, and CHK-Q cells were induced toward functional differentiation. Gene expression of Mki67, a cell growth marker, decreased to 1/80 (DNA microarray result). Cyst formation was more prominent in the collagen gel sandwich culture, which is abundant in cell-cell adhesions compared with the gel-embedded culture. Thus, for CHK-Q cells, the gel sandwich culture system may be the only trigger needed to switch from cell growth to functional differentiation. This feature of CHK-Q cells is important for rapidly preparing a tissue model. CHK-Q cells in the collagen gel sandwich culture formed cysts after 1 day of culture (Figure 1B). Maturation to kidney-like tissue was confirmed via microarray analysis on day 3 of culture (Figure 2). Because the cysts were characterized by both ureteric bud and metanephric mesenchyme markers, cross-interactions and the subsequent induction of both differentiation paths will promote kidney maturation [22]. Consequently, it was hypothesized that kidney differentiation markers related to uriniferous tubules and collecting ducts would be strongly expressed. Data analysis showed that gene expression in all parts of the uriniferous tubules was enhanced (Figure S1). Gene group expression related to proximal tubules and distal convoluted tubules was particularly increased. The differentiation from stem cell to kidney-like cell is usually achieved by adding various growth factors and/or through the formation of cellular organoids over several weeks [23][24][25]. For example, it takes 2-4 weeks using optimized methods for kidney-like tissue formation from human iPSCs or rat renal progenitor cells [8][9][10]. Even when human primary renal cells were used, cysts were formed only after 10 days in collagen gel-embedded culture [11]. In contrast, CHK-Q cells capable of cyst formation in the gel format culture system showed accelerated maturation. This process requires no growth factor addition and is relatively fast. This methodology reduces culture medium consumption and costs. It will also be useful for drug discovery assays. The high cell density of CHK-Q cells increased the cell-cell contact. It affected the number and size of the cysts, but the differentiation toward the kidney was not so different (Figures 3 and 5). This result did not support the theory that the organoid size of stem cells determines the direction of differentiation [26,27]. In CHK-Q cell culture, cyst formation might work as a particularly strong trigger for the maturation of kidney component cells. Although human ESCs promoted renal differentiation when cultured on lung- and kidney-derived ECM, no difference was observed between the organs of origin [15]. In contrast, the organ-derived ECM contributed profoundly to the maturation of CHK-Q cells into kidney-like cells. In the K-ECM conditions, the discriminative gene expression patterns showed high expression of mature kidney markers derived from the metanephric mesenchyme, such as those of proximal tubules and intermediate tubules (Figure 5E,F).
Kidney ECM contains type-IV collagen, type-I collagen, laminin, and sugar chains, in that order [28]. Reports suggest that the expression of type-IV collagen and laminin is important for the formation of the kidney based on embryology [29]. A portion or combination of these might be mimicked in the K-ECM condition. However, the gene expression pattern when using L-ECM was not so different from that with the type-I collagen gel. This observation is supported by the fact that the main component of L-ECM is type-I collagen [30]. In CHK-Q cell culture, the direction of differentiation depends on the organ-derived scaffold. In the future, based on the K-ECM components, it will be necessary to clarify how the combination of ECM components affects cyst formation and kidney maturation from CHK-Q cells.

Conclusions

Here, rapid cyst formation and kidney maturation from CHK-Q cells were achieved using a novel organ-derived ECM gel culture system. CHK-Q cells in a collagen gel sandwich were promoted to form cysts early, on the first culture day. Maturation toward kidney-like tissue can be attributed to cyst fusion and enlargement. Under the K-ECM gel format culture system, cyst formation and maturation were rapidly achieved. In particular, it was found that the cell polarity of transporters, which are important for material transport, was restored. In this model, the expression of mature kidney features was rapidly enhanced by the combination of K-ECM and CHK-Q cells. In addition, the culture system does not require growth factors for the maintenance and maturation of CHK-Q cells. These advantages reduce culture medium consumption and cost and could be important for assay applications. In the future, practical functions such as protein expression and drug transport will need to be verified.

Subculture of CHK-Q Cells

CHK-Q cells, which are spontaneously immortalized from the renal stem cells of the Chinese hamster, were kindly provided by Prof. Kamihira [18]. To maintain the characteristics of CHK-Q cells, cells cryopreserved in liquid nitrogen using a preservation medium (Cellbanker 2; Takara Bio, Shiga, Japan) were used for each independent experiment. CHK-Q cells were cultured in 90 mm diameter collagen-coated dishes (Asahi Techno Glass, Tokyo, Japan) at 1.8 × 10⁵ cells/cm² with 10 mL of D-MEM/Ham's F-12 (Fujifilm Wako Pure Chemical, Osaka, Japan), supplemented with 10% fetal bovine serum (Biowest, Nuaillé, France), 100 U/mL penicillin, and 100 mg/mL streptomycin (Thermo Fisher Scientific Inc., Waltham, MA, USA), as a continuous monolayer. After 2 days of culture, cells reaching 90% confluence were treated with a 0.25% trypsin-EDTA solution (Invitrogen) and subcultured at 5.3 × 10⁴ cells/cm². Cells at passage 2 were used for the experiments. Organ-derived ECM was prepared according to the literature [31]. Briefly, organs were cut into pieces of about 2 mm × 2 mm × 2 mm using a scalpel. The tissues were soaked in CMF-PBS containing 1% Triton X-100 (Fujifilm Wako Pure Chemical, Osaka, Japan) for 4 days and washed with CMF-PBS for 4 days. These solutions were changed daily and stirred slowly at 4 °C. Dialysis was performed using a Spectra/Por 6 dialysis membrane (MWCO: 1000; Spectrum Laboratories, Inc., Milpitas, CA, USA) at 4 °C for 2 days to remove salts and impurities. The decellularized organs were lyophilized for 24 h, milled, solubilized using 1 mg/mL pepsin (Sigma-Aldrich, St. Louis, MO, USA) in 0.01 N HCl at room temperature for 2 days, and stored at 4 °C until use.
The concentrations of the organ-derived ECM solutions were calculated from the amounts of added components and insoluble matter.

Gel Sandwich Culture

Type-I collagen (Cellmatrix Type I-A; Nitta Gelatin, Osaka, Japan), K-ECM, and liver-derived ECM (L-ECM) solutions were mixed as shown in Table S1. The mixtures were added to a culture plate at 52 µL/cm² and incubated at 37 °C for 1 h to form a gel. CHK-Q cells were inoculated at 0.33, 1.0, 1.7, 2.3, and 3.5 × 10⁵ cells/cm² and allowed to adhere for 1 h. The mixtures were then added onto the cells. After gel sandwich formation, culture medium was added and changed every 2 days. Several culture conditions using type-I collagen gel were run to compare the characteristics of CHK-Q cells. The type-I collagen mixture was added to CHK-Q cells after they adhered to a collagen-coated dish (under collagen gel). CHK-Q cells were also inoculated on a collagen gel substrate (on collagen gel). To evaluate the gel-embedding condition, CHK-Q cells were cultured in a collagen gel with uniform cell dispersion (collagen gel-embedding). Under all the above-mentioned conditions, the CHK-Q cells were inoculated at 1.0 × 10⁵ cells/cm².

Cyst Formation Assay

To investigate CHK-Q cell morphology under the several culture conditions, CHK-Q cells were cultured on collagen-coated dishes (Asahi Techno Glass, Tokyo, Japan), under collagen gel, on collagen gel, in a collagen gel sandwich, and embedded in the gel, at 1.0 × 10⁵ cells/cm². To track cyst formation, CHK-Q cells in a collagen gel sandwich (1.7 × 10⁵ cells/cm²) were observed by time-lapse imaging using a phase-contrast microscope (BZ-9000; Keyence Corporation, Osaka, Japan). CHK-Q cells in a collagen or organ-derived ECM gel sandwich were fixed with 10% formalin (PFA) in CMF-PBS (Fujifilm Wako Pure Chemical, Osaka, Japan) for 30 min. The fixed samples were permeabilized with 0.1% Triton X (Sigma) in CMF-PBS for 30 min and blocked in 1% bovine serum albumin (BSA) for 1 h. They were then incubated for 1 h with CMF-PBS buffer containing 5 units/mL rhodamine phalloidin (Thermo Fisher Scientific Inc., Waltham, MA, USA), 2 µg/mL Hoechst 33342 (Dojindo Laboratories, Kumamoto, Japan), and 1% BSA. Fluorescence images were captured using a fluorescence microscope or a confocal laser scanning microscope (TCS SP8; Leica Microsystems, Wetzlar, Germany). To characterize cyst morphology, the number and area of the cysts were analyzed from the fluorescence images using ImageJ software version 1.53e. Cyst thickness was analyzed from the 3D confocal images. Gene markers related to kidney differentiation were analyzed [32]. Gene expression heatmap generation and hierarchical clustering of the mesoderm, ureteric bud, collecting duct, nephron progenitor cell and metanephric mesenchyme, renal corpuscle, and uriniferous tubule-related genes were conducted using Heatmapper (http://www.heatmapper.ca/; 3 June 2021). Pearson's correlation was selected as the distance metric, and average linkage clustering as the linkage method. The QuantStudio™ 3 Real-Time PCR System and TaqMan Gene Expression Assay Kit (Thermo Fisher Scientific Inc., Waltham, MA, USA; Table S2) were used for PCR. The reaction mixture contained 1 µL of cDNA sample, 5 µL of TaqMan Fast Advanced Master Mix solution, and 4 µL of nuclease-free water in a plate predispensed with the TaqMan Gene Expression Assay probes (Thermo Fisher Scientific Inc., Waltham, MA, USA). The forty amplification cycles consisted of 1 s at 95 °C and 20 s at 60 °C. The comparative threshold cycle (ΔΔCT) method was used to quantify gene expression levels; expression levels were normalized to Gapdh, and the CHK-Q cell suspension was used as the control condition.
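As a worked illustration of the ΔΔCT quantification (the Ct values below are invented for the example and are not data from this study): relative expression is computed as fold change = 2^(−ΔΔCT), with ΔCT = CT(target) − CT(Gapdh) in each condition and ΔΔCT = ΔCT(sample) − ΔCT(control).

```python
# Hypothetical Ct values, for illustration only (not data from this study).
ct_target_sample, ct_gapdh_sample = 24.0, 18.0    # e.g., a marker in K-ECM cysts
ct_target_control, ct_gapdh_control = 27.0, 18.5  # e.g., the cell suspension control

delta_ct_sample = ct_target_sample - ct_gapdh_sample     # 6.0, normalized to Gapdh
delta_ct_control = ct_target_control - ct_gapdh_control  # 8.5
delta_delta_ct = delta_ct_sample - delta_ct_control      # -2.5
fold_change = 2 ** (-delta_delta_ct)                     # ~5.7-fold upregulation
print(f"fold change = {fold_change:.1f}")
```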
Statistical Analysis

Data are presented as the mean ± standard deviation (SD). Means of continuous numerical variables were compared via one-way or two-way analyses of variance (ANOVA) in GraphPad Prism version 9.1.2 for Windows (GraphPad Software Inc., San Diego, CA, USA). Values of ** p < 0.01 and * p < 0.05 were considered statistically significant when comparing culture conditions within the same experimental time (Tukey's multiple comparison test); †† p < 0.01 and † p < 0.05 were considered statistically significant when comparing culture times under the same conditions.
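The statistics here were run in GraphPad Prism; for readers without Prism, a rough open-source equivalent of the two-way ANOVA (gel condition × culture time) with Tukey's post hoc test could look like the sketch below, in which the data frame, values, and column names are hypothetical placeholders:

```python
# Hypothetical equivalent of the Prism analysis; the data are invented.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "thickness": [32, 35, 30, 55, 58, 52, 40, 43, 38, 70, 74, 69],
    "condition": (["collagen"] * 3 + ["K_ECM"] * 3) * 2,
    "day": ["3"] * 6 + ["7"] * 6,
})

# Two-way ANOVA with an interaction term between gel condition and culture time.
model = ols("thickness ~ C(condition) * C(day)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's multiple comparison across the four condition-by-day groups.
print(pairwise_tukeyhsd(df["thickness"], df["condition"] + "_d" + df["day"]))
```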
2022-05-21T15:20:58.599Z
2022-05-01T00:00:00.000
{ "year": 2022, "sha1": "639fc8f62a82e89987b87eadb5d3dba7bcf908d3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2310-2861/8/5/312/pdf?version=1652932968", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1b38fad6f1a07f3501f56eba1a14f21d950e50dd", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [] }
211193694
pes2o/s2orc
v3-fos-license
Kelch-like protein 42 is a profibrotic ubiquitin E3 ligase involved in systemic sclerosis

Systemic scleroderma (SSc) is an autoimmune disease that affects over 2.5 million people globally. SSc results in dysfunctional connective tissues with excessive profibrotic signaling, affecting skin, cardiovascular, and particularly lung tissue. Over three-quarters of individuals with SSc develop pulmonary fibrosis within 5 years, the main cause of SSc mortality. No approved medicines to manage lung SSc currently exist. Recent research suggests that profibrotic signaling by transforming growth factor β (TGF-β) is directly tied to SSc. Previous studies have also shown that ubiquitin E3 ligases potently control TGF-β signaling through targeted degradation of key regulatory proteins; however, the roles of these ligases in SSc–TGF-β signaling remain unclear. Here we utilized primary SSc patient lung cells for high-throughput screening of TGF-β signaling via high-content imaging of nuclear translocation of the profibrotic transcription factor SMAD family member 2/3 (SMAD2/3). We screened an RNAi library targeting ubiquitin E3 ligases and observed that knockdown of the E3 ligase Kelch-like protein 42 (KLHL42) impairs TGF-β–dependent profibrotic signaling. KLHL42 knockdown reduced fibrotic tissue production and decreased TGF-β–mediated SMAD activation. Using unbiased ubiquitin proteomics, we identified phosphatase 2 regulatory subunit B'ε (PPP2R5ε) as a KLHL42 substrate. Mechanistic experiments validated ubiquitin-mediated control of PPP2R5ε stability through KLHL42. PPP2R5ε knockdown exacerbated TGF-β–mediated profibrotic signaling, indicating a role of PPP2R5ε in SSc. Our findings indicate that the KLHL42–PPP2R5ε axis controls profibrotic signaling in SSc lung fibroblasts. We propose that future studies could investigate whether chemical inhibition of KLHL42 may ameliorate profibrotic signaling in SSc.
Systemic sclerosis (SSc), or scleroderma, is an autoimmune rheumatic disease affecting numerous tissue and organ systems and is characterized by hardening and fibrosis of connective tissues. SSc manifests prominently in the skin through skin thickening or increased inflammation; however, the primary cause of mortality is SSc-associated interstitial lung disease (SSc-ILD) (1, 2). Over three-quarters of SSc patients develop lung fibrosis within 5 years of diagnosis, with a subset developing progressive fibrotic disease (3). SSc-ILD is characterized by decreased respiratory function because of excessive hardening and fibrotic scarring due to extracellular matrix deposition in the lung. In combination with an increased inflammatory response, epithelial injury, and cellular fibrotic transition, fibrous production and collagen deposition in the lung interstitium lead to respiratory failure (4, 5). Although new therapeutic agents are under evaluation, current treatment options that target the underlying pathophysiology are limited (6). Profibrotic cellular signaling pathways are associated with pathological remodeling of the lungs leading to SSc-ILD, particularly the transforming growth factor β (TGF-β) pathway (4). Research has shown the TGF-β cytokine and its downstream signaling to have stark causal effects in SSc disease models (7). TGF-β signaling proceeds through cellular recognition of the TGF-β cytokine by receptor complexes and activation of mothers against decapentaplegic homolog (SMAD) transcription factors (8). In canonical TGF-β/SMAD signal transduction, receptor-regulated SMAD2/3 protein is phosphorylated, complexed with the co-SMAD protein SMAD4, and shuttled to the nucleus, where profibrotic transcription programs are activated (9). Nuclear localization and activating phosphorylation of SMAD transcription factors are essential for their signal transduction, as numerous studies have shown inhibition of TGF-β signaling upon SMAD inactivation or nuclear exclusion (10-14). SMAD-dependent TGF-β signaling is tightly regulated through a variety of mechanisms, including posttranslational protein degradation pathways (15, 16). Protein degradation is an evolutionarily conserved process for regulating cellular protein longevity. Dysfunctional or aberrant protein degradation is at the heart of many diseases, including fibrotic signaling in the lung. The major mechanism governing protein degradation is the ubiquitin proteasome system (17). Briefly, the small protein ubiquitin is conjugated to target substrate proteins through an enzymatic cascade, in which the ubiquitin E3 ligase enzyme class plays an essential role in substrate recognition and completion of the ubiquitination process. Ubiquitination is a crucial mechanism for regulating homeostasis in TGF-β signaling, as E3 ligase-mediated degradation of TGF-β receptors and of SMAD2/3 transcription factors helps to dampen signaling (18, 19). However, ubiquitination is also associated with pathologic fibrotic signaling; we and other groups have observed ubiquitination proteins affecting multiple facets of fibrotic signaling in the lung, including the TGF-β pathway (15, 20-23).
Further, profibrotic E3 ligases have also shown promise as targets for chemical inhibition to reduce deleterious fibrotic signaling (21, 24-26). To uncover new ubiquitin E3 ligase regulators of SSc fibrotic signaling, we utilized a high-throughput imaging system to screen an RNAi library targeting E3 ligases for their effect on SMAD2/3 translocation in SSc lung fibroblasts. Here we report the development of SSc lung fibroblasts as a screening tool for TGF-β activation based on SMAD2/3 nuclear translocation. We identified the ubiquitin E3 ligase KLHL42 as a profibrotic mediator of TGF-β-SSc fibrotic signaling. KLHL42 was also shown to be a regulator of the stability of the protein phosphatase 2A regulatory subunit PPP2R5ε and may regulate fibrotic signaling through the PP2A pathway. This study provides a new model of E3 ligase control of fibrotic signaling in SSc lung disease. Development of the SMAD2/3 translocation ratio screening assay. The Systemic Sclerosis Center of Research Translation at the University of Pittsburgh works with the University of Pittsburgh Medical Center for collection of tissue and explant samples from SSc patients for research. Through this center, we utilized primary SSc patient lung fibroblasts in culture as the basis for the high-content screening assay. Previous research has demonstrated that SSc cell cultures have increased TGF-β/SMAD signaling and exhibit a strongly profibrotic phenotype, which makes them ideal for RNAi loss-of-function screening (7, 27, 28). We first sought to validate SSc cells as a proper tool for high-content screening in a 384-well plate format. Immunofluorescence studies of SSc lung fibroblasts (SSc cells) show responsiveness to TGF-β1 stimulation, leading to an increased SMAD2/3 fluorescence signal in the nucleus (Fig. 1A). As a surrogate for overall fibrotic activity, we calculated a translocation ratio metric based on nuclear SMAD2/3 immunostaining relative to cytoplasmic SMAD2/3. As SMAD transcription factors are shuttled to the nucleus for fibrotic signaling, we hypothesized that an increased translocation ratio represents a more fibrotic response. The translocation ratio metric showed robust results through titration of primary and secondary antibodies for both baseline and TGF-β1-stimulated treatments (Fig. 1B). We calculated a Z' factor of 0.39 for this assay; this signal window proved to be adequate for screening (Fig. 1C) (29). Silencing of a protein essential for the TGF-β/SMAD signaling process would result in a decreased translocation ratio (Fig. 1D). We then proceeded to screen for key ubiquitin E3 ligase modulators. The E3 ligase KLHL42 affects fibrotic signaling and SMAD activation in SSc. With the validation of SSc fibroblasts as a potential screening tool, we sought to screen an endoribonuclease-prepared siRNA (esiRNA) library (Sigma) against ubiquitination proteins for their effect on SMAD2/3 localization. Following siRNA knockdown and TGF-β1 stimulation, we collected and immunostained SSc fibroblasts for SMAD2/3 protein, conducted automated microscopy, and calculated a translocation ratio for each esiRNA (Fig. 2A). We observed that most siRNAs had little effect on the translocation ratio relative to control esiRNA, but we observed hits whose silencing reduced the SMAD2/3 translocation ratio (Fig. 2B).
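As a concrete sketch of the two screening statistics described above (the nuclear-to-cytosolic translocation ratio and the Z' factor), the following minimal Python example computes both from hypothetical per-well values; the actual analysis used Gen5 or CellProfiler output, and the simulated well intensities below are invented for illustration.

import numpy as np

def translocation_ratio(nuclear_signal, cytosolic_signal):
    # Ratio of nuclear to cytosolic SMAD2/3 signal; higher values
    # indicate more nuclear SMAD2/3, i.e. greater pathway activation.
    return nuclear_signal / cytosolic_signal

def z_prime(positive, negative):
    # Standard screening-window statistic:
    # Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|
    positive, negative = np.asarray(positive), np.asarray(negative)
    return 1 - 3 * (positive.std(ddof=1) + negative.std(ddof=1)) / abs(positive.mean() - negative.mean())

rng = np.random.default_rng(0)
tgfb_wells = rng.normal(1.8, 0.08, size=64)     # TGF-beta1-stimulated wells (hypothetical)
vehicle_wells = rng.normal(1.0, 0.08, size=64)  # vehicle-control wells (hypothetical)
print(f"Z' = {z_prime(tgfb_wells, vehicle_wells):.2f}")  # ~0.4, comparable to the reported 0.39

By the usual convention, Z' values between 0 and 0.5 indicate a marginal but usable assay window, consistent with the authors' assessment that the window was adequate for screening.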
Given that knockdown of these targets resulted in a reduced translocation ratio (less nuclear SMAD2/3 protein), we hypothesized that these ubiquitination proteins functioned as profibrotic mediators. Of these hits, we observed Kelch-like protein 42 (KLHL42) to be a relatively undercharacterized E3 ligase. Previous studies have suggested that KLHL42 functions as part of the Cullin-3 E3 ligase complex, in which KLHL42 acts as the key substrate-engaging protein to facilitate substrate ubiquitination (30, 31). However, the effect of KLHL42 on fibrotic signaling is unclear. We validated the initial screening data through confocal microscopy, as KLHL42 knockdown resulted in decreased SMAD2 activation upon TGF-β1 stimulation. Knockdown of the TGF-βR1 receptor was used as a positive control, as loss of a major membrane receptor for TGF-β1 would impede TGF-β1-mediated activation of SMAD signaling (13) (Fig. 2, C and D). KLHL42 knockdown significantly reduced production of fibrotic proteins such as fibronectin (Fig. 2, E and F). Further, we observed, through an immunoblot assay, that KLHL42 knockdown affects SMAD2 activation (Ser-465/467 phosphorylation) without changing total SMAD2 protein levels, suggesting that the effect of KLHL42 occurs upstream in the signaling process and not through degradation of the SMAD2 protein (Fig. 2, G and H). These initial studies suggest that KLHL42 may be a profibrotic E3 ligase. Unbiased determination of PPP2R5ε as a putative KLHL42 substrate. Ubiquitin E3 ligases primarily exert biological function through targeted ubiquitination and degradation of substrate proteins. To uncover the putative KLHL42 substrate, we conducted unbiased ubiquitin proteomics mass spectrometry (MS). We utilized LifeSensors tandem ubiquitin-binding entities (TUBE) technology to purify ubiquitinated proteins from control or KLHL42 siRNA-treated SSc fibroblasts prior to MS (32, 33) (Fig. 3A). This technology uses multiple ubiquitin-binding moieties linked to resin to affinity-purify polyubiquitinated proteins from the lysate. Our hypothesis was that the putative KLHL42 substrate protein would be less ubiquitinated in KLHL42 siRNA-treated (KLHL42-depleted) cells relative to the control. Less ubiquitinated substrate would prevent its precipitation in the TUBE pulldown; thus, the candidate substrate would be less represented in the proteomics study. The study detected 2486 total proteins, among which 155 were detected to be directly ubiquitinated (Fig. 3B). Among the detected proteins, 291 unique proteins were detected solely in the KLHL42 knockdown sample, 464 were found only in the control sample, and 1731 were found in both samples (Fig. 3C). We focused our analysis on proteins disproportionately detected in the control siRNA treatment relative to KLHL42 siRNA (464 proteins), as these proteins may have been less ubiquitinated when KLHL42 was silenced. We analyzed the subset of interest for any Gene Ontology terms that were enriched relative to the total dataset using the Gene Ontology enrichment analysis and visualization tool (34). We observed that several ontology terms were significantly enriched in the control siRNA treatment group relative to the total dataset (Fig. 3D), including protein phosphorylation. This suggests that proteins disproportionately represented in the control siRNA group (and potential KLHL42 substrates) may be involved in protein phosphorylation.
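Computationally, the candidate-substrate triage described above is a set partition of the two identification lists followed by an intersection with the directly ubiquitinated set. A minimal sketch with invented protein identifiers (the real comparison involved 2486 detected proteins):

# Hypothetical identification lists from the two TUBE pull-downs
control_hits = {"PPP2R5E", "SMAD7", "CUL3", "ACTB"}    # control siRNA sample
klhl42_kd_hits = {"SMURF2", "CUL3", "ACTB"}            # KLHL42 siRNA sample
directly_ubiquitinated = {"PPP2R5E", "SMURF2"}         # proteins with GlyGly(K) site evidence

shared = control_hits & klhl42_kd_hits        # detected in both samples
control_only = control_hits - klhl42_kd_hits  # lost from the pull-down upon KLHL42 knockdown
kd_only = klhl42_kd_hits - control_hits

# Candidate KLHL42 substrates: directly ubiquitinated proteins that
# disappear from the pull-down when KLHL42 is silenced
candidates = control_only & directly_ubiquitinated
print(sorted(candidates))  # ['PPP2R5E']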
Next we investigated the set of proteins detected to be directly ubiquitinated and found only in the control siRNA treatment group for their relevance to the enriched gene ontologies (49 proteins). Of these proteins, protein phosphatase 2A regulatory subunit ε (PPP2R5ε) was one of the proteins detected as directly ubiquitinated and related to the process of protein phosphorylation. PPP2R5ε functions as a regulatory subunit for protein phosphatase 2A (PP2A) (35, 36). Intriguingly, PP2A has been implicated in regulation of fibrotic and TGF-β signaling, including in SSc patient samples, suggesting that PPP2R5ε might function through this pathway in SSc and be a candidate substrate of KLHL42 (37-39). Detected PPP2R5ε ubiquitination on Lys-84 controls protein stability. The ubiquitin proteomics assay on SSc fibroblasts detected that PPP2R5ε was directly ubiquitinated, with the highest probability on Lys-84 (Fig. 4A). Given the proximity of two other lysine residues to Lys-84 (Lys-86 and Lys-89), there was a chance that the detected ubiquitination was on these Lys sites instead of Lys-84. Initial data analysis used a minimum localization probability of 0.75 to remove ambiguous sites. Previous proteomic studies have detected PPP2R5ε to be ubiquitinated, with several other lysines corroborated by multiple studies (Lys-41, Lys-346, Lys-449, and Lys-456) (40) (Fig. 4B). To validate PPP2R5ε as a ubiquitinated substrate from the proteomics assay and to understand the ubiquitination mechanism on PPP2R5ε, we constructed Lys-to-Arg point mutants corresponding to detected ubiquitin conjugation sites and the results of our analysis (Lys-84, Lys-86, and Lys-89). These constructs were expressed in cells and subjected to cycloheximide (CHX) chases to assay PPP2R5ε protein stability (Fig. 4, C and D). We observed that WT PPP2R5ε showed instability by 8 h of treatment, as did the other lysine-to-arginine mutants, except for K84R. The persistence of this mutant suggests that Lys-84 is a critical ubiquitination site for PPP2R5ε stability. KLHL42 facilitates PPP2R5ε polyubiquitination and degradation. Finally, to validate PPP2R5ε as a bona fide substrate of KLHL42, we investigated the mechanism of ubiquitination.
Figure 1. Assay development for scleroderma lung fibroblasts. A, immunofluorescence analysis of SSc lung fibroblasts for SMAD2/3 localization without and with short-term TGF-β1 stimulation. B, optimization of assay conditions. Translocation ratios were calculated for titrations of primary and secondary antibody without and with TGF-β1 treatment. C, Z' factor calculation of the SMAD2/3 translocation ratio with vehicle or TGF-β1 treatment (n = 64 wells/treatment). D, schematic of the high-content screening assay to measure SMAD translocation upon treatment with the ubiquitination RNAi library. SSc cells were treated with the siRNA library in 384-well glass plates prior to TGF-β1 treatment, fixation, and staining for SMAD2/3 and the nucleus. The nucleus was segmented based on nuclear counterstain signal, and the cytosolic region was defined through expansion of the nuclear segment to the threshold of the SMAD2/3 fluorescent signal. The translocation ratio was defined as the ratio of nuclear to cytosolic signal, with a higher signal suggestive of greater SMAD activation. Translocation ratios were calculated with Gen5 software (BioTek) or CellProfiler (49). Scale bar = 200 μm.
Endogenous PPP2R5ε immunoprecipitated from KLHL42 siRNA-treated SSc fibroblasts showed less ubiquitination relative to the control (Fig. 5A). As an orthogonal approach, we ectopically expressed KLHL42-HA with His-tagged PPP2R5ε prior to His pulldown and ubiquitin blotting. We observed that KLHL42 coexpression enhanced the polyubiquitin signal detected upon PPP2R5ε precipitation (Fig. 5B). As substrate polyubiquitination often signals for degradation, we probed whether KLHL42 depletion affected PPP2R5ε protein levels and observed that KLHL42 knockdown led to a significantly increased PPP2R5ε signal relative to the control (Fig. 5, C and D). We utilized another airway cell line, BEAS-2B, to ectopically express KLHL42 and observed a significant dose-dependent decrease in PPP2R5ε protein levels (Fig. 5E). Further, the critical lysine site PPP2R5ε mutant K84R proved to be resistant to coexpression with KLHL42 (Fig. 5F, lane 4), relative to WT PPP2R5ε (Fig. 5F, lane 2), suggesting that KLHL42 mediates the protein stability of PPP2R5ε through Lys-84. These data suggest that KLHL42 regulates PPP2R5ε ubiquitination and protein stability. Finally, we investigated the role of PPP2R5ε in fibrotic signaling in SSc cells (Fig. 5, G-I). We observed significantly decreased PPP2R5ε protein upon siRNA treatment (Fig. 5, G and H). PPP2R5ε knockdown resulted in increased fibrotic production upon TGF-β1 treatment, as measured by fibronectin protein relative to control siRNA (Fig. 5, G and I). These data suggest a role of the KLHL42-PPP2R5ε degradation axis in fibrotic SSc signaling, with KLHL42 functioning as a profibrotic mediator that degrades the PP2A-enhancing subunit PPP2R5ε, resulting in increased fibrotic signaling (Fig. 5J). Discussion. This study demonstrated SMAD2/3 immunostaining and automated microscopy as a screening tool for SSc cells. We calculated a translocation ratio of the SMAD signal as a surrogate for TGF-β pathway activity. This assay system is robust and shows good assay parameters, with a Z' factor of 0.39. We observed that knockdown of the E3 ligase KLHL42 acutely affects SMAD localization and affects downstream TGF-β/fibrotic signaling, suggesting that KLHL42 is a profibrotic E3 ligase. To uncover potential KLHL42 substrates, we conducted unbiased MS ubiquitin proteomics. Through bioinformatic analysis, PPP2R5ε emerged as a putative substrate, and we confirmed Lys-84 as a critical site for PPP2R5ε stability through an unbiased proteomics and mutagenesis assay. Finally, we demonstrated that KLHL42 facilitates polyubiquitination of PPP2R5ε, leading to its degradation, and that knockdown of PPP2R5ε increases fibrotic protein production, potentially through the PP2A pathway. Phosphatase control of TGF-β signaling is a major mechanism in SSc. PP2A has been shown to play a direct role in regulating fibrotic signaling in primary SSc disease models. TGF-β treatment was observed to affect PP2A modulation of ERK1/2 activation in SSc fibroblast autocrine signaling (41). Extracellular signal-regulated kinase (ERK) activation has been directly implicated in dysfunctional mechanics of SSc fibroblasts and has been observed to be constitutively activated in these fibroblasts (42, 43). Further, TGF-β treatment of dermal fibroblasts causes a decrease in PP2A transcript/protein levels (38). The regulatory subunit PPP2R5ε belongs to the B56/B' family and modulates overall PP2A activity (35). PPP2R5ε expression leads directly to more dephosphorylation of PP2A substrates, suggesting that PPP2R5ε is an enhancer of PP2A activity (36).
This is in line with our functional observations, as we observed that PPP2R5ε knockdown led to increased fibrotic signaling. Most research has emphasized the role of PPP2R5ε in neoplasia; this is the first study to suggest a function in SSc-ILD fibrotic signaling. More studies are needed to characterize the exact molecular mechanisms of KLHL42 control of PPP2R5ε and PP2A signaling. E3 ligases may play an important role in SSc-ILD signaling. Our previous studies have shown that E3 ligases can potently control fibrotic signaling through targeted degradation of key signal transduction proteins (24). There is evidence that this paradigm can be extended to SSc-ILD fibrotic signaling as well (44, 45). Further, therapeutic targeting of ubiquitination may show promise for lung fibrotic diseases. Recent studies have shown that inhibition of the E3 ligase Skp2 ameliorates fibrotic phenotypes in mice (46), and pharmacological inactivation of Cullin-type E3 ligases is protective in experimental pulmonary fibrosis models (21). Current therapeutic options in SSc-ILD are limited, and ubiquitin E3 ligases may prove to be new targets for intervention. In conclusion, we utilized two unbiased approaches (an siRNA screen and proteomics) to discover the ubiquitin E3 ligase KLHL42 as a mediator of fibrotic signaling in SSc-ILD through regulation of PPP2R5ε ubiquitination and stability. Immunoprecipitation and immunoblotting. Cells were lysed in immunoprecipitation buffer (50 mM Tris-HCl (pH 7.6), 150 mM NaCl, and 0.25% v/v Triton X-100) and centrifuged at 10,000 × g for 10 min at 4 °C. The supernatant was incubated for 2 h at 4 °C with a 1:100 dilution of immunoprecipitating antibody. The antibody was captured with protein A/G beads (Thermo Fisher) for an additional 2 h. The immunoprecipitate was washed three times with immunoprecipitation buffer and eluted in 1× Laemmli buffer at 88 °C for 5 min prior to immunoblot analysis. Immunoprecipitated samples were blotted with anti-mouse or anti-rabbit TrueBlot (Rockland). Cloning. Recombinant DNA constructs were prepared through PCR cloning techniques and cloned into the pcDNA3.1D vector (Thermo Fisher) unless otherwise noted. Point mutants were generated with the QuikChange XL 2 site-directed mutagenesis kit (Agilent). All constructs were validated by DNA sequencing (Genewiz). Ubiquitin proteomics. Ubiquitin proteomics and TUBE experiments were conducted at LifeSensors. Experimental and control samples were analyzed using biological duplicates. The use of biological replicates allowed better accounting for variability at each step of the MS workflow (cell culture, sample preparation, and analysis). Samples were spun down, and duplicate 22-μl volumes of each sample were mixed with SDS loading buffer. Samples were heated at 90 °C for 5 min and run on SDS-PAGE prior to Coomassie Blue staining. The remaining sample was run on a different gel, resulting in each sample being run on a total of three gel lanes. The lanes were excised, reduced with tris(2-carboxyethyl)phosphine, alkylated with iodoacetamide, and digested with trypsin. Tryptic digests were analyzed using a 150-min LC run on a Thermo Q Exactive HF mass spectrometer. A 30-min blank was run between samples. Ubiquitin proteomics resulted in four RAW files, two each for the KLHL42 siRNA treatment and the control siRNA treatment.
MaxQuant 1.6.2.3 was used to query MS data against the UniProt human database (2018_10_01); 196,371 entries were searched. The first search peptide mass tolerance was set at 20 ppm; the main search peptide tolerance was set at 4.5 ppm. Fragment ion mass tolerance was set at 20 ppm. The protein, peptide, and site false discovery rate was set at 1%. Full trypsin specificity was used to generate peptides, with a maximum of three missed cleavages permitted. Carbamidomethyl (C) was set as a fixed modification, with acetyl (protein N-term), oxidation (M), and GlyGly (K) considered variable modifications. Protein quantification was performed using razor and unique peptides. Razor peptides are shared (nonunique) peptides assigned to the protein group with the most other peptides (Occam's razor principle). Quantitation was based on the sum of the peptide MS peak areas for the protein (intensity) because of its increased accuracy compared with the MS/MS count. To account for the fact that larger proteins generate more peptides, the intensity values were adjusted by normalizing against the number of theoretical peptides for each protein (intensity-based absolute quantification (iBAQ) intensity). We then sorted (largest to smallest) on the iBAQ values to determine which proteins were more abundant in a sample. Data were then analyzed via the Gene Ontology enrichment analysis and visualization tool, querying the 464 proteins identified solely in the control siRNA treatments (and that may have lost ubiquitination when KLHL42 was knocked down) for their gene ontologies, as described previously (34). The set of proteins detected to be directly ubiquitinated and found only in the control siRNA treatment group (49 proteins) was inspected manually based on relevance to the enriched gene ontologies, leading to the identification of PPP2R5ε. Confocal microscopy. SSc cells were seeded in 35-mm MatTek glass-bottom dishes before siRNA knockdown and TGF-β1 treatment. Cells were washed with 1× PBS prior to fixation with 4% paraformaldehyde and permeabilization with 0.5% Triton X-100. Following blocking with 2% BSA in PBS, cells were exposed to SMAD2 primary antibody (1:500) overnight and then to 1:1000 Alexa Fluor 488 secondary antibodies for immunostaining. The nucleus was counterstained with Hoechst 33342. Cells were visualized with a Nikon A1 confocal microscope using 405-, 488-, or 567-nm wavelengths. All experiments were done with a ×60 oil differential interference contrast objective lens. Statistics. All statistical tests were calculated using GraphPad Prism 8. p < 0.05 was used to indicate significance. Densitometry was calculated using ImageJ (National Institutes of Health). Data availability. Ubiquitin proteomics data have been uploaded to the MassIVE repository, ID: MSV000084800. The MS-Viewer data can be accessed using the key axtpvumdao.
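A minimal sketch of the iBAQ-style normalization and ranking described in the quantification paragraph above, using invented intensities and theoretical peptide counts rather than the study's data:

# iBAQ: summed peptide MS peak intensity divided by the number of
# theoretical tryptic peptides, then ranked largest-to-smallest to
# compare relative protein abundance between samples.
proteins = {
    # name: (summed MS peak intensity, theoretical peptide count) -- hypothetical values
    "PPP2R5E": (4.2e8, 28),
    "CUL3":    (9.1e8, 41),
    "ACTB":    (2.6e9, 19),
}
ibaq = {name: intensity / n_peptides for name, (intensity, n_peptides) in proteins.items()}
for name, value in sorted(ibaq.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}\t{value:.3g}")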
2020-02-20T09:15:15.624Z
2020-02-17T00:00:00.000
{ "year": 2020, "sha1": "e5454bc0cad8ef37eade1f71e4213e510e4bf0da", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1074/jbc.ac119.012066", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "389c5136cfdc4bc66c38693d4b2281ffc677d685", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
237868655
pes2o/s2orc
v3-fos-license
In vitro anthelmintic efficacy of Citrullus colocynthis (L.) Schrad on Haemonchus contortus Ethno-veterinary medicinal studies associated with traditional uses of the flora of the Cholistan desert have shown that fruits of Citrullus colocynthis are used for the treatment of helminth infections. The present research was designed to evaluate the anthelmintic efficacy of C. colocynthis against H. contortus. The in vitro anthelmintic effects of aqueous-methanol and ethyl acetate fruit extracts of C. colocynthis against H. contortus were determined through egg hatch and adult motility assays. The effects of four serial dilutions of 25 mg/mL of each extract compared to levamisole (0.55 mg/mL) and oxfendazole (three serial dilutions of 25 μg/mL) were studied. The ethyl acetate and aqueous-methanol extracts paralyzed all adult worms 4 h and 8 h post-exposure, respectively, at a dose of 25 mg/mL each. In the egg hatch assay, about 83.67% and 80.67% of H. contortus eggs failed to hatch with the same dose (i.e. 25 mg/mL) of the ethyl acetate and aqueous-methanol extracts, respectively. The results of the present study strongly support fruit extracts of C. colocynthis as a promising alternative to synthetic drugs against H. contortus. These findings will lead to further in vivo studies to investigate the bio-availability of the active ingredients of the plant and the minimum non-lethal concentration required for treatment of haemonchosis in livestock. The anthelmintic effects of C. colocynthis might be attributed to the presence of phenolic acids. Introduction. Livestock production is a foremost earning source for the agricultural sustainability of poor farmers in rural areas, especially when crop production is not a profitable source of income (KHAJURIA et al., 2013; AHMED et al., 2020).
However, parasitism has always caused major problems in achieving maximum output from livestock production systems (MEHMOOD et al., 2017; IJAZ et al., 2018; NASIR et al., 2018; ZAFAR et al., 2019; AHMAD et al., 2019; BATOOL et al., 2019; LI et al., 2019a, 2019b; KHATER et al., 2020; LI et al., 2020). Haemonchus (H.) contortus, the causative agent of haemonchosis, is one of the major hindrances to small ruminant production (BIBI et al., 2017), and causes an estimated loss of about ten billion dollars annually to the veterinary market (ROEBER et al., 2013). A single parasite sucks approximately 0.05 mL of blood daily (ALIM et al., 2016) and consequently severely damages the gastrointestinal mucosa. Continuous blood loss due to H. contortus causes anemia, anorexia, edema, diarrhea, hypoproteinemia, and emaciation, which ultimately lead to the death of the animal (GITHIGIA et al., 2001). Severe infection inflicts a substantial impact on milk, meat and wool production, reduces weight gain by 23-63%, and in 25% of cases death occurs before weaning. Anthelmintic treatment is a leading prophylactic application used against parasitic infection (CARVALHO et al., 2012). However, frequent and prolonged use with indiscriminate administration and improper formulations of synthetic drugs has regrettably led to an upsurge in resistance in the parasites against these salts (DEVI et al., 2014). As a result, H. contortus is resistant to all broad range anthelmintic families such as benzimidazole, ivermectin and imidazothiazole and is hence becoming an irrefutable problem (DEVI et al., 2014). Resistant parasites become more pathogenic and prolific, and acquire increased adaptability and survivability in all their free-living phases in the host (SANYAL et al., 2003). Apart from resistance against anthelmintics, toxic residues, the high cost of production and the unavailability and inaccessibility of these synthetic drugs, especially in remote rural areas, have spurred investigations to explore alternative methods (QADIR et al., 2010). Among the alternatives, botanicals have been very effective against a wide range of parasites (ABBAS et al., 2017a, 2017b; HUSSAIN et al., 2017; IDRIS et al., 2017; ZAMAN et al., 2017; KHATER et al., 2018; FAYAZ et al., 2019). Plants have many bioactive compounds that can easily kill parasites through multiple mechanisms, and ultimately reduce the chances of the development of anthelmintic resistance. Plants are not only ecofriendly and easily biodegradable, but their bioactive compounds also have less chance of bioaccumulation in animal tissues and the surrounding environment (DELFIN et al., 2017). More reliable and inexpensive herbal medicines are being developed for rapid detection of their active ingredients against the different phases of H. contortus. Amongst these, Citrullus (C.) colocynthis, from the Cucurbitaceae family, is highly xerophytic with large perennial roots, triangular leaves, ovoid brown seeds, and globular green fruit containing white pulp and monoecious flowers (PRESTON et al., 2015). The anthelmintic activity of the fruit of C. colocynthis (Voucher # Ch-03) was previously documented by conducting a survey (July 2010-January 2011) among local respondents in the Cholistan desert, using closed and open ended questionnaires. About 86 plant-based remedies were recorded, of which helminthiasis treatment was found to be most frequent. Local shepherds use dry fruit powder from C. colocynthis to treat helminthiasis (RAZA et al., 2014). In the current study, the anthelmintic efficacy of fruit extracts of C. colocynthis was investigated against the most pathogenic and prevalent H. contortus. Preparation of extracts in aqueous-methanol and ethyl acetate. The preparation of fruit extracts was performed according to the methods previously adopted by IQBAL et al. (2012). For extract preparation in aqueous-methanol solution, fruit powder (50 g) was mixed into a sufficient amount of 70% aqueous-methanol solution for three days, with stirring three times per day for five minutes, and later filtered using porous cloth. The residual plant material was mixed again in aqueous-methanol, and the whole procedure was repeated twice. The three filtrates were then combined and the solvent was evaporated using a rotary evaporator at 40 °C under reduced pressure. Later, evaporation of the solvent was completed in a water bath at 65 °C. Finally, the crude aqueous-methanolic extract (CAME) was stored at 4 °C in the form of a paste. The procedure was repeated similarly to prepare the extract in ethyl acetate: fruit powder (50 g) was soaked in a sufficient amount of ethyl acetate, the further procedure was repeated three times as described above, and finally the paste was stored at 4 °C. High performance liquid chromatography (HPLC) analysis for flavonoids and phenolic compounds. The hydrolysis of CAME was performed as described previously by DEK et al. (2011). Briefly, CAME (50 mg) was weighed and dissolved in 24 mL of methanol. After homogenization, 16 mL of distilled water and 10 mL of 6 M HCl were added to the mixture in order. The mixture was then incubated for 2 h at 95 °C.
It was filtered through a 0.45 µm nylon membrane filter (Biotech, Germany) prior to HPLC analysis. Determination of anthelmintic activity. Adult motility assay. The adult motility assay was performed following the methodology of IQBAL et al. (2012) with modifications. About ten mature adult worms of the same size were collected from the abomasum of freshly slaughtered sheep, washed in PBS and transferred to separate petri dishes containing different concentrations of one of the plant extracts. Two-fold serial dilutions of both extracts were prepared from 25 mg/mL stock solutions by mixing in PBS. Three replications of each concentration and of each control group were carried out as follows:
Aqueous-methanol extract: 25, 12.5, 6.25, 3.12 mg/mL
Ethyl acetate extract: 25, 12.5, 6.25, 3.12 mg/mL
Levamisole: 0.55 mg/mL
PBS: 20 mL/petri plate
Motility was observed under an inverted microscope at intervals of 0, 2, 4, 6, 8, 10 and 12 hours. Worms that did not show any motility in either the head or tail region were picked out and kept in lukewarm PBS for five minutes, and these worms were only counted as alive if their motility revived. Egg hatch assay. The egg hatch assay was performed following the guidelines of the World Association for the Advancement of Veterinary Parasitology, with modifications (ALAWA et al., 2003). To release the eggs, female worms were triturated in a mortar containing PBS. The mixture was filtered using a mesh sieve with 80 µm pores, and then the sieve was washed with PBS. The collected fluid was diluted to a concentration of 200 eggs/mL. Four doses of each plant extract and three concentrations of oxfendazole were used in triplicate in 24-well plates as follows:
Aqueous-methanol extract: 25, 12.5, 6.25, 3.125 mg/mL
Ethyl acetate extract: 25, 12.5, 6.25, 3.125 mg/mL
Oxfendazole: 25, 12.5, 6.125 µg/mL
PBS: 1 mL/well
After incubation at 28 °C for 48 h, unhatched eggs were counted under an inverted microscope. The percentage of unhatched eggs was calculated by dividing the final number of unhatched eggs by the initial number of eggs. Statistical analysis. The collected data on unhatched eggs were subjected to probit analysis, and data on the adult motility assay were analyzed using SPSS. P < 0.05 was considered statistically significant. Results. Adulticidal effects. The adulticidal effects of the ethyl acetate and aqueous-methanol fruit extracts of C. colocynthis are presented in Table 1. The ethyl acetate extract of C. colocynthis, even at lower concentrations (6.25 mg/mL and 3.125 mg/mL), revealed its maximum effect at the end of the observation period, i.e. 12 h. The dose of 25 mg/mL paralyzed all the worms only 4 h after the start of the experiment. The highest tested concentration (25 mg/mL) of the aqueous-methanolic extract paralyzed all the worms 8 h post-exposure, while the lower doses showed their maximum activity about 10 h after the start of the experiment. Therefore, the inhibitory potential of the ethyl acetate extract was found to be greater than that of the aqueous-methanol extract. However, both extracts demonstrated dose-dependent effectiveness against the motility and mortality of the worms, ratifying their anthelmintic activity. Moreover, the ethyl acetate extract exhibited efficiency comparable to levamisole at the higher dose of 25 mg/mL. No death was recorded up until the end of the experiment (12 h) with PBS, which was used as a negative control. Ovicidal effects. Table 2 shows the ovicidal effect of C.
colocynthis ethyl acetate and aqueous-methanol fruit extracts. The ethyl acetate extract of C. colocynthis exhibited maximum ovicidal efficacy (83.67%) at the highest tested dose of 25 mg/mL (P<0.05). Serially diluted concentrations of the ethyl acetate extract inhibited egg hatching in a dose-dependent manner. The highest dose of the aqueous-methanolic extract (25 mg/mL) inhibited the development of 80.67% of eggs into larvae, while lower doses displayed a similar dose-dependent pattern of inhibition of egg hatching to the ethyl acetate extract. The ethyl acetate extract demonstrated slightly higher effectiveness than the aqueous-methanol extract. Discussion. In vitro experimentation plays an imperative role in this context as it shows the direct interactive effect of plant materials on different stages of the life of parasites (FERRIERA et al., 2013). Egg hatch inhibition (COLES et al., 1992), and adult (HOUNZANGBE-ADOTE et al., 2005) and larval (KOTZE et al., 2006) motility assays are usually used for testing the anthelmintic activity of natural drugs. C. colocynthis has been extensively used by farmers as a traditional treatment against helminthiasis in Cholistan (RAZA et al., 2014). Its anthelmintic efficiency has also been previously explored against Pheretima posthuma (TALOLE et al., 2013), H. contortus (ULLAH et al., 2013) and Orthocoelium scoliocoelium (SWARNAKAR and KUMAWAT, 2014). Petroleum ether, ethanol and aqueous extracts of C. colocynthis at 40 mg/mL inhibited the motility of Pheretima posthuma after an average time of 10.14, 2.36 and 4.33 min, respectively (TALOLE et al., 2013). A combination of aqueous-methanolic extracts of C. colocynthis (fruit), Curcuma longa (rhizome) and Peganum harmala (seeds) at 100 mg/mL led to the death of all H. contortus worms 4 h post-exposure (ULLAH et al., 2013). Furthermore, an alcoholic extract of fruit pulp of C. colocynthis at 40 mg/mL for 5 h rendered complete inhibition of motility of Orthocoelium scoliocoelium parasites (SWARNAKAR and KUMAWAT, 2014). The combined synergistic action of the aqueous-methanolic extract of three plants including C. colocynthis previously reported by ULLAH et al. (2013) against H. contortus was tested only against one of its life cycle stages (adult worms) at 100 mg/mL. However, in the current study, the aqueous-methanolic and ethyl acetate extracts of the plant alone inhibited worm motility at 25 mg/mL at 8 h and 4 h post-exposure, respectively. AHMED et al. (2019) investigated the effect of C. colocynthis in vivo against albendazole-resistant Haemonchus in lambs. An extract of C. colocynthis at a dose of 200 mg/kg body weight caused a 95.57% reduction in fecal egg count, while a dose of 50 mg/kg caused a 55.07% reduction in fecal egg count. This difference may be due to the fact that AHMED et al. (2019) performed an in vivo experiment on albendazole-resistant worms. AHMED et al. (2019) observed no untoward response to administration of 200 mg/kg of C. colocynthis to lambs. However, they did not perform any proper study involving histological and biochemical tests to reveal any lethal effect at this dose. From these studies, it is conceivable that the absorption of plant extract ingredients varies between different worms. The difference in anthelmintic activity of plants may be attributed to differences in the solubility of the solid materials of plants.
Solubility is affected by dissimilarities in the polarity of plant components and solvents (MALU et al., 2009). The egg hatch assay was first designed for analysis of benzimidazole resistance in helminths, and is now used for screening the anthelmintic efficiency of plants (IQBAL et al., 2012). In egg hatch assays, the active compounds of the plant extract pierce the egg shell, paralyze the first stage larvae, and thus inhibit egg hatching (PONE et al., 2011). Comparable inhibition of egg hatching has been reported with other plant extracts (KUMAR et al., 2015): 79.6% with methanolic and acetonic extracts of Khaya senegalensis (CHINA et al., 2016) and absolute inhibition with aqueous extracts of Capparis spinosa (AKKARI et al., 2016). PONE et al. (2011) described that the active components of plants inhibit egg hatching by paralyzing the first stage larvae inside the egg shell. Despite these numerous studies, the inhibitory effect of C. colocynthis on egg hatching of any helminth had not previously been investigated. The ethyl acetate and aqueous-methanol fruit extracts of C. colocynthis in the current study inhibited hatching of more than 80% of eggs, showing their effectiveness against the parasite. It is remarkable to note that most plant extracts showed a statistically significant anthelmintic effect against the different life-cycle stages of parasites, lowering the chances of the development of parasitic resistance (HOUNZANGBE-ADOTE et al., 2005). Besides their role in inhibition of egg hatching, these plant anthelmintics also affect various worm developmental stages. In this context, an aqueous extract of Saba senegalensis inhibited the motility of 97.77% of H. contortus worms at 15.00 mg/mL (BELEMLILGA et al., 2016), an acetonic extract of Khaya senegalensis inhibited the motility of 75% of H. contortus worms at 2400 µg/mL (CHINA et al., 2016), a herbal complex prepared from four herbs (Cinnamomum verum, Capsicum annuum, Origanum vulgare and Rosmarinus officinalis) caused 100% H. contortus mortality at 100 mg/mL (ZAMAN et al., 2020), while a methanolic extract of the same plant caused inhibition of all the H. contortus worms even at half the dose (1200 µg/mL) of the acetonic extract (CHINA et al., 2016). The current study showed that the amounts of phenolic acids (m-coumaric acid, gallic acid, vanillic acid) varied from 2.23 ± 0.01 to 10.48 ± 0.03 ppm, and the amount of flavonoids (quercetin) was found to be 2.49 ± 0.02 ppm. UMA and SEKAR (2014) performed phytochemical screening of this plant, and labeled alkaloids and saponins as the active ingredients that might contribute to its anthelmintic activity. Conclusion. It is concluded from the current project that bioactive compounds isolated from fruit extracts of C. colocynthis could be a strong novel substitute for commercial anthelmintic drugs for the control of this extremely prevalent nematode, H. contortus. For this purpose, safety and toxicity studies on C. colocynthis should be conducted in vivo to ascertain the minimum non-lethal concentration required for treatment of haemonchosis in livestock. Statement of novelty. For the first time, the in vitro anthelmintic activity of aqueous-methanol fruit extracts of C. colocynthis against the eggs and adults of H. contortus has been studied; this promising alternative to synthetic drugs against this highly prolific parasite is strongly advocated.
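As noted in the statistical analysis, the egg hatch counts were subjected to probit analysis. The sketch below, with invented counts rather than the study's data, shows the shape of such an analysis: the fraction of unhatched eggs is probit-transformed and regressed against log10 dose, and the LC50 is the dose at which the probit equals zero (50% inhibition). Only standard numpy/scipy functions are used.

import numpy as np
from scipy.stats import norm, linregress

doses = np.array([3.125, 6.25, 12.5, 25.0])  # mg/mL, as in the assay
unhatched = np.array([70, 104, 138, 167])    # hypothetical counts out of 200 eggs per well
total = 200

inhibition = unhatched / total               # fraction of eggs failing to hatch
probits = norm.ppf(inhibition)               # probit transform of the proportions
fit = linregress(np.log10(doses), probits)   # probit vs log10(dose)

lc50 = 10 ** (-fit.intercept / fit.slope)    # dose at which the fitted probit is 0
print(f"Estimated LC50 ~ {lc50:.1f} mg/mL")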
2021-06-29T08:25:31.670Z
2021-06-15T00:00:00.000
{ "year": 2021, "sha1": "f72afd6d560ba897058e09c68d39209251f08cc5", "oa_license": null, "oa_url": "https://doi.org/10.24099/vet.arhiv.1024", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f72afd6d560ba897058e09c68d39209251f08cc5", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
4393760
pes2o/s2orc
v3-fos-license
Is adjuvant chemotherapy beneficial for patients with FIGO stage IC adult granulosa cell tumor of the ovary? Background. To evaluate the association between adjuvant chemotherapy and clinical outcomes in patients with stage IC adult granulosa cell tumor (AGCT). Methods. We performed a retrospective study of patients with stage IC AGCT diagnosed at our hospital from January 1985 to September 2015. We analyzed descriptive statistics, and performed univariate, multivariate, and Kaplan–Meier survival analyses. Results. Sixty stage IC AGCT patients were identified, including 28 in the no adjuvant chemotherapy group (NACG) and 32 in the adjuvant chemotherapy group (ACG). The median follow-up time was 88 months (range: 9-334 months). Sixteen patients developed recurrences, including nine in the NACG and seven in the ACG groups. Univariate analysis identified incomplete surgical staging and initial treatment place as associated with disease-free survival (DFS) (P = 0.003 and 0.038, respectively). Incomplete surgical staging remained a risk factor for recurrence in multivariate analysis (hazard ratio (HR) = 3.883, 95% confidence interval (CI): 1.123-13.430, P = 0.032). The 5-year DFS rates in the NACG and ACG groups were 76.3% and 87.5%, respectively (P = 0.197). Adjuvant chemotherapy was thus not associated with improved DFS. Furthermore, the number of chemotherapy cycles was not associated with recurrence rate (≤3 cycles vs. >3 cycles, HR = 0.613, 95% CI: 0.112-3.351, P = 0.572). Conclusion. Administration of adjuvant chemotherapy does not improve DFS in patients with stage IC AGCT. Further studies with larger samples involving multi-institutional collaboration are needed to validate new treatment regimens for this disease. Background. Granulosa cell tumors (GCTs) are uncommon, accounting for only about 5% of all ovarian malignancies, but comprising 70% of ovarian sex cord-stromal tumors [1]. Most GCTs are adult GCTs (AGCTs), based on their clinical presentation and histological findings. AGCT comprises a clinically and molecularly unique subtype of ovarian malignancy with different behavior from other histological subtypes. The majority of AGCTs are diagnosed at an early stage and have a good prognosis, with 5- and 10-year overall survival rates of 98% and 84%, respectively [2]. However, AGCTs can occasionally be indolent, with a tendency to late relapse, associated with significant morbidity and difficult therapeutic choices. Surgery is the cornerstone of treatment for AGCT, and patients with stage I AGCT have a favorable prognosis following surgical treatment alone, though the National Comprehensive Cancer Network (NCCN) guidelines recommend adjuvant chemotherapy for patients with advanced stage disease, or stage I disease with high risk factors. However, the definition of what constitutes a high risk factor remains unclear, and current evidence regarding the use of adjuvant chemotherapy in women with early stage AGCT is conflicting. Some studies have suggested that women might benefit from adjuvant chemotherapy [3,4], while others failed to show any effect of postoperative chemotherapy on survival or relapse rates [5,6]. This lack of clear evidence regarding the benefit of adjuvant chemotherapy in early stage AGCT makes treatment decisions difficult.
According to the revised FIGO staging system (2014), ovarian epithelial cancer stage IC can be subdivided into intraoperative rupture (IC1), capsule ruptured before surgery or tumor on the ovarian surface (IC2), and malignant cells in ascites or peritoneal washings (IC3). This new FIGO staging provides a more precise definition of the risk in stage IC [7]. Indeed, patients with stage IC have a higher relapse rate and shorter median time to relapse compared with stage IA patients [8,9]. Some authors suggest the use of adjuvant therapy in AGCT stage IC patients with preoperative rupture or malignant ascites [10], but there remains limited information regarding the role of adjuvant chemotherapy in stage IC [8,11]. The aim of this study was thus to evaluate the association between adjuvant chemotherapy and disease-free survival (DFS) in patients with stage IC AGCT. Methods. This study was approved by the ethics committee of our hospital. All patients diagnosed with AGCT at our hospital from January 1985 to September 2015 were reviewed. Sixty patients were diagnosed with FIGO stage IC AGCT. Information was collected from all patients regarding age, menopausal status, tumor diameter, preoperative serum CA125, FIGO stage, type of surgery, adjuvant therapy, relapse characteristics, relapse treatment, and follow-up. Follow-up information was obtained from outpatient files or by telephone interview with patients or their relatives. Tumor stage was based on the revised staging system (FIGO staging system, FIGO Committee on Gynecologic Oncology, 2014) [7]. All patients underwent surgery. Fertility-sparing surgery was defined as preservation of the uterus and at least one ovary. Total abdominal hysterectomy and bilateral salpingo-oophorectomy was classified as radical surgery. Staging was considered complete when it included peritoneal washing, omentectomy (or omental biopsy), multiple peritoneal biopsies, and biopsy of any suspicious area. Pelvic and/or para-aortic lymphadenectomy were optional procedures, according to the surgeon's experience and the intraoperative findings. The exact indications for adjuvant chemotherapy in the present study were unclear because of the retrospective nature of the study, and the decision to administer adjuvant chemotherapy was made by the attending physicians after discussion with the patients. Statistical analysis. Statistical analysis was performed using SPSS version 15 (SPSS, Inc., Chicago, IL, USA). Patient demographics and baseline characteristics were summarized using descriptive statistics. Patients were divided into a no adjuvant chemotherapy group (NACG) and an adjuvant chemotherapy group (ACG). Median values were compared using Mann-Whitney U-tests and frequency distributions were compared using χ2 and Fisher's exact tests. The main objective of the study was to evaluate the association between adjuvant chemotherapy and disease-free survival (DFS), defined as the time from initial surgery to the first recurrence or date of censoring. DFS curves were obtained using the Kaplan-Meier method and compared using log-rank tests. A P value < 0.05 was considered statistically significant. Variables with P < 0.05 on univariate analysis were selected for multivariate analysis. Results. Sixty patients with stage IC AGCT were identified during the study period, including 28 in the NACG group and 32 in the ACG group. The median age at diagnosis was 41 years (range: 23-75 years). Their baseline characteristics are summarized in Table 1.
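A minimal sketch of the survival analysis described in the statistical methods above (Kaplan-Meier estimation with a log-rank comparison of the NACG and ACG groups), using the lifelines package with invented follow-up data; the actual analysis used SPSS.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical disease-free survival data: months to recurrence or censoring
df = pd.DataFrame({
    "months":   [12, 60, 88, 120, 45, 160, 30, 99],
    "recurred": [1, 0, 0, 1, 0, 0, 1, 0],   # 1 = recurrence observed, 0 = censored
    "chemo":    [0, 0, 0, 0, 1, 1, 1, 1],   # 1 = adjuvant chemotherapy (ACG)
})

kmf = KaplanMeierFitter()
for group, label in [(0, "NACG"), (1, "ACG")]:
    sub = df[df["chemo"] == group]
    kmf.fit(sub["months"], event_observed=sub["recurred"], label=label)
    print(label, "median DFS:", kmf.median_survival_time_)

result = logrank_test(
    df.loc[df.chemo == 0, "months"], df.loc[df.chemo == 1, "months"],
    event_observed_A=df.loc[df.chemo == 0, "recurred"],
    event_observed_B=df.loc[df.chemo == 1, "recurred"],
)
print("log-rank p =", result.p_value)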
The FIGO distributions were as follows: surgical spill in 34 patients (IC1), capsule ruptured before surgery or tumor on the ovarian surface in 23 patients (IC2), and malignant cells in ascites or peritoneal washings in three patients (IC3). All patients underwent upfront surgery, including 26 (43.3%) who underwent complete surgical staging and 34 (56.7%) who did not. Twenty-four patients (40%) had pelvic and/or para-aortic lymphadenectomy during surgery, and the removed lymph nodes were all negative for metastatic AGCT. Thirty-seven (61.7%) patients received surgical treatment in our center and 23 (38.3%) were operated on elsewhere and then referred for subsequent evaluation postoperatively. Thirty-two (53.3%) patients received adjuvant chemotherapy, with a mean of 3.2 (range: 1-6) chemotherapy cycles. The chemotherapy regimens included bleomycin, etoposide, and cisplatin (BEP) in 11 patients; cisplatin, vincristine, and bleomycin in seven; cisplatin and cyclophosphamide in four; paclitaxel and carboplatin (TC) in five; and other regimens in five patients. Among the 32 patients who received chemotherapy, eight (25%) received more than three cycles and 24 (75%) received three or fewer cycles. Survival analysis. The median follow-up time was 88 months (range: 9-334 months). During the study period, sixteen patients (26.7%) experienced at least one recurrence, including nine in the NACG group and seven in the ACG group. Among all patients with recurrences, the median time to recurrence was 66 months (range: 7-165 months). The anatomic locations of the first recurrences included the pelvis alone in eight patients, the abdomen alone in two, and the pelvis plus abdomen in six. Thirteen patients underwent debulking surgery plus chemotherapy and three patients received surgical reduction alone. Eleven patients (69%) developed a second recurrence (4 pelvic plus abdominal relapse; 3 pelvic; 3 abdominal; 1 hepatic involvement) after a median time of 48 months from diagnosis of the first recurrence (range: 24-105 months). Five patients were treated with surgery, five with surgery plus chemotherapy, and one with palliative care. The associations between clinical factors and DFS in the 60 patients with stage IC AGCT are shown in Table 2. According to univariate analysis, menopause, FIGO stage, adjuvant chemotherapy, and lymph node dissection were not associated with DFS, while surgical staging (P = 0.003) and initial treatment place (P = 0.038) were significantly associated with DFS (Fig. 1). Incomplete surgical staging (hazard ratio (HR) = 3.883, 95% confidence interval (CI): 1.123-13.430, P = 0.032) remained a significant predictive factor for recurrence in multivariate analysis. The 5-year DFS rates in the NACG and ACG groups were 76.3% and 87.5%, respectively (P = 0.197) (Fig. 2). Further analysis of the ACG subgroups revealed no association between the number of chemotherapy cycles and recurrence (≤3 cycles vs. >3 cycles, HR = 0.613, 95% CI: 0.112-3.351, P = 0.572) (Fig. 3). Discussion. Adjuvant chemotherapy was not associated with improved DFS in the current cohort of 60 patients with stage IC AGCT. Furthermore, the number of cycles of chemotherapy was not associated with improved DFS among those patients who received postoperative chemotherapy. AGCT is a late-relapse disease, and long-term follow-up is necessary to obtain reliable data [12].
The median follow-up time in our study was significantly longer than in other recent reports (88 months; range: 9-334 months) [3,5], and the recurrence rate was 26.7%, which was consistent with previous reports [4,13], suggesting that this was a realistic and representative reflection of the natural history of the disease. NCCN guidelines suggest that adjuvant chemotherapy should be considered in patients with early-stage disease but with high risk factors (e.g., high mitotic index, tumor rupture, or incomplete surgical staging); however, our data do not appear to support this recommendation, and showed that adjuvant chemotherapy did not protect against recurrence in patients with AGCT. Adjuvant chemotherapy is not always administered in our practice, and almost half (46.7%) of all patients with stage IC disease did not receive adjuvant chemotherapy. However, the retrospective nature of the trial means that the reasons why these patients did not receive adjuvant chemotherapy were unknown. Previous studies have provided conflicting reports regarding the role of adjuvant chemotherapy in AGCT. Several studies showed beneficial effects of platinum-based treatments [3,14,15], though most of these included patients with advanced stage disease, which has a poorer prognosis than early stage disease, and unadjusted survival analyses meant that the role of chemotherapy in stage I disease remained obscure. However, adjuvant chemotherapy was not associated with a reduced recurrence rate, even when the survival analysis was restricted to patients with stage I AGCT [6,12]. Mangili et al. also recently reported no difference in DFS between stage IC patients with or without adjuvant chemotherapy [11]. The potential toxicity of chemotherapy, including secondary acute leukemia [16] and cardiovascular disease [17], together with a lack of obvious benefit, suggests that more studies are needed to clarify the benefit and thus inform the decision to administer chemotherapy for early stage AGCT. BEP is the most widely used first-line adjuvant therapy regimen among patients for whom adjuvant therapy has been deemed appropriate [18]. However, these agents are associated with potentially serious toxicities, such as myelosuppression and the pulmonary disorders associated with bleomycin [19]. Prospective trials are therefore needed to assess the therapeutic ratio of regimens other than the widely used BEP. NCCN guidelines (category 2B) also recommend TC, and the Gynecologic Oncology Group is currently developing a randomized phase II trial to compare TC with BEP, with progression-free survival as the primary outcome (ClinicalTrials.gov Identifier NCT01042522). The expectation is that TC may be associated with reduced toxicity and similar progression-free survival compared with BEP. BEP and TC regimens were both used in some of our patients; however, the retrospective nature of the study and the variety of chemotherapy regimens and doses used meant that it was difficult to draw any conclusions regarding the relative values of the different regimens in this study. Age, FIGO stage, staging surgery, and initial treatment place have been reported as prognostic factors in AGCT [3,12]. Surgical staging was associated with improved outcomes in the present series, with 5-year DFS rates of 93.8% and 70.6% in patients with and without complete staging, respectively (P = 0.003). These results were consistent with previous studies demonstrating that the recurrence rate was significantly increased in incompletely staged patients.
In Park's report, completely staged patients had no recurrences or deaths, while recurrence was observed in 14.3% of patients without complete staging [3]. Seagle et al. analyzed prognostic information from a national cancer database and found that incomplete surgical staging was associated with increased risk of death [20]. These reports highlight the need for an accurate diagnosis at presentation. Initial treatment at an outside institution was a risk factor for recurrence in univariate analysis (5-year DFS rate: 91.9% vs 72.2%), though this was not a significant factor in multivariate analysis. This may be explained by suboptimal surgical extent, delayed initiation of adjuvant treatment in high-risk patients, or an inaccurate pathological diagnosis. This study raises questions over the importance of chemotherapy in patients with stage IC AGCT. The lack of any obvious benefit of adjuvant chemotherapy suggests that new effective treatment modalities are needed, based on a deeper understanding of the pathogenesis of AGCT. Shah et al. examined FOXL2 gene mutations in ovarian granulosa cell tumors [21] and showed that FOXL2 mutation was associated with increased CYP17 expression. Inhibition of this enzyme may thus reverse the effect of FOXL2 mutation. Additional inhibition of CYP17 may be achieved by novel agents such as abiraterone and ketoconazole [22], and therapies targeting this mechanism may prove to be more useful and less toxic than traditional chemotherapy [5]. Other novel potential targets (vascular endothelial growth factor (VEGF), HER2) are being investigated. In an era of precision medicine, postoperative patient care and decisions about adjuvant chemotherapy can be individualized based on these immunohistochemical factors, such that patients whose GCTs show high expression levels of VEGF or HER2 can be treated with bevacizumab or trastuzumab/imatinib, respectively [23,24]. The current study was limited by its retrospective nature, including heterogeneity in terms of staging and work-up, and variable chemotherapy regimens, as well as by the rarity of the disease. However, the present study included a relatively large cohort of stage IC patients and the results thus contribute to the limited body of knowledge on this condition. Further studies in large series are needed to characterize patients with the stage IC subtype who can be spared adjuvant chemotherapy, and to define the real risk factors in stage IC. However, prospective clinical trials are difficult and time-consuming given the rarity of the disease, its indolent nature, and its overall good prognosis in the early stage. International collaboration is therefore needed to generate large studies with the aim of validating new treatment regimens for patients with high-risk early stage AGCT. Conclusions. Adjuvant chemotherapy does not improve DFS in patients with stage IC AGCT. Further clinical trials, including using novel agents, are needed to characterize patients with stage IC AGCT who can be spared adjuvant chemotherapy, and to define the real risk factors in this disease. Funding: None. Availability of data and materials: The dataset supporting the conclusions of this article is included within the article and its additional files.
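The multivariate analysis reported above is a Cox proportional hazards model; its output (hazard ratios with 95% confidence intervals, such as the HR of 3.883 for incomplete staging) can be reproduced in form with the lifelines package. A sketch with invented data, not the study cohort:

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical DFS data with the two univariately significant covariates
df = pd.DataFrame({
    "months":             [12, 60, 88, 120, 45, 160, 30, 99, 66, 140],
    "recurred":           [1, 0, 0, 1, 0, 0, 1, 0, 1, 0],
    "incomplete_staging": [1, 0, 0, 1, 1, 0, 1, 0, 1, 0],
    "treated_elsewhere":  [0, 0, 1, 1, 0, 0, 1, 0, 1, 0],
})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="recurred")
cph.print_summary()  # the exp(coef) column gives the hazard ratios with 95% CIs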
2018-03-28T09:39:02.504Z
2018-03-27T00:00:00.000
{ "year": 2018, "sha1": "94cf7d4eda470dbee9e7e712d0ff462dcd3e6099", "oa_license": "CCBY", "oa_url": "https://ovarianresearch.biomedcentral.com/track/pdf/10.1186/s13048-018-0396-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "94cf7d4eda470dbee9e7e712d0ff462dcd3e6099", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244004858
pes2o/s2orc
v3-fos-license
The Diagnosis Algorithm of Chronic Hypokalemia in Bartter Syndrome and Gitelman Syndrome: A Case Report
Case: A 27-year-old woman presented with plasma potassium 1.7 mEq/L, increased urine potassium (71.1 mmol/24 hours), increased urine sodium (306 mmol/24 hours), and increased urine chloride (342 mmol/24 hours); plasma magnesium levels were normal (1.91 mg/dL). KCl infusion was given to correct the electrolyte imbalance. Discussion: Several examinations must be performed to confirm the cause of hypokalemia. The diagnosis in this patient was suspected to be Bartter syndrome or Gitelman syndrome, because there was increased urinary potassium excretion, normotension, no suspicion of metabolic acidosis, no symptoms of nausea and vomiting, and no history of diuretic drug use.
INTRODUCTION
Hypokalemia is a common disorder characterized by low plasma potassium levels (<3.5 mEq/L) (1). Hypokalemia can be caused by abnormal potassium loss (drug use, gastrointestinal loss, kidney loss), transcellular shifts (use of insulin drugs, beta agonists, alkalosis, thyrotoxicosis, etc.), inadequate intake (anorexia, dementia), and pseudohypokalemia (delayed examination of samples, and significant leukocytosis (>75,000 cells/mm3)) (2). Hypokalemia can also be caused by genetic disorders. Bartter syndrome and Gitelman syndrome are congenital disorders caused by mutations of several genes, inherited in an autosomal recessive manner, that damage the kidney tubular handling of sodium, potassium, and chloride. The incidence of Bartter syndrome is quite rare, about 1:1,000,000 of the total population (3), while Gitelman syndrome occurs in approximately 1:40,000 Caucasian individuals (4). The main symptom of Bartter syndrome and Gitelman syndrome is chronic hypokalemia, which can cause headaches, dizziness, constipation, cramps, and muscle weakness (5). In some individuals, Bartter syndrome and Gitelman syndrome can result in a significant electrolyte imbalance. This condition can cause irregular heartbeats (arrhythmias), which can lead to sudden cardiac arrest (6). The underlying cause of hypokalemia needs to be identified so that therapy can be optimal; it can be determined by working through the diagnostic algorithm of hypokalemia. Several examinations must be performed to confirm the diagnosis. In this case report, we discuss a case of chronic hypokalemia in a young female patient with suspected renal tubulopathy, with a focus on the algorithm that can be followed in establishing the cause of the hypokalemia.
Case Illustration

A 27-year-old woman was brought to the Emergency Room at the University of Muhammadiyah Malang Hospital with complaints of weakness in both legs for 1 day. The weakness was so severe that the patient could not walk, and the condition did not improve with rest. In addition, the patient complained of nausea and had had constipation for 2 days. The patient had a history of recurrent hypokalemia over the previous 5 years, with quite frequent hospitalization episodes: almost once a month the patient was hospitalized for hypokalemia of unknown cause. There was no history of thyroid disease, and the patient had never taken diuretic drugs. On physical examination, the patient was calm. Vital signs were: BP 110/60, regular pulse rate of 88x/minute, body temperature 36.7°C, respiratory rate 14x/minute, and oxygen saturation 99% on room air. The ECG showed a normal sinus rhythm with a heart rate of 88x/minute, regular, with normal T waves. Laboratory tests showed that the complete blood count was within normal limits, while on electrolyte examination the plasma potassium level was quite low (1.7 mEq/L). Regarding previous history, in 2016 the patient had an ultrasound, MRI, and urine electrolyte examination. Ultrasound found focal ectasis in the superior and inferior poles of the right kidney and a complex cyst in the superior pole of the left kidney. MRI showed no mass or enhancement in the right or left adrenal glands; the complex cyst in the superior pole of the left kidney met Bosniak IIF criteria, focal ectasis was found in the major calyx of the inferior pole of the right kidney, and a simple cyst in the superior pole of the right kidney met Bosniak I criteria. Urine electrolyte examination showed urine sodium of 306 (reference 40-220) mmol/24 hours, urine potassium of 71.1 (25-125) mmol/24 hours, and urine chloride of 342 (110-250) mmol/24 hours. Plasma magnesium was also checked, with a result of 1.91 (1.58-2.55) mg/dL. This patient thus had excessive urinary sodium and potassium excretion, while the magnesium level was within normal limits. The patient was diagnosed with chronic hypokalemia with suspected tubular abnormalities of the kidney, leading to Bartter syndrome or Gitelman syndrome. After consultation with an internist, the patient received KCl 50 mEq infusion therapy over 12 hours, repeated up to 3 times. For oral therapy, the patient received potassium sustained-release (KSR) tablets 3x600 mg a day. In addition, the patient received spironolactone 1x100 mg. The serum electrolytes were rechecked after the KCl infusion therapy was finished. The patient was discharged in a stable condition with no complaints and a plasma potassium of 3.7 mEq/L.

Discussion

Hypokalemia is a condition that must be treated immediately. The appearance of warning signs in hypokalemia requires immediate treatment. Conditions that constitute a warning sign are severe hypokalemia (<2.5 mEq/L), hypokalemia with sudden onset, palpitations, muscle weakness, changes in ECG waves, or a history of heart disease or underlying hepatic cirrhosis.
(2) In this case, a 27-year-old female patient experienced several warning signs, such as muscle weakness in both lower extremities that occurred suddenly and had worsened over 1 day, and a very low potassium level (1.7 mEq/L). Gastrointestinal causes of hypokalemia such as low intake, nausea, vomiting, and diarrhea could be excluded. A history of drugs that can affect potassium levels, such as insulin, thyroid hormone boosters, beta adrenergic agents, and diuretics, could also be excluded. From the past history and physical examination, the patient was also not hyperthyroid and did not show any metabolic acidosis. The hypokalemia in this patient was suspected to be due to tubulopathy of the kidneys. Investigations such as an electrocardiogram are useful for checking for arrhythmias or other electrolyte abnormalities. Laboratory tests showed a very low potassium level (1.7 mEq/L). The cause of the hypokalemia in this patient was determined using urine electrolyte examination. A urine potassium value of more than 30 indicates renal potassium wasting. If the urine potassium value is high, the patient's blood pressure can be evaluated next. In hypertensive patients, hyperaldosteronism is suspected, whereas in normotensive patients the decreased ability of the kidneys to retain potassium can be subdivided according to the accompanying acid-base disorder. Metabolic acidosis is usually caused by diabetic ketoacidosis and renal tubular acidosis, whereas metabolic alkalosis can be caused by severe vomiting, use of diuretics, or Bartter and Gitelman syndrome (7). Blood gas analysis was not performed in this patient, so the patient's acid-base status could not be determined with certainty. However, the anamnesis and physical examination did not reveal any symptoms suggesting metabolic acidosis. The suspected diagnosis was Bartter syndrome or Gitelman syndrome, because the patient had increased urinary potassium excretion, a normotensive condition, no suspicion of metabolic acidosis, no symptoms of nausea or vomiting, and no history of diuretic use. The differences between Bartter syndrome and Gitelman syndrome are summarized in Table 1 (4,6,8). From the table, the point that leads to Bartter syndrome in this patient is the normal magnesium level. The points that lead to Gitelman syndrome are the age at presentation, which is usually adulthood, and the common development of neuromuscular disorders such as muscle weakness, spasms, and cramps. The diagnosis most suitable for this patient is therefore more likely Gitelman syndrome. To definitively distinguish these two syndromes, molecular genetic testing can be performed to detect mutations in specific genes (6). Molecular genetic testing of the SLC12A1, KCNJ1, CLCNKA, and BSND genes can be performed for Bartter syndrome; Gitelman syndrome can be detected by molecular genetic testing of the SLC12A3 or CLCNKB genes (4,6). Currently, it is possible to diagnose Bartter and Gitelman syndrome in one patient: Bartter syndrome type III shares characteristics with Gitelman syndrome, and some children with Bartter syndrome type III also present with a Gitelman phenotype, as CLC-NKB is found in the distal convoluted tubule and in the connecting tubule (8). In this case, molecular genetic examination was not performed due to limited facilities.
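The branching work-up just described can be condensed into a small decision function. The following is an illustrative sketch only, written in R; the function name and return labels are ours, not from the original report, and the thresholds are taken from the text above:

# Hypothetical sketch of the hypokalemia work-up described above.
# Urine K > 30 mmol/24 h suggests renal potassium wasting; blood pressure
# and acid-base status then subdivide the renal causes.
classify_hypokalemia <- function(urine_k_24h, hypertensive,
                                 acid_base = c("alkalosis", "acidosis", "unknown")) {
  acid_base <- match.arg(acid_base)
  if (urine_k_24h <= 30) {
    return("extrarenal loss or transcellular shift (GI loss, low intake, drugs)")
  }
  if (hypertensive) {
    return("suspect hyperaldosteronism")
  }
  switch(acid_base,
         acidosis  = "suspect diabetic ketoacidosis or renal tubular acidosis",
         alkalosis = "suspect severe vomiting, diuretic use, or Bartter/Gitelman syndrome",
         unknown   = "acid-base status needed before further classification")
}

# The reported patient: urine K 71.1 mmol/24 h, normotensive,
# acidosis not suspected clinically.
classify_hypokalemia(71.1, hypertensive = FALSE, acid_base = "alkalosis")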
Ultrasound examination found focal ectasis in the superior and inferior poles of the right kidney, as well as complex cysts in the superior and inferior poles of the left kidney. Abdominal MRI showed no mass or enhancement in the right or left adrenal glands; a complex cyst in the superior pole of the left kidney meeting Bosniak IIF criteria; focal ectasis in the major calyx of the inferior pole of the right kidney; and a simple cyst in the superior pole of the right kidney meeting Bosniak I criteria. Chronic hypokalemia is known to induce renal cyst formation in some diseases, including primary aldosteronism, distal renal tubular acidosis, Liddle disease, and apparent mineralocorticoid excess syndrome (9). Renal cyst formation in Bartter syndrome and Gitelman syndrome has rarely been reported before. Although the precise mechanism underlying the development of renal cysts in our patient remains unclear, chronic hypokalemia may have contributed to cyst development (9). The Bosniak classification can be used to predict the risk of a renal cyst developing into malignancy: Bosniak I cysts are benign, while Bosniak IIF cysts are likely to be benign and very rarely develop into malignancy (10). KCl infusion therapy was given at a dose of 50 mEq in 500 cc 0.9% NaCl, finished within 12 hours, and the administration was repeated up to 3 times. For oral therapy, KSR 3x1 tablets a day and spironolactone 1x100 mg were given. The serum electrolytes were rechecked after the KCl infusion therapy was finished. This therapy is in accordance with theory: in severe hypokalemia (potassium <2.5 mEq/L), KCl can be given at a maximum dose of 60 mEq dissolved in 1000 cc 0.9% NaCl at a rate below 10 mEq/hour, and oral potassium replacement at a dose of 40-60 mEq can increase potassium levels by 1-1.5 mEq/L (7,11). Spironolactone is a specific aldosterone antagonist that binds competitively to the aldosterone-dependent sodium-potassium exchange site in the distal tubule; it increases water excretion while retaining potassium (5). Bartter syndrome and Gitelman syndrome are usually accompanied by hypomagnesemia (6). The magnesium level in this patient was normal (1.91 mg/dL), so no additional therapy was required to correct magnesium levels. Currently, there is no definitive treatment for Bartter and Gitelman syndrome; treatment focuses on fluid and electrolyte correction. The severity of symptoms (and associated complications) varies from person to person. People with Bartter and Gitelman syndrome must take medications consistently, as prescribed, throughout their lifetime, and must be careful to maintain an adequate fluid and electrolyte balance (6).

Conclusion

Hypokalemia can be fatal and life-threatening if not treated immediately. In addition, the cause of hypokalemia must be identified. Tubulopathic disorders of the kidneys such as Bartter syndrome and Gitelman syndrome are very rare, but it is important to recognize them so that the therapy given to patients can be optimal.
Term Idiopathic Polyhydramnios, and Labor Complications

Background and Aim: Polyhydramnios is associated with an increased risk of various adverse pregnancy outcomes, yet complications during labor have not been sufficiently studied. We assessed the labor and perinatal outcomes of idiopathic polyhydramnios during term labor. Methods: Retrospective cohort study at a tertiary medical center between 2010 and 2014. Women with idiopathic polyhydramnios, defined as an amniotic fluid index (AFI) greater than 24 cm or a deep vertical pocket (DVP) > 8 cm (cases), were compared with women with a normal AFI (5-24 cm) (controls). Statistics: descriptive, means ± SDs, medians + IQR. Comparisons: chi-square, Fisher's exact test, Mann-Whitney test, multivariate logistic models. Results: During the study period 11,065 women had ultrasound evaluation completed by a sonographer within two weeks of delivery. After excluding pregnancies complicated by diabetes (pre-gestational or gestational), fetal anomalies, IUFD, multifetal pregnancies, elective cesarean deliveries (CD) or missing data, we included 750 cases and 7000 controls. The degree of polyhydramnios was mild in 559 (75.0%) cases (AFI 24-30 cm or DVP 8-12 cm), moderate in 137 (18.0%) cases (AFI 30-35 cm or DVP 12-15 cm) and severe in 54 (7.0%) cases (AFI > 35 cm or DVP > 15 cm). Idiopathic polyhydramnios was associated with a higher rate of CD, 9.3% vs. 6.2%, p = 0.004; a higher rate of macrosomia, 22.8% vs. 7.0%, p < 0.0001; and a higher rate of neonatal respiratory complications, 2.0% vs. 0.8%, p = 0.0001. A multivariate regression analysis demonstrated an independent relation between polyhydramnios and higher rates of CD, aOR 1.62 (CI 1.20-2.19, p = 0.002), and composite adverse neonatal outcome, aOR 1.28 (CI 1.01-1.63, p = 0.043). Severity of polyhydramnios was significantly associated with higher rates of macrosomia and CD (p for trend <0.01 in both). Conclusions: Term idiopathic polyhydramnios is independently associated with macrosomia, CD and neonatal complications. The severity of polyhydramnios is also associated with macrosomia and CD.

Introduction

Amniotic fluid volume assessment at term using prenatal ultrasound is common for various indications. Examples include fetal biophysical profile studies, assessment of post-term pregnancies or decreased fetal movements, and assessment of fetal weight. Variations in amniotic fluid volume have been considered an indicator of adverse perinatal outcome [1][2][3][4][5][6]. Maternal, fetal and placental conditions associated with polyhydramnios include maternal diabetes mellitus, rhesus iso-immunization, congenital and chromosomal abnormalities and multiple gestation. However, in as many as 70.0% of cases, no cause is found antenatally and the polyhydramnios is referred to as idiopathic [9]. Previous reports have shown polyhydramnios to be associated with intrauterine fetal death (IUFD), preterm delivery, an unstable lie, malpresentation, cord prolapse and placental abruption [7,10,11]. However, studies examining the association between idiopathic polyhydramnios, its severity and the fetal and maternal outcomes during labor are scarce. We therefore undertook this study to assess the perinatal outcome of term idiopathic polyhydramnios.
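For reference, the AFI/DVP case definitions and severity grades stated in the abstract can be expressed as a compact classifier. This is a hedged sketch in R; the function and argument names are hypothetical:

# Hypothetical sketch of the study's amniotic fluid grading (AFI and DVP in cm).
grade_amniotic_fluid <- function(afi = NA, dvp = NA) {
  if ((!is.na(afi) && afi > 35) || (!is.na(dvp) && dvp > 15)) return("severe polyhydramnios")
  if ((!is.na(afi) && afi > 30) || (!is.na(dvp) && dvp > 12)) return("moderate polyhydramnios")
  if ((!is.na(afi) && afi > 24) || (!is.na(dvp) && dvp > 8))  return("mild polyhydramnios")
  if (!is.na(afi) && afi >= 5)                                return("normal (control range)")
  "low or not classifiable from these inputs"
}

grade_amniotic_fluid(afi = 32)   # "moderate polyhydramnios"
grade_amniotic_fluid(dvp = 16)   # "severe polyhydramnios"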
Methods

We performed a retrospective cohort study of all women with a singleton pregnancy who delivered at term (37.0-42.0 weeks of gestation). All women included in the study had an ultrasound completed in our hospital, either because they were having antenatal care within our system or because they had a labor and delivery triage visit for various reasons such as false labor, preterm labor and so on. Hence, the diagnosis of polyhydramnios or a normal amount of amniotic fluid was based on an ultrasound performed at our hospital within 14 days of term delivery. We aimed to assess the perinatal outcome of term idiopathic polyhydramnios. Our clinical practice regarding the management of term polyhydramnios has changed over time, specifically since the publication of the landmark study by Pilliod et al. [11] from 2015, which suggested a much higher risk of IUFD in term polyhydramnios. Up until 2015, polyhydramnios was not a reason for induction of labor; since 2015, however, we have offered and suggested induction of labor for women with polyhydramnios based on severity and gestational age. Since our aim in this study was to assess the natural perinatal outcome of term idiopathic polyhydramnios, we included women who delivered either spontaneously or after induction of labor for other reasons between 2010 and 2014. We excluded women who delivered preterm (less than 37.0 weeks) or post-term (42.1 weeks onward), women with fetal anomalies (detected prenatally or postnatally), women with intra-uterine fetal death (IUFD), multifetal pregnancies, women who underwent elective cesarean deliveries (CD), women diagnosed with diabetes (pre-gestational or gestational) and women with missing data. Data were extracted de-identified from our computerized database, which includes participants' demographics, procedures, diagnoses and other pertinent coding, all of which are extracted from the electronic medical record and updated during the hospital stay. Women with idiopathic polyhydramnios were defined as those with an amniotic fluid index (AFI) greater than 24 cm or a deep vertical pocket (DVP) > 8 cm (cases) and were compared with women with a normal AFI (5-24 cm) (controls). Ultrasound examinations were performed by either a registered and trained sonographer or a maternal-fetal medicine-trained physician, using a GE Voluson E8, Expert, 6-12 MHz, or Voluson 730.

Setting

Shaare Zedek Medical Center is a university-affiliated tertiary medical center with a large obstetric service; roughly fifteen thousand deliveries are attended annually. National health and drug insurance plans cover all women for antenatal and peripartum care. At our medical center the computerized medical records are continuously updated in real time during labor and delivery by the attending healthcare professionals. The data are audited intermittently by trained technical personnel to guarantee their validity. Demographic and obstetric characteristics, as well as outcome data as defined and detailed below, were extracted from the electronic database management software. Hence, for this study we did not use diagnosis codes, but rather the diagnoses as registered in the database, based on the course of labor, the postpartum course and the discharge letters of the mother and the newborn.

Definition of Measures and Outcomes

The primary outcome was the mode of delivery: vaginal delivery (spontaneous or assisted) vs. cesarean delivery (CD). Secondary outcomes were maternal and neonatal adverse events.
Maternal adverse events included postpartum hemorrhage (PPH, estimated blood loss > 500 mL and/or hemoglobin drop > 3 g/dL), chorioamnionitis, placental abruption, shoulder dystocia, perineal tear grades 3 and 4 (OASIS) and readmission to the hospital within 6 weeks. Definitions: obstetrical anal sphincter injury (OASIS) was classified as a perineal tear grade 3, which involves the external and/or internal sphincter, or a perineal tear grade 4, which involves the rectal mucosa; OASIS is diagnosed by an obstetrician and corrected in the operating room. Postpartum hemorrhage (PPH) was defined as blood loss of over 500 mL after vaginal delivery and over 1000 mL after CD. Neonatal adverse events included the following: 5-min Apgar score < 7, neonatal intensive care unit (NICU) admission for > 72 h, macrosomia (birthweight > 4000 g), jaundice (defined as a need for phototherapy), hypoglycemia (< 40 mg/dL), respiratory complications (defined as transient neonatal tachypnea or respiratory distress syndrome), metabolic complications, sepsis, asphyxia, convulsions, necrotizing enterocolitis (NEC), clavicle fracture, intraventricular hemorrhage (IVH) and neonatal death. Neonatal adverse events were derived from newborn discharge records as documented by the neonatologist. The composite adverse neonatal outcome was defined as the presence of one or more of the neonatal adverse events.

Statistical Analyses

Descriptive continuous variables were reported as medians with interquartile ranges or means ± SD. Categorical variables were reported as counts and proportions. Univariate analyses included the chi-square test or Fisher's exact test to compare categorical variables. The effect of polyhydramnios on continuous variables was tested by the unpaired Student t-test or Mann-Whitney test; the choice of a parametric or nonparametric test depended on the distribution of the continuous variable. The "dose-response" relationship between the severity of polyhydramnios and outcomes was tested and reported with p-for-trend. Subgroup analysis was performed in order to assess the impact of polyhydramnios on CD in nulliparous deliveries. Two multivariable logistic regression analyses were developed in order to check for an independent association between polyhydramnios and (1) CD and (2) any neonatal complication. Covariates in the models included maternal age, ethnicity, hypertensive disorders of pregnancy, nulliparity, prior miscarriages, prior CD, induction of labor, use of epidural analgesia, oxytocin augmentation during labor and extreme neonatal weights, defined as small for gestational age (<10th percentile) and large for gestational age (>90th percentile) based on the Israeli neonatal birthweight charts by Dollberg S. et al. [12]. The discriminatory power of the models was examined using the c statistic (AUC). Odds ratios (OR) with 95% confidence intervals (CIs) were reported. All comparisons were two-tailed, and p < 0.05 was considered statistically significant. Analyses were carried out using the IBM SPSS version 22 statistical package. The study protocol was approved by the institutional Helsinki Committee (IRB number P24.15).
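To illustrate the multivariable models just described, a minimal sketch in R on simulated data follows. The study itself used SPSS; the data-generating step and variable names here are hypothetical, and only a subset of the listed covariates is shown:

# Hypothetical sketch of a multivariable logistic regression of the kind
# described above. The real model also included ethnicity, hypertensive
# disorders, prior miscarriages, prior CD, induction, epidural, oxytocin
# and SGA/LGA; 'labor' below is simulated for illustration only.
set.seed(3)
n <- 500
labor <- data.frame(
  polyhydramnios = rbinom(n, 1, 0.10),
  maternal_age   = rnorm(n, 29, 6),
  nulliparity    = rbinom(n, 1, 0.33)
)
labor$cesarean <- rbinom(n, 1, plogis(-3 + 0.5 * labor$polyhydramnios +
                                        0.03 * labor$maternal_age))

fit <- glm(cesarean ~ polyhydramnios + maternal_age + nulliparity,
           family = binomial, data = labor)

# Adjusted odds ratios with 95% confidence intervals, the form in which
# the results (e.g. aOR 1.62) are reported in the text.
exp(cbind(aOR = coef(fit), confint(fit)))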
Results

During the study period, a total of 11,065 women had an ultrasound evaluation completed within two weeks of delivery. There were 9919 women (89.6%) who had normal or low amniotic fluid and 1146 women (10.4%) who had polyhydramnios. After applying the exclusion criteria, there were 7000 women (90.3%) with a normal AFI (controls) and 750 women (9.7%) with polyhydramnios (study group). The degree of polyhydramnios was mild in 559 (75.0%) cases, moderate in 137 (18.0%) cases and severe in 54 (7.0%) cases. There were no significant differences in the polyhydramnios rate over the study period (range 8.5-11.2%, p = 0.191). Clinical and demographic characteristics of the women are presented in Table 1. Women in the study group were older than women in the control group, with a mean age of 29.5 ± 5.7 years vs. 28.4 ± 5.7 years (p < 0.0001). The percentage of women of advanced maternal age (>35 years) was also higher in the study group compared to the control group, 17.3% vs. 12.9% (p = 0.001). Women in the study group were less likely to be nulliparous, at 23.0% compared to 35.0% in the control group (p < 0.0001). The median [IQR] time between the ultrasound and the delivery was similar in both groups. Primary outcome: women with polyhydramnios had a higher incidence of CD, 9.3% vs. 6.2% (p = 0.004). Furthermore, primiparous women with polyhydramnios had a higher incidence of CD compared to multiparous women, 19.1% vs. 10.1% (p < 0.0001). The CD rate was also higher in women with moderate/severe polyhydramnios vs. those with a normal amount of amniotic fluid or mild polyhydramnios (12.6% vs. 6.4%, p = 0.001). Indications for CD differed between the groups and are presented in Table 2. Arrested labor was the leading cause of CD in the study group (44.3% vs. 37.7%, p = 0.294), whereas non-reassuring fetal heart rate was the leading cause of CD in the control group (47.8% vs. 30.0%, p = 0.005). There were two cases of umbilical cord prolapse in women with polyhydramnios (0.3%) and five cases in the control group (0.07%). Severity of polyhydramnios was significantly associated with higher rates of macrosomia and CD (p for trend <0.01 in both), as presented in Figure 1. A multivariate regression model accounting for the characteristics detailed in the Methods section revealed that polyhydramnios was independently associated with higher rates of CD, at an adjusted odds ratio (aOR) of 1.62 (CI 1.2-2.2, p = 0.002), AUC 0.83 (95%CI 0.81-0.85). Labor characteristics are presented in Table 3. The median gestational age at birth was 40 (39-41) weeks in both groups (p = 0.546). The median duration of the first stage of labor was similar between the groups (239 min vs. 219 min, p = 0.101), whereas the median duration of the second stage of labor was shorter in the study group (110 vs. 99 min, p = 0.005). The rate of meconium-stained amniotic fluid was higher in the study group, 21.1% vs. 17.7% in the control group (p = 0.023). There were no differences between the groups regarding the rates of induction or augmentation of labor, use of epidural analgesia, or rates of chorioamnionitis, placental abruption, shoulder dystocia, perineal tear grades 3 and 4 or postpartum hemorrhage. A composite adverse maternal outcome (including CD, PPH, placental abruption, perineal tear grades 3 and 4, hemoglobin drop of more than 3 g, shoulder dystocia, chorioamnionitis and readmission to the hospital within 6 weeks) was more frequent in the study group vs. the control group (164 (21.9%) vs. 1266 (18.1%), p = 0.011).
Neonatal characteristics and outcomes are presented in Table 4. The mean birthweight of newborns was significantly higher in the study group (3640 ± 467 g vs. 3334 ± 470 g, p < 0.0001), and there was a higher rate of macrosomia in the study group (22.8% vs. 7.0%, p < 0.0001). Macrosomia was also significantly more common in women with moderate/severe polyhydramnios vs. those with a normal amount of amniotic fluid or with mild polyhydramnios (31.4% vs. 8%, p < 0.0001). There was a higher rate of neonatal respiratory complications in the study group, 2.0% vs. 0.8% (p < 0.0001). There were no differences between the groups in the rate of NICU admission > 72 h, 1.1% vs. 0.7% (p = 0.25), or low Apgar scores (defined as an Apgar score < 7 at 5 min), 0.9% vs. 0.6% (p = 0.202). A composite adverse neonatal outcome occurred in 12.5% of mild, 13.9% of moderate and 7.4% of severe polyhydramnios cases, compared to 9.3% of controls. There were no neonatal deaths in either group. Multivariate regression models accounting for the characteristics detailed in the Methods section revealed that polyhydramnios was independently associated with the composite adverse neonatal outcome, aOR 1.28 (CI 1.01-1.63, p = 0.043), AUC 0.58 (95%CI 0.56-0.6).

Discussion

In the present study, we evaluated the labor, maternal and neonatal outcomes of term idiopathic polyhydramnios. Our study found that term idiopathic polyhydramnios was associated with a higher rate of CD, macrosomia and neonatal respiratory complications. These findings remained significant after controlling for confounders by using multivariable logistic regression analysis, which revealed that idiopathic polyhydramnios was associated with an increased risk of CD and an increased risk of composite adverse neonatal outcomes. Previous reports of isolated polyhydramnios and labor outcomes are conflicting. The differences may be explained by the heterogeneous groups of patients and the various designs and inclusion criteria; for example, some studies included cases with diabetes and fetal anomalies, or evaluated term and preterm deliveries together. In our study, polyhydramnios was associated with macrosomia, similar to the findings of Dorleijn et al. [9] as well as Yefet E and Daniel-Spiegel E [13]. In the latter study, polyhydramnios with a normal detailed anatomy scan was associated with an increased risk of CD and a birth weight in the >90th percentile. This increase in CD was attributed to the higher rate of elective CD due to suspected macrosomia. Our study also demonstrated an increase in the CD rate, due to an increase in macrosomia and arrested labor. Similarly, Magann et al. [5] showed that CD for fetal distress was more common with polyhydramnios, as were low 5-min Apgar scores, increased neonatal birthweight and newborn intensive-care unit admissions. Morris et al.
[14] also showed in a meta-analysis that polyhydramnios is strongly associated with a birthweight in the >90th percentile. Interestingly, Asadi et al. [15] found an increase in low birth weight (<2500 g) as well as macrosomia (>4000 g) in their case group, perhaps due to a difference in the selection of the study population. In our study, the duration of the first stage of labor was similar between the groups, whereas the median duration of the second stage of labor was shorter in the study group. There were no differences between the groups regarding the rates of induction or augmentation of labor, use of epidural analgesia, or rates of chorioamnionitis, placental abruption, shoulder dystocia, perineal tear grades 3 and 4 or post-partum hemorrhage. Aviram et al. [16] showed that polyhydramnios was an independent risk factor for labor induction, CD, a prolonged first stage of delivery, abnormal or intermediate FHR tracings, placental abruption, shoulder dystocia and respiratory distress. Mild isolated polyhydramnios was independently associated with CD, a prolonged first stage of delivery, placental abruption, abnormal or intermediate FHR tracings and shoulder dystocia. Khan and Donnelly [17] found a higher rate of caesarean delivery, fetal distress and NICU admissions. Zeino et al. [18] did not observe differences in the rate of epidural analgesia or the rate of abnormal fetal heart tracings. Induced labor, amniotomy and non-vertex presentations were more frequent in their polyhydramnios group. In their study, the CD rate was higher in pregnancies with polyhydramnios and remained higher after exclusion of induced labor and non-vertex presentation. Another important finding in our present study is the increase in perinatal respiratory morbidity, defined as transient neonatal tachypnea or respiratory distress syndrome, without a significant difference in the other morbidities examined. However, as there was no difference in the rate of NICU admission for more than 72 h, it is possible that these were non-severe respiratory complications. We also demonstrated no increase in fetal or perinatal death. Chauhan et al. [19] showed that polyhydramnios was significantly associated with the delivery of a macrosomic fetus, but not with the delivery of a compromised neonate or CD. Thompson et al. [20] showed a better overall fetal outcome compared with previous studies and no perinatal deaths. Pasquini et al. [21] also did not find a significant difference in the Apgar score or the rate of neonatal hypoxia in mild idiopathic polyhydramnios. Conversely, Maymon et al. [22] showed that the presence of polyhydramnios was strongly associated with perinatal mortality and neonatal and maternal morbidity. Similarly, Asadi et al. [15] found that NICU admission, fetal distress, fetal death, lower 1-min and 5-min APGAR scores, preterm delivery and neonatal death were more frequent in their polyhydramnios cohort as compared to controls. Chen et al. [4] showed that polyhydramnios carried a higher incidence of adverse perinatal outcomes, such as low Apgar scores, fetal death, fetal distress in labor, NICU transfer and neonatal death, despite the exclusion of congenital anomalies from the study. Likewise, Polnaszek et al. [23] showed that polyhydramnios increased the risk of composite neonatal morbidity, including respiratory morbidity. Additionally, Pagan et al. [24] showed that idiopathic polyhydramnios had higher odds of neonatal death, intrauterine fetal demise, NICU admission and a 5-min APGAR score less than 7.
Importantly, our study demonstrates that the severity of polyhydramnios is independently associated with a higher risk of macrosomia and CD. Our results are similar to those of Harlev et al. [25], who showed a significant linear association between the AFI and adverse pregnancy outcomes, including hypertensive disorders, diabetes mellitus, preterm labor, macrosomia, placental abruption and low birth weight; differences in population selection may explain any discrepancies. A possible mechanism underlying this finding may be overdistention of the uterus caused by polyhydramnios, making it less responsive to oxytocin as the polyhydramnios increases in severity and leading to hypotonic disorders of labor progression and a need for CD. Conversely, an association between severity and outcome was not demonstrated in a study by Asadi et al. [15], who found no statistically significant differences in fetal and maternal adverse outcomes according to the severity of polyhydramnios. Our analysis of demographic data showed that polyhydramnios was more common in older women and multiparas, similar to the data of Biggio et al. [26], who found a relationship between idiopathic polyhydramnios and increasing maternal age and parity. Although premature rupture of membranes (PROM) has been related to polyhydramnios, in our study PROM was less common in the idiopathic polyhydramnios group than in the control group. A possible explanation for this finding is the high incidence of PROM, around 10% at term, in the general population of pregnant women [27]. Indeed, the rate of polyhydramnios in our study is much higher than previously reported, yet it should be noted that the true prevalence is unknown. Our study has strengths and limitations. The limitations include the following: this was a retrospective study, with the inherent limitations of retrospectively collected information. During the study period there were roughly 50,000 term singleton deliveries; however, only 20% of those had an ultrasound exam in our institution within 2 weeks of delivery. It is therefore possible that our study suffers from selection bias. As noted, there are several differences between the study groups; for instance, 23% of women with polyhydramnios were nulliparous, vs. 35% of the women in the control group. We do not have data regarding the BMI of parturients, as this information was not routinely collected during these years. We do not have information regarding infection screening as a cause for polyhydramnios, but investigating infectious causes has not been shown to be beneficial in the work-up of polyhydramnios [28,29]. Furthermore, as the association between polyhydramnios and IUFD had been previously studied, we did not examine this association in the present study. The strengths of our study include the large sample size and the fact that this is a historical cohort from before our departmental protocol was changed to offer induction of labor for women with idiopathic polyhydramnios at term. Even though women had an ultrasound exam within 2 weeks of delivery, the median time from ultrasound to delivery was not different between the groups. Furthermore, we included only women without diabetes and without fetal/neonatal anomalies.

Conclusions

We found that term idiopathic polyhydramnios is independently associated with macrosomia, CD and neonatal complications. Additionally, the severity of polyhydramnios was found to be associated with macrosomia and CD.
The information from this study will assist health care providers in counseling patients with term idiopathic polyhydramnios regarding the expected mode of delivery according to the severity of polyhydramnios, and will allow the NICU team to be prepared for possible respiratory difficulties in the neonates of this group.

Informed Consent Statement: Patient consent was waived by the Institutional Review Board due to the retrospective nature of the study.

Data Availability Statement: Data are available on request from the corresponding author due to privacy and ethical restrictions.
Higher neonatal growth rate and body condition score at 7 months are predictive factors of obesity in adult female Beagle dogs

Background: The risks during early growth of becoming overweight in adulthood are widely studied in humans. However, early-life predictive factors for canine adult overweight and obesity have not yet been studied. To identify factors that may help explain the development of overweight and obesity in adulthood in dogs, a 2-year longitudinal study was conducted in 24 female Beagle dogs of the same age and sexual status, raised under identical environmental conditions. By means of a hierarchical classification on principal components with the following quantitative values: fat-free mass (FFM), percentage fat mass and pelvic circumference at 2 years of age, three groups of dogs were established and nominally named: ideal weight (IW, n = 9), slightly overweight (OW1, n = 6) and overweight (OW2, n = 9). With the aim of identifying predictive factors for the development of obesity in adulthood, parental characteristics, growth pattern, energy balance and plasma factors were analysed by logistic regression analysis. Results: At 24 months, the group compositions were in line with the body condition scores (BCS, 1-9 scale) of the IW (5 or 6/9), OW1 (6/9) and OW2 (7 or 8/9) groups. Logistic regression analysis permitted the identification of the neonatal growth rate during the first 2 weeks of life (GR2W) and the BCS at 7 months as predictors of the development of obesity in adulthood. Seventy percent of dogs with either GR2W > 125% or BCS > 6/9 at 7 months belonged to the OW2 group. Results from energy intake and expenditure, corrected for FFM, showed that there was a greater positive energy imbalance between 7 and 10 months in the OW2 group compared to the IW group. Conclusion: This study expands the understanding of previously reported risk factors for being overweight or obese in dogs, establishing that (i) 15 of the 24 studied dogs became overweight and (ii) GR2W and BCS at 7 months of age could be used as predictive factors, as overweight adult dogs in the OW2 group had higher values than the other groups of dogs. Electronic supplementary material: The online version of this article (doi:10.1186/s12917-017-0994-7) contains supplementary material, which is available to authorized users.

Background

Around 50% of pet dogs have been reported to be overweight or obese [1][2][3], which makes these conditions an important health concern in small animal medicine. Overweight and obese dogs are defined as having excessive adipose tissue that results in a body weight (BW) >15% and >30% above their ideal BW, respectively [4]. The main reason for excess weight gain in healthy dogs is a positive imbalance between energy intake and energy expenditure [4]. Obesity in dogs is associated with decreased quality of life and lifespan [5,6], as well as with numerous chronic disorders such as osteoarthritis and cardiorespiratory diseases [4]. Some of those obesity-related outcomes can be reversed by restricted energy intake and increased activity through a weight-loss program [7]. However, only half of the dogs entering such a program reach their target body weight [8], and among them, only half succeed in maintaining their optimal body weight over the long term [9]. In the case of obesity in adulthood, both human and animal studies have reported a dysregulation of plasma biomarkers that are directly or indirectly associated with the regulation of energy homeostasis.
Among the plasma biomarkers related to energy intake, the concentrations of some, such as insulin, ghrelin, leptin and adiponectin, were reported to differ between obese and normal-weight dogs [10][11][12][13]. Obesity in dogs also results in low-grade systemic inflammation, which may contribute to the development of metabolic disorders [7]. Given the health issues related to obesity and the physiology of energy balance, it would be a better strategy to prevent the development of excessive fat stores than to manage established excess weight through weight-loss programs. Numerous investigations in various countries have identified risk factors associated with excess weight in overweight or obese but otherwise healthy dogs. The main reported risk factors are neutering, especially in females, high feeding frequency, a sedentary lifestyle and specific breeds or cross-breeds [2,[14][15][16]. Human studies have determined factors during the gestational and infancy periods that affect the degree of obesity in adulthood. It was demonstrated that an obese mother or high gestational weight gain may lead to an elevated body-mass index (BMI) in the offspring [17]. A high birth weight and weight gain during growth have also been identified as risk factors for overweight in adulthood for humans [17] and cats [18]. Although early risk factors have been widely studied in humans, little work has been conducted in small animal medicine to identify early growth patterns correlated with the risk of becoming overweight in adulthood. Moreover, and to the authors' knowledge, an association between the biomarkers mentioned above and the development of obesity in adulthood has not been studied in dogs. The aims of our study were (i) to show that excess weight gain without an obvious underlying cause can occur among dogs of the same age, sex, sexual status and breed, raised on the same diet in the same environment, and (ii) to identify early predictive factors, such as parental and neonatal characteristics, energy intake or expenditure, or plasma biomarkers, associated with overweight or obesity in adulthood.

Animals

The current investigation involved 24 female Beagle dogs, which were studied from birth to 24 months of age. We expected to have groups presenting a significant difference in FM% between two grades of BCS. Assuming that the difference in FM% between two grades of BCS is approximately 5% [19], the sample size was established considering 3 pairwise comparisons (corresponding to BCS 5, 6 and 7) of FM% means with a type I error of 0.05 and a power of 0.8. The maximum sample size per group obtained was 7.47; therefore 8 dogs were allocated to each of the three groups [20]. Furthermore, the sample size of 24 dogs was suitable both for the capacity of Oniris' facilities to guarantee animal welfare and to ensure that the sampling workload could be conducted in reliable conditions by one person in order to avoid manipulation bias. The dogs were the offspring of 10 litters (10 mothers and 7 fathers), were housed by litter in the same breeding centre (Isoquimen SL, Barcelona, Spain), weaned at 10 weeks, and neutered at the age of 8 months. All dogs received an annual veterinary check-up and were vaccinated against canine distemper, canine adenovirus type 2, canine parainfluenza virus, canine parvovirus and rabies, and received worming treatments, at 2.5, 4, 12, and 22 months of age.
A registered veterinarian was available to carry out additional veterinary treatments if required, and had the authority to withdraw dogs from the study if any adverse events occurred. Two dogs (one at 12 and one at 14 months of age) reached a BCS of 8/9 prior to the end of the study; they were then rationed in order to maintain a stable body weight until study completion.

Housing & diet

Throughout the study the dogs lived in the same environment and were fed in the same manner. All diets were supplied by Royal Canin (Royal Canin SAS, Aimargues, France). Prior to weaning (at 10 weeks of age), puppies were housed by litter with their mother in the breeding centre of Isoquimen SL, and had free access to the mother's milk and a dry diet. This dry diet, Medium Starter (protein = 30%DM, fat = 22%DM, 4010 kcal/kg or 16.8 MJ/kg), was available ad libitum for both mothers and puppies, making an accurate assessment of the puppies' nutrition source (dry diet vs maternal milk) impossible. After weaning, dogs were relocated to Oniris (Nantes, France) and were housed in pairs. Each pair was housed in an outdoor enclosure of 4 m2 that included a sheltered place to sleep. Each dog was fed individually ad libitum for 3.5 h per day whilst the partner was temporarily removed from the enclosure. From weaning to 10.5 months of age, the dogs were given a dry diet formulated to meet growth requirements, Pediatric Junior Dog (protein = 29%DM, fat = 20%DM, 3900 kcal/kg or 16.3 MJ/kg). From 10.5 months of age, in order to avoid too much excess weight gain after spaying, the dogs were fed a moderate-calorie dry maintenance diet (Neutered Adult; protein = 28%DM, fat = 11%DM, 3260 kcal/kg or 13.6 MJ/kg). Individual food intake (g/day) was recorded daily (except on weekends) from weaning to 24 months of age, on the same calibrated electronic weigh scale (Ohaus Europe, Greifensee, Switzerland; accurate to within 0.2 g). The energy intake was corrected for metabolic body weight (EI, kcal/BW^0.75) or for fat-free mass (EI_FFM, kcal/FFM), calculated as follows (Eq. 1):

EI = daily energy intake (kcal/day) / BW^0.75; EI_FFM = daily energy intake (kcal/day) / FFM (kg)

Dogs were walked on a leash for at least 15 min twice a week and had access to 1 h/day of free time in a closed garden of 400 m2 enriched with agility equipment. Dogs had free access to water throughout the study.

Biometric assessment

Early-life data were provided by the breeding centre, including the age and BW of the parents at mating, parity, weight gain during gestation, litter size and BW of each puppy at birth. Prior to weaning, puppies were weighed every 2 weeks. Post-weaning, BW was recorded weekly on the same electronic weigh scale (Mettler-Toledo SAS, Viroflay, France; accurate to within 50 g). The withers height was measured at 24 months of age. The morphometric estimation of body fat described by Burkholder and Toll [21] has been validated only in adult dogs, so the pelvic circumference (PC) and patella-to-calcaneus length (PCL) were measured every 2 months from 7 months of age, when the dogs had a morphology closer to the adult one. The body condition score (BCS) was evaluated monthly from 7 months of age by the same investigator, using a 9-point scale (1 for emaciated, 9 for morbidly obese) as recommended by the WSAVA [22]. The body composition was determined by isotopic dilution (deuterium oxide) at 6, 9, 12, 15 and 24 months of age.
Food was withheld for 20 h before, and water from 1 h before to 3 h after, a subcutaneous tracer injection (physiological saline 2H2O solution (99.9% 2H/H; Eurisotop, Saint-Aubin, France), 0.5 g/kg) to achieve body water equilibration. Venous blood samples were collected in sterile ethylenediaminetetraacetic acid (EDTA) tubes before and 3 h after injection of the isotope. Total body water was determined in two steps. Firstly, the deuterium enrichment of plasma water was determined by Fourier-transform infrared spectroscopy on a Vector 33-type spectroscope (Brücker SA, Wissembourg, France) as previously described [23]. The deuterium enrichment (2H/H) was used to calculate the dilution space of the isotope, which indicates the total body water content after correction for proton exchanges with non-aqueous molecules [24]. Finally, the fat-free mass (FFM) in dogs was calculated from the total body water using a canine-specific hydration rate of lean tissue [25] (Eq. 2):

FFM (kg) = total body water (kg) / hydration rate of lean tissue

The proportion of fat mass (FM%) was calculated as the difference between BW and FFM, divided by BW: FM% = (BW - FFM) / BW x 100. Given that the ideal BW in Beagles should be composed of approximately 80% FFM and 20% FM [21], the ideal weight would be FFM × 1.25. The percentage of excess weight relative to the estimated ideal weight at 24 months of age was calculated as follows (Eq. 3):

Excess weight (%) = (BW - 1.25 × FFM) / (1.25 × FFM) × 100

Blood sampling

Blood samples were taken from dogs after 20 h of food deprivation, every 2 months from 3 to 17 months of age, in order to measure plasma levels of glucose, appetite-related hormones (insulin, ghrelin, leptin and insulin-like growth factor 1 [IGF-1]), and stress markers (cortisol and prolactin). Additional blood samples were taken every 2 months until 9 months of age, and subsequently every 3 months until 15 months, in order to measure markers of inflammation (C-reactive protein [CRP], adiponectin, interleukin [IL-]6, IL-8, IL-10 and tumour necrosis factor alpha [TNFα]). At 7 and 13 months of age, in order to follow the post-prandial plasma kinetics of glucose, insulin, ghrelin and peptide YY 3-36 (PYY), blood was collected immediately before a meal, and then 15, 30, 60, 90, 120 and 150 min after the meal. To avoid the influence of meal size and eating duration [26], dogs were given 10 min of access to a meal of their standard diet providing 130 kcal/BW^0.75 (544 kJ/BW^0.75), according to the recommendations for kennel dogs [27]. In all cases, blood was collected in heparin-coated sterile vacutainers. Plasma was separated by centrifugation at 5000 g for 10 min, then aliquoted and stored at -20°C in sealed vials until analyses were completed.

Assays

Glucose was assayed immediately after collection by AlphaTRACK 2, a validated portable canine blood glucose meter (Abbott Animal Health, Abbott Park, IL, USA), using capillary or heparin-venous blood [28]. The plasma insulin to glucose concentration ratio (I:G) was calculated in the unfed and postprandial states as follows:

I:G = plasma insulin concentration / plasma glucose concentration
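Collecting the derived quantities defined in the preceding subsections (Eqs. 2-3 and the I:G ratio) into one sketch, in R. The hydration coefficient is deliberately left as an argument because its canine-specific value comes from reference [25] and is not restated in this text; all function names are ours:

# Hypothetical sketch of the body-composition chain described above.
# 'hydration' is the canine-specific lean-tissue hydration rate from [25];
# pass it explicitly, since its value is not restated in this excerpt.
body_composition <- function(bw, total_body_water, hydration) {
  ffm        <- total_body_water / hydration   # fat-free mass, kg (Eq. 2)
  fm_pct     <- (bw - ffm) / bw * 100          # fat mass, % of BW
  ideal_bw   <- 1.25 * ffm                     # ideal BW: 80% FFM, 20% FM
  excess_pct <- (bw - ideal_bw) / ideal_bw * 100  # % excess weight (Eq. 3)
  c(FFM = ffm, FM_pct = fm_pct, excess_pct = excess_pct)
}

# Insulin-to-glucose ratio, computed within one sampling state.
ig_ratio <- function(insulin, glucose) insulin / glucose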
Energy expenditure assessment

Energy expenditure was determined by indirect calorimetry at 4, 7, 10 and 16 months of age, as validated in dogs by Pouteau et al. [33], with the following minor modifications. Food was withheld for 20 h, after which dogs were placed in a metabolic chamber (60 × 66.5 × 65 cm) for 4 h. The chamber was connected to a breath gas-exchange monitor (Quark RMR, Cosmed, Rome, Italy), which was calibrated at the start and then hourly, using a standard gas mixture. The system was an open circuit ventilated by atmospheric air, pumped through the metabolism chamber at a flow rate of approximately 8 L/min, adjusted for each dog at each age. The rates of CO2 production and O2 consumption were measured every 5 s. The energy expenditure (kcal/d) was calculated using the abbreviated Weir formula [34]:

EE (kcal/day) = (3.941 × VO2 + 1.106 × VCO2) × 1440, with VO2 and VCO2 in L/min

After an approximately 40-min equilibration period, the energy expenditure was averaged over rolling 20-min periods. The resting energy expenditure (REE) was assumed to correspond to the lowest rolling mean of the energy expenditure during the 4 h of measurement, when the dog was calm but not asleep. The REE was corrected (i) for metabolic BW (BW^0.75) at 4 and 7 months, expressed as REE (kcal/BW^0.75), and (ii) for FFM at 7, 10 and 16 months, expressed as REE_FFM (kcal/FFM), both determinations being performed within a window of 30 days. The activity level was not measured.

Data analysis

All statistical analyses were performed using R software (R Foundation for Statistical Computing, Vienna, Austria) [35]. Graphs were prepared using GraphPad Prism software (GraphPad Software Inc., San Diego, CA, USA). In order to separate the dogs into three distinct groups, a principal component analysis (PCA) was performed on the quantitative and active variables FM%, FFM and PC in dogs aged 24 months. This was followed by a hierarchical clustering classification (HCPC), realized using Ward's algorithm with Euclidean distance on the first two principal components. The three identified groups were named IW, OW1 and OW2. The BCS is a qualitative variable, so it was included as a supplementary qualitative variable in the PCA. In order to assess the impact of parental and gestational factors on FM%, FFM and PC, parental characteristics were included as supplementary variables in the PCA. In order to determine the age at which the groups became significantly different, additional PCAs were conducted on FM%, FFM and PC at the ages of 6, 9, 12 and 15 months. Confidence ellipses (95% confidence level) were constructed for the three groups in each PCA. Means and standard deviations (SD) were computed for each variable at 24 months of age in the three groups. The independence between the groups and the parents was assessed by Fisher's exact test. In order to characterize growth throughout the study period, individual growth curves, defined as BW over time, were plotted. Growth curves usually show a sigmoid profile and were fitted by a Gompertz function [36]:

BW_t = BW_max × e^(−α × e^(−k × t))

where t is age in weeks, BW_t is body weight at time t in kg, BW_max is the maximum body weight, also named the mature body weight, α is an expression of the ratio between the mature and birth body weights, k is the maturation rate, which corresponds to the velocity with which the adult body weight is reached, and e is Euler's number. The growth period can be divided into two periods by the point of inflection (PI): a first period of increasing growth rate and a second period of decreasing growth rate. The maximum weight velocity, calculated as BW_max × k / e, occurs at the point of inflection. In order to assess the growth parameters (BW_max, α and k) of each group, the growth data collected from 0 to 18 months of age were fitted by the Gompertz function using nonlinear mixed-effects model tools (saemix package in R software) with the dog as a random effect.
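As an illustration of the growth-curve fitting step, the following self-contained R sketch fits the Gompertz function to one simulated dog with nls(); the study itself used nonlinear mixed-effects tools (saemix) with dog as a random effect, which this sketch does not reproduce, and the simulated parameter values are illustrative only:

# Hypothetical sketch: fitting the Gompertz growth curve for one simulated dog.
set.seed(1)
t  <- seq(0, 78, by = 2)                       # age in weeks (~0 to 18 months)
bw <- 12 * exp(-3.3 * exp(-0.09 * t)) +        # simulated Gompertz growth
      rnorm(length(t), sd = 0.2)               # measurement noise

fit <- nls(bw ~ BWmax * exp(-alpha * exp(-k * t)),
           start = list(BWmax = 10, alpha = 3, k = 0.1))
p <- coef(fit)

# Derived quantities as defined in the text:
t_pi  <- log(p["alpha"]) / p["k"]      # age at the point of inflection
v_max <- p["BWmax"] * p["k"] / exp(1)  # maximum weight velocity, BW_max*k/e
c(p, t_PI = unname(t_pi), v_max = unname(v_max))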
To compare the evolution of BW of the groups during specific life-stage periods, we split the study duration into five periods. Linear mixed-effects models were performed on age, BW and group over the following periods: before weaning (0 to 2.5 months of age), pseudolinear growth (2.5 to 6 months of age), before spaying (6 to 8 months of age) and after spaying (8 to 10.5 months of age). Given that the normal weight of 2-week-old neonatal puppies is twice that at birth [37], the growth rate over the first 2 weeks of life (GR2W, %) was calculated as the weight gain relative to birth weight, GR2W = (BW at 2 weeks − BW at birth) / BW at birth × 100, so that a normal doubling of weight corresponds to 100%. We also calculated the relative weight gain after sterilisation (%) to assess the susceptibility to gain weight after the surgery. Post-prandial kinetic data (0-150 min) were transformed into area under the curve (AUC) values based on the difference from baseline. In order to interpret energy intake and expenditure, the resting energy balance was calculated by subtracting REE_FFM from EI_FFM. To take into account the repeated measurements on individual dogs over time, linear mixed-effects models (nlme package in R software) were performed with dog as a random term. These models enabled us to assess i) the interactions between group and age on BW, FFM, FM%, PC, energy intake and expenditure, and hormonal concentrations, ii) the correlation between parental factors and biometric data at 24 months of age, and iii) the correlation between the growth rate in the first 2 weeks of life (GR2W) and either group or FM% at 24 months of age. Independence and normal distribution of residuals and random effects were checked by graphical tools, as described in the theory of mixed-effects models [38]. Logistic regression analysis was performed on the three groups by pairs (IW-OW1, OW1-OW2, IW-OW2) to attempt to discriminate the groups by i) parental and gestational variables, ii) GR2W, and iii) BCS, FM% or FFM prior to 2 years of age. For each identified discriminant factor, a discriminant value was deduced from the logistic regression results and used as a cut-off to differentiate the groups (a code sketch of one such derivation is given below). The goodness of fit was explored through an analysis-of-deviance table. In each model, multiple comparisons were taken into account and adjusted p-values were calculated by the single-step method proposed in the multcomp package of R software. All p-values were compared to α = 0.05 to establish significant differences.

Results

All dogs remained healthy throughout the duration of the study.

Constitution of three groups based on biometric data at 24 months of age

In order to categorize the dogs according to their degree of fatness, a PCA was performed on FM%, FFM (kg) and PC (cm) at 24 months of age. The first two axes explained 95.3% of the total variation (Fig. 1a). A hierarchical clustering on principal components revealed three well-separated groups (Fig. 1b), which could be described as dogs with ideal weight (IW, n = 9), slightly overweight (OW1, n = 6) and overweight (OW2, n = 9). The distribution of BCS in the groups at 24 months (Fig. 2) was in line with the groups' characteristics (Table 1). The OW2 group had significantly higher values of BW, FM% and PC (all p < 0.001) and height at the withers (p = 0.037) than the IW group, although the FFM of the two groups was similar (p = 0.25). The OW2 group also had significantly higher values of BW, height at the withers, FFM, FM% and PC compared to the OW1 group (p < 0.001, p = 0.003, p < 0.001, p = 0.02, p < 0.001, respectively). The OW1 and IW groups had similar BW and height at the withers at 24 months of age (p = 0.208, p = 0.201, respectively), while there was a significant difference in FFM and FM% between the groups (p = 0.010, p < 0.001, respectively). The mean percentage of excess weight as calculated by Eq. 3 was significantly higher in the OW2 than in the IW (p < 0.001) and OW1 groups (p = 0.002), and higher in the OW1 than in the IW group (p = 0.001).
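One plausible reading of the "discriminant value deduced from logistic regression" described in the Data analysis section is the covariate value at which the fitted pairwise model predicts a 50% probability of group membership. The R sketch below, on simulated data, shows that derivation; it is our interpretation for illustration, not the authors' documented procedure:

# Hypothetical sketch: deriving a cut-off from a two-group logistic regression.
# 'gr2w' is the growth rate over the first two weeks (%); 'ow2' is 1 for the
# overweight group, 0 otherwise. Data are simulated for illustration only.
set.seed(2)
gr2w <- c(rnorm(9, 110, 15), rnorm(9, 135, 15))
ow2  <- rep(c(0, 1), each = 9)

fit <- glm(ow2 ~ gr2w, family = binomial)

# The predicted probability is 0.5 where the linear predictor is zero,
# i.e. at gr2w = -intercept / slope.
cutoff <- -coef(fit)[1] / coef(fit)[2]
unname(cutoff)   # a value near the reported discriminant threshold of ~125%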
Parental and gestational factors

Parental characteristics, such as the BW and age of the parents at mating, parity, gestational weight gain and litter size, were added as supplementary variables to the PCA assessed in the 24-month-old dogs (Fig. 1). A linear regression model confirmed that the BW of the mothers was significantly and positively correlated with the FFM of their offspring at 24 months (p < 0.001), and the BW of the fathers was significantly and positively correlated with the FM% of their offspring at 24 months (p = 0.002). Logistic regression analysis did not identify any parental characteristics differentiating the groups. Table 2 characterizes the fitted growth curves for each group. The OW2 group was found to be significantly different from the IW group in many respects, with significantly increased values of BW_max, t_PI and BW_PI (all p < 0.05). Both overweight groups (OW1 and OW2) exhibited a lower value for the constant k than the IW group (both p < 0.05). The OW1 group also appeared quite different from the other groups, with significantly lower values for BW_max, BW_PI and maximum weight velocity (all p < 0.05).

Life-stage periods of weight gain

Birth to weaning. At birth, BW_b did not significantly differ between groups, with the IW, OW1 and OW2 groups having an average BW_b (SD) of 0.44 (0.07) kg, 0.47 (0.14) kg and 0.39 (0.06) kg, respectively. From 6 to 24 months of age, the BW of the OW2 group was significantly greater than that of the other groups (p < 0.05). Between 12 and 24 months, the BW of the IW group did not significantly differ from that of the OW1 group (Fig. 4). We also compared the evolution of BW and weight gain during three growth periods (2.5 to 6 months, 6 to 8 months and 8 to 10.5 months of age). At weaning (2.5 months), the BW of the OW2 and IW groups was significantly greater than that of the OW1 group (p = 0.003 and p = 0.041, respectively; Fig. 4). Body weight increased significantly more in the OW2 group than in the IW group for all three periods concerned (all p < 0.001). The rate of weight gain (SD) over the 8 to 10.5 month period was higher in the OW2 group (14.93% (6.66)) compared to the IW group (8.53% (3.70); p = 0.016); the OW1 group (12.22% (3.18)) did not significantly differ from the IW (p = 0.106) or OW2 (p = 0.517) groups.

Fig. 1 Outputs of the PCA followed by the hierarchical clustering on principal components (HCPC). The analyses were assessed on 24 female Beagle dogs aged 24 months. Groups were described as ideal weight (IW, n = 9), slightly overweight (OW1, n = 6) and overweight (OW2, n = 9). a: Factor map of principal and supplementary variables. The principal variables (solid lines) are fat-free mass (FFM, kg), fat mass proportion (FM%) and pelvic circumference (PC, cm); supplementary variables (dotted lines) are the fathers' body weight (BWfa, kg), mothers' body weight at mating (BWmo, kg), mothers' age at mating (AGEmo), previous litters (PL), gestational weight gain (GWG) and litter size (LS). b: Confidence ellipses (95% confidence level) around the groups identified by HCPC.

Determination of the minimum age at which the groups are distinct

According to the PCAs and HCPCs processed at 6, 9, 12 and 15 months, significant differences between groups were established in accordance with the confidence ellipses (95% confidence level).
Determination of the minimum age at which groups are distinct
According to the PCA and HCPC processed at 6, 9, 12 and 15 months, significant differences between groups were established in accordance with the confidence ellipses (95% confidence level). In dogs aged 6 months, the OW2 group was significantly distinct from the IW and OW1 groups, while there was some overlap between the IW and OW1 groups. From 9 months of age, all groups were significantly distinct. The results of the PCA and HCPC for determination of the earliest age of differentiation between groups were supported by an analysis of the biometric data (BCS, FM%, FFM, and PC) over the 6 to 24 month period. The distribution of BCS values remained relatively stable over time in the IW group, but in the OW1 and OW2 groups BCS values increased (Fig. 2). At 7 months of age, the BCS discriminated OW2 from both the IW (p = 0.012) and OW1 (p = 0.016) groups. Among the 11 dogs with a BCS of 6 at 7 months, 8 belonged to the OW2 group, 2 to the IW and 1 to the OW1 group. From 9 months onwards, the OW2 group had significantly greater FM% than the IW group (p = 0.005 at 9 months); from 12 months onwards, the FM% of the OW2 group was higher than that of the OW1 group (p = 0.029 at 12 months). While the FFM of the OW1 group was always significantly lower than that of the other groups (both p < 0.014), no significant difference was observed between the FFM values of the IW and OW2 groups at any age (p = 0.18). Finally, the PC of the OW2 group was significantly greater from 9 months of age onwards compared to the other groups (both p < 0.001 at 9 months).
Energy intake and expenditure
Energy intake and expenditure were analysed over three age periods: 4 to 7 months (adjusted for metabolic BW), 7 to 10 months and 10 to 16 months (each adjusted for FFM, Fig. 5). One dog from the IW group was not included in the analysis because its measurements were not reliable: the dog had difficulty remaining calm throughout the experimental procedure. At 4 months, all the groups had a similar EI and REE. Over 4 to 7 months, EI decreased significantly (p < 0.0001) and similarly in all groups, whereas for REE there was a group-age interaction (p = 0.007), with an increase observed in the IW group and a decrease observed in the OW2 group. From 7 to 10 months of age, both EI_FFM and REE_FFM decreased significantly (p < 0.001) and similarly in all groups. At 10 months, EI_FFM was significantly higher in the OW2 group than in the IW group (p = 0.013), while REE_FFM did not significantly differ (Fig. 5a). On considering the resting energy balance [EI_FFM − REE_FFM] over the 7 to 10 months period, values for the OW2 group were significantly higher than those of the IW (p = 0.001) and OW1 groups (p = 0.035) irrespective of time (Fig. 5c). However, no significant changes were observed for any of the groups over time.
Fig. 2 Number of dogs at each BCS at the considered ages. Groups were described as ideal weight (IW, n = 9), slightly overweight (OW1, n = 6) and overweight (OW2, n = 9).
[Table 1 footnote: Values are expressed as mean (standard deviation). Letters identify significant differences (p < 0.05) between groups within the same parameter. The dogs were separated into 3 groups using a hierarchical classification on principal components on FM%, FFM and PC. Groups were described as ideal weight (IW, n = 9), slightly overweight (OW1, n = 6) and overweight (OW2, n = 9).]
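The resting energy balance [EI_FFM − REE_FFM] used above reduces to simple per-visit arithmetic once intake and expenditure are normalized to fat-free mass; a minimal sketch with invented column names and values:

```python
# Hypothetical sketch of the resting energy balance computation described
# above: energy intake minus resting energy expenditure, both adjusted for
# fat-free mass. All names, units and values are illustrative assumptions.
import pandas as pd

records = pd.DataFrame({
    "dog":        ["A", "A", "B", "B"],
    "age_months": [7, 10, 7, 10],
    "ei_kcal":    [950, 820, 1010, 905],   # daily energy intake (kcal)
    "ree_kcal":   [600, 455, 585, 470],    # resting energy expenditure (kcal)
    "ffm_kg":     [7.1, 8.0, 6.6, 7.4],    # fat-free mass (kg)
})

records["ei_ffm"] = records["ei_kcal"] / records["ffm_kg"]
records["ree_ffm"] = records["ree_kcal"] / records["ffm_kg"]
# Positive values mean intake exceeds resting expenditure per kg of FFM.
records["resting_energy_balance"] = records["ei_ffm"] - records["ree_ffm"]
print(records[["dog", "age_months", "resting_energy_balance"]])
```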
Hormonal variations
Prior to 11 months of age, no differences were found in basal leptin concentration between the groups. From 11 months, however, the leptin level was significantly higher in the OW2 group than in the IW group irrespective of age (p = 0.010). The basal plasma IGF1 concentration decreased significantly in all groups until it reached a plateau at 13 months of age (p < 0.001). Before 11 months of age, basal plasma IGF1 was lower in the OW1 group than in the IW and OW2 groups (p = 0.045 and p = 0.061, respectively). From 11 months of age, the level of IGF1 did not significantly differ among groups. The groups did not significantly differ in their I:G ratio, adiponectin, ghrelin, cortisol, prolactin, CRP, IL-8 and IL-10 measured in the fasting state (Additional file 1). Statistical analyses for IL-6 and TNFα were not performed, as the majority of samples were below the level of detection. Analysis of post-prandial kinetic data showed that before 60 min, the baseline-adjusted acylated ghrelin values were significantly lower in the IW group than in the OW1 group (p = 0.027). At 60 min after the test meal, the variation of acylated ghrelin compared to baseline was lower in the IW group than in the OW1 (p = 0.002) and OW2 groups (p = 0.011) (Fig. 6). After 60 min, the baseline-adjusted acylated ghrelin values did not differ between groups. No significant differences were detected between groups in the AUC values of ghrelin, PYY, plasma glucose, insulin or the postprandial I:G ratio (Additional file 2), or in their variations from baseline (except for ghrelin).
[Table 2 footnote: Values are expressed as mean (standard deviation). Letters represent a significant difference (p < 0.05). Groups were described as ideal weight (IW, n = 9), slightly overweight (OW1, n = 6) and overweight (OW2, n = 9). Details regarding the fitted curves can be found in the Materials and Methods.]
Fig. 3 Scatter plot of GR_2W (%) of the three groups during the first two weeks of life. Groups were described as ideal weight (IW, n = 9), slightly overweight (OW1, n = 6) and overweight (OW2, n = 9). Data are presented by group; the line represents the median of the group; error bars represent the interquartile range. * significant difference (p < 0.05) between groups as identified by a linear regression model.
Fig. 4 Body weight as a function of time in 24 female Beagle dogs. Groups were described as ideal weight (IW, n = 9), slightly overweight (OW1, n = 6) and overweight (OW2, n = 9). Values represent group means, while error bars represent the standard error of the mean. Upper-case letters identify significant differences (p < 0.05) within a group; lower-case letters identify differences (p < 0.05) between groups.
Discussion
This is, to our knowledge, the first longitudinal study in growing dogs to investigate predictive factors that could explain becoming overweight and obese in adulthood. The BCS values of the dogs aged 24 months ranged from 5 to 8, thereby confirming that some dogs can be more susceptible to gaining body fat than others of the same breed, fed the same diet and housed under the same environmental conditions. FM%, FFM and PC at 24 months of age retrospectively identified three well-distinguished groups of dogs that differed in their median BCS. When categorized in this manner, a posteriori, 62.3% of dogs were classified as having become overweight (Table 1). This is consistent with the reported prevalence of overweight dogs worldwide [2,3,15,39]. A number of parental and neonatal parameters were examined to determine early predictive markers of overweight status in adulthood. Fat-free mass and fat mass of the offspring at 24 months of age were positively correlated with the BW of the mother and the father, respectively, but parental BWs were insufficient to clearly discriminate the IW, OW1 and OW2 groups.
Fig. 5 Evolution of: a: energy intake (EI); b: resting energy expenditure (REE); and c: EI less REE. All values were adjusted for fat-free mass (FFM) in 24 female Beagle dogs. Groups were described as ideal weight (IW, n = 9), slightly overweight (OW1, n = 6) and overweight (OW2, n = 9). Values represent group means, while error bars represent the standard error of the mean. Letters identify significant differences (p < 0.05) between groups at the same age.
Fig. 6 Baseline-adjusted variations of acylated blood ghrelin 60 min after a test meal. Values were obtained from 24 female Beagle dogs at the age of 7 months. Groups were described as ideal weight (IW, n = 9), slightly overweight (OW1, n = 6) and overweight (OW2, n = 9). Data are presented by group; the line represents the median of the group; error bars represent the interquartile range. Lower-case letters identify significant differences (p < 0.05) between groups.
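The baseline-adjusted post-prandial analysis above (0-150 min kinetics summarized as AUC relative to baseline) amounts to a trapezoidal integration of the difference from the t = 0 value; a minimal sketch with invented sampling times and hormone values:

```python
# Hypothetical sketch of the baseline-adjusted AUC computation for the
# post-prandial kinetics (0-150 min). The sampling times and ghrelin values
# below are invented, not data from the study.
import numpy as np

time_min = np.array([0, 15, 30, 60, 90, 120, 150])
ghrelin = np.array([210, 190, 160, 140, 170, 195, 205])  # pg/mL, invented

baseline_adjusted = ghrelin - ghrelin[0]      # difference from t=0 baseline
auc = np.trapz(baseline_adjusted, time_min)   # trapezoidal AUC
print(f"Baseline-adjusted AUC over 0-150 min: {auc:.1f} pg/mL*min")
```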
This pilot study was conducted on 24 dogs born from 7 fathers and 10 mothers, which is a small sample size. Fisher's exact test showed that, despite this lack of power, the parental contribution should not be removed from future investigations. In contrast to human studies, BW at birth did not correlate with overweight status or FFM at adulthood [40,41], or with parental BW [42]. The earliest predictive marker of adult overweight status in our study was GR_2W. Seven out of the ten dogs with a GR_2W greater than 125% (cut-off point determined by the logistic regression model, see Data analysis section) belonged to the OW2 group. Despite the lack of strictly comparable data (% weight gain over the first 2 weeks of life) in humans, our finding is similar to those of human studies, which have shown that high weight gain in early life stages is associated with the development of adult obesity [43][44][45]. Neonatal growth will be impacted by the availability and composition of the dam's milk, which depend on litter size. We were unable to see a clear influence of litter size on GR_2W, and the explanation for GR_2W as a predictive factor for overweight status remains unclear. Future investigations with larger populations are needed to establish the impact of the dam's milk and/or litter size on the overweight status of offspring. Comparisons of weight gain between groups were performed on Gompertz-fitted curves over 18 months. The OW2 group presented the highest BW_max and the lowest k (maturation rate) compared to the IW group (Table 2). Given that α was the same in all dogs as a breed-size characteristic, this indicates a delayed maximum weight velocity (Table 2, t_PI IW < t_PI OW2); human studies have reported contrasting findings on the association of weight velocity and age at PI with the risk of becoming overweight [46]. Maximum weight velocity was similar in the IW and OW2 groups, but lower in the OW1 group, which could correspond with the higher FFM in the IW and OW2 groups. An increased frequency of measurement of morphometric parameters could help to clarify the underlying correlation of weight gain patterns with overweight status in adulthood, as has been suggested in humans [46,47]. Analysis of BW change by growth period confirmed the approach by the Gompertz model: the groups displayed a similar pseudo-linear growth followed by a deceleration of growth, which was both delayed and reduced in the OW2 group compared to the IW group.
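The GR_2W cut-off of about 125% discussed above was derived from a logistic regression; the sketch below shows how such a discriminant value falls out of the fitted coefficients (the point where the linear predictor gives probability 0.5), using invented data:

```python
# Hypothetical sketch of deriving a discriminant cut-off from a logistic
# regression, as done for GR_2W above. All values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# GR_2W (%) per dog and group membership (1 = OW2, 0 = IW).
gr2w = np.array([[105], [112], [118], [121], [127], [131], [138], [144]])
is_ow2 = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(gr2w, is_ow2)
# The cut-off is the GR_2W value where the predicted probability is 0.5,
# i.e. where the linear predictor b0 + b1*x crosses zero.
cutoff = -model.intercept_[0] / model.coef_[0][0]
print(f"Discriminant GR_2W cut-off: {cutoff:.1f} %")
```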
After spaying but before the change of diet, BW increased in all groups, but the rate of weight gain was higher in the OW2 group (~15%) compared to the IW group (~8%). This suggests that spaying could aggravate excessive weight gain in dogs predisposed to obesity. Excessive fat-mass deposition was reflected in a BCS > 5/9 in 8/9 dogs of the OW2 group at 7 months of age. This is consistent with previous studies, either in cats, where lean and overweight phenotypes could be identified as early as 8 months according to the BCS [48], or in humans, where it was demonstrated that overweight status in childhood or adolescence is generally maintained or increased in adulthood [49]. The 9-point-scale BCS, although currently only used in adult dogs, may help veterinarians quickly and easily identify adolescent dogs (from 6 months of age) that may be at risk of becoming overweight in adulthood. Without an external intervention to regulate energy intake, the dogs in the OW2 group gained excess fat mass from 6 months of age. The establishment of growth charts that take into account size and body weight growth, similar to the WHO's growth charts, would be an interesting tool to monitor growth in puppies. In healthy dogs, excess weight gain is due to an energy imbalance. In this pilot study, the energy balance was approximated by the analysis of energy intake and REE. The fact that both EI and REE corrected for metabolic BW did not differ between the groups at 4 months could suggest either that the OW2 group had no energy imbalance at that stage of life or that the variation within groups was too large to detect a difference. During the 7 to 10 month period, in which spaying occurred, we found that EI_FFM and REE_FFM decreased by approximately 18% and 25%, respectively (Fig. 5a, b) in the three groups. These decreases could be related to spaying and/or the end of growth. The observed decrease in REE is consistent with one study in dogs and another in cats, which suggested that energy requirements decrease by approximately 30% after gonadectomy in both species [50,51], resulting in a general recommendation to reduce calorie intake after neutering [37]. Over this 7 to 10 month period, [EI_FFM − REE_FFM] (Fig. 5c) was higher in the OW2 group compared to the IW group, which would explain the increase in fat mass in the OW2 group. Although the dogs were subject to the same environmental conditions, an individual measurement of physical activity could have helped in understanding the link between the excess weight gain and the differences between EI_FFM and REE_FFM. Although time-restricted feeding could have impacted overall weight gain [52], all dogs were kept under the same conditions, which should have limited the impact of this factor on our study. Thus, it seems that the OW2 group had poorer control of energy balance regulation. This could be linked to the basal level of leptin in this group compared to the others. At 7 months, the leptin levels did not differ between the IW and OW2 groups, which could be considered normal as the FM% of these groups was not significantly different at 6 months. After gonadectomy, the leptin level was significantly higher in the OW2 than in the IW group, which is in line with previous studies in dogs that correlated the level of basal leptin with the BCS and FM% of dogs regardless of age, breed and sex [53].
This study failed to show plasma leptin as an early marker of excess weight gain in later life, consistent with the view that leptin's most important role is the preservation of existing body fat stores [54]. We also found that acylated ghrelin levels decreased more rapidly and were significantly lower 60 min after a test meal in the IW group compared to the OW1 and OW2 groups when measured at 7 months of age. A similar observation has been made in obese cats [55]. Our findings suggest that the delayed suppression of acylated ghrelin in the OW2 and OW1 groups might impact their short-term regulation of energy balance [56] by facilitating overfeeding. More sensitive methods for the determination of plasma biomarkers might also help to detect earlier differences between groups. Our results warrant further investigations on energy expenditure during growth and following neutering in a larger group of dogs in order to limit the impact of individual variability [57]. Further investigations might also include differences in gastrointestinal microbiota [58,59] and variations in gene expression in adipose tissue [60].
Conclusion
Albeit small, the sample size used in the current study was sufficient to highlight differences in the development of overweight and obesity between dogs matched for age, sex and breed and raised under the same conditions. Among the predictive factors of adult obesity that were identified, the neonatal growth rate and adolescent BCS could be exploited in a clinical setting. The neonatal growth rate might help breeders identify dogs whose diet should be restricted from an early age. Adolescent BCS values might help veterinarians deliver specific nutritional advice for dogs at higher risk of becoming overweight before neutering. Any practical recommendations, however, are contingent upon validation of our findings in larger populations and in different breeds and sexes.
2017-10-26T03:51:48.742Z
2017-04-13T00:00:00.000
{ "year": 2017, "sha1": "1e40814ce0af91aaa231c1ff5254fcfbc322d455", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12917-017-0994-7", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1e40814ce0af91aaa231c1ff5254fcfbc322d455", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
54721945
pes2o/s2orc
v3-fos-license
Expression of Human Chloride Channels ClC1 or ClC2 Reverts the Petite Phenotype of a Saccharomyces cerevisiae GEF1 Mutant
The mechanism of activation of the yeast ClC chloride channel/transporter GEF1 is unknown, and in this study we tested the ability of human ClC1 and ClC2, two channels with different activation kinetics, to revert the petite phenotype of a strain whose GEF1 gene was deleted. We found that when the human channels are expressed from a low-copy plasmid, reversion of the phenotype does not occur; in contrast, when the channels are overexpressed by means of a strong transcriptional promoter in a multiple-copy plasmid, the cells reach normal size and show a normal membrane surface and oxygen consumption. To determine the size variations of individual cells, we employed flow cytometry as a quantitative tool to evaluate the petite phenotype. These results suggest that the human ClC channels, when abundantly present in the cells, can support the metabolism disrupted in the knock-out strain. We also observed that the fluorescence emitted by GFP-tagged channels was found mostly towards the periphery of the wt yeast, whereas in the GEF1 knock-out it was detected in intracellular clusters. GFP-tagged channels expressed in X. laevis oocytes produced robust currents and did not show any evident difference with respect to the normal ClCs, whereas Gef1p did not show voltage-dependent activation.
Introduction
Chloride channels/transporters (ClCs) are members of a large family present in a wide variety of organisms from bacteria to higher eukaryotes. ClCs carry out multiple physiological roles, from plasma membrane and cell volume modulation to the control of vesicular pH (Fahlke, 2001; Jentsch, Stein, Winreich & Zdebik, 2002; Sardini et al., 2003; Soleimani & Xu, 2006; Jentsch, 2008). A clear example of this functional diversification is illustrated by comparing the properties of mammalian ClC1 and ClC2. They are both located in the plasma membrane; however, whereas ClC1 is activated by plasma membrane depolarization and thus is responsible for the repolarization current in muscle fibers, ClC2 is activated by hyperpolarization, as well as by other mechanisms such as changes in pH and cell volume (Conte, De Luca, Mamrini, & Vrbovà, 1989; Steinmeyer, Ortland, & Jentsch, 1991; Klocke, Steinmeyer, Jentsch, & Jockusch, 1994; Jordt & Jentsch, 1997).
The mechanism of activation of the Saccharomyces cerevisiae Gef1p, the sole ClC found in this species of yeast, is still not clearly understood. Gef1p plays a critical role in yeast iron metabolism and is found mainly in the trans-Golgi (Greene, Brown, DiDomenico, Kaplan & Eide, 1993; Schwappach, Stobrawa, Hechenberger, Steinmeyer & Jentsch, 1998). Mutations of the GEF1 gene lead to an iron requirement for growth on non-fermentable carbon sources due to a failure to load copper onto the iron uptake system; thus, knocking down the expression of GEF1 leads to petite (pet) colonies when grown under these conditions (Gaxiola et al., 1998). Gef1p forms a Cl− transporter/channel in the plasma membrane of the yeast that does not show voltage-dependent activation when expressed in heterologous systems (López-Rodríguez et al., 2007). Interestingly, several ClC genes from plants, fungi, and vertebrates functionally complement the pet phenotype of yeast whose GEF1 gene has been deleted, whereas others, such as the mammalian ClC7 gene, which codes for a protein of the lysosomal membrane, do not revert the mutation (Hechenberger et al., 1996; Gaxiola, Yuan, Klausner & Fink, 1998; Miyazaki et al., 1999; Kida, Uchida, Miyazaki, Sasaki, & Mauro, 2001; Marmagne et al., 2007).
To determine whether human ClC1 and ClC2 complement the pet phenotype of Gef1p− yeast, we expressed these two genes in a GEF1 knock-out strain. This paper describes the results of complementation assays and some details of the yeast phenotype revealed by scanning electron microscopy (SEM). To quantify the reversion of the pet phenotype, the colony size assay was supported by an analysis of cell volume by flow cytometry, which allowed us to measure the size and estimate the cell surface complexity of up to 5,000 individual cells per second. The results suggest that overexpression of ClC1 or ClC2 rescues the pet phenotype of a Gef1p− strain, whereas expression of the same channels from a single-copy plasmid and under a constitutive promoter does not rescue the mutant phenotype.
Yeast Complementation Assays
Strains RGY30 (wt) and RGY192 (Gef1p−) were transformed with plasmids derived from pYES (pYES-hClC1 or pYES-hClC2) and plated on YPD; after selection in restrictive media, positive colonies were transferred to SC-U supplemented with 2% galactose to induce the GAL promoter. The size of the transformed yeast was visualized and measured under the light microscope. Cell diameters were measured from ten different ocular fields (100X), and statistical analysis was performed with one-way ANOVA and Tukey post hoc tests; in order to have a more accurate measure of cell diameter and membrane complexity, flow cytometry was used, as indicated below.
Plasmids derived from pUG35 (pUG-hClC1 and pUG-hClC2) were also introduced into RGY30 and RGY192; galactose was added as above to rule out any ability of this carbohydrate itself to revert the phenotype, even though the MET25 promoter allowed constitutive expression of the transgenes (Mumberg, Müller & Funk, 1994).
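The diameter comparison described above (one-way ANOVA followed by Tukey post hoc tests) can be sketched as follows; the group means echo values reported later in the paper, but the simulated samples themselves are purely illustrative:

```python
# Hypothetical sketch of the cell-diameter statistics described above:
# one-way ANOVA followed by Tukey's post hoc test on three groups.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
wt   = rng.normal(6.88, 0.3, 50)   # wild-type diameters (µm), simulated
gef1 = rng.normal(5.91, 0.3, 50)   # Gef1p- mutant, simulated
clc2 = rng.normal(6.82, 0.3, 50)   # mutant expressing hClC2, simulated

f_stat, p = stats.f_oneway(wt, gef1, clc2)
print(f"one-way ANOVA: F={f_stat:.1f}, p={p:.3g}")

diameters = np.concatenate([wt, gef1, clc2])
labels = ["wt"] * 50 + ["Gef1p-"] * 50 + ["hClC2"] * 50
print(pairwise_tukeyhsd(diameters, labels))   # pairwise group comparisons
```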
Flow Cytometry
Cells grown in liquid YNB were collected from samples of three independent transformations, the optical density was measured in a spectrophotometer (480 nm) after 4 h of induction with galactose, and cells were sorted using a fluorescence-activated cell sorter (FACSCalibur; Becton Dickinson). Acquisition and analysis of the FACS data were performed using CELLQUEST software (Becton Dickinson) and SUMMIT V4.3 (DAKO Colorado, Inc.). Forward light scatter was a direct indication of cell volume, whereas side scatter reflected the complexity of the cell surface. Data analysis was performed with Windows Multiple Document Interface Flow Cytometry Application, Version 2.9 (WinMDI V2.9).
O2 Consumption Rates
Yeast strains were grown at 30°C in SC-U medium with 2% dextrose to an OD600 of 3 and then arrested in M phase by adding 1.5 mg/mL of nocodazole in 1% DMSO. After 4 h, yeast were collected by centrifugation for 5 min at 1000 g at 4°C and resuspended in SC-U supplemented with 2% galactose. After 4 h of incubation, cells were counted in a Neubauer chamber (Optyk Labor), and the culture was diluted in 3 mL of fresh SC-U set at 30°C. The rate of O2 consumption was determined using a Clark-type oxygen electrode and a YSI Benchtop Biological Oxygen Monitor (model 5300) as reported (López-Rodríguez et al., 2007; Hernandez-Muñoz, Díaz-Muñoz, & Chagoya de Sanchez, 1992).
Scanning Electron Microscopy (SEM)
Yeast were transformed and fixed in 3% glutaraldehyde in H2O for 2 h. Then the cells were covered with a thin coat of gold using an Ion Sputter FC 1100 (Jeol) operating at 1200 kV and 5 mA for 10 min. Samples were observed and photographed under an electron microscope (Jeol, JSM-54110LV) at 10,000X.
The Gef1p− Yeast Phenotype is Reverted by Overexpression of hClC1 and hClC2
The first indication of the complementation of the pet phenotype by expression of hClC1 or hClC2 was provided by a simple drop assay. Figure 1A illustrates that the mammalian genes revert the size of spotted cells when expressed in a Gef1p− strain growing on low-iron and non-fermentable carbon sources. This complementation was found when the ClCs were introduced and induced to express under the GAL1 promoter contained in the plasmid pYES. Flis et al. (2005) reported that expression of the mouse ClC2 was not capable of complementing a Gef1p− strain when using a single-copy plasmid, and thus we decided to see if this was also true with our strains and the human ClCs.
The ClC clones derived from pUG35 were grown in restrictive media supplemented with galactose to rule out any ability of this carbohydrate to revert the phenotype. Consistent with Flis' findings, neither hClC1 nor hClC2 reverted the pet phenotype (Figure 1B). The results above suggest that the reversion of the pet phenotype observed in the first series of experiments was due to a dose effect, since the expression derived from pYES is expected to be higher than that driven by the MET25 promoter of the pUG35 vector.
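The quadrant analysis used in the Results (counting events by FSC-H and SSC-H, as in quadrants R8 and R9) was done with CELLQUEST/SUMMIT; a minimal sketch of the same gating logic, with invented thresholds and simulated events:

```python
# Hypothetical sketch of quadrant gating: counting events whose forward
# scatter (FSC-H, ~cell size) and side scatter (SSC-H, ~surface complexity)
# fall in given quadrants. Thresholds and event values are invented.
import numpy as np

rng = np.random.default_rng(0)
fsc = rng.normal(500, 120, 5000)   # arbitrary FSC-H units per event
ssc = rng.normal(300, 90, 5000)    # arbitrary SSC-H units per event

FSC_GATE, SSC_GATE = 550, 330      # assumed quadrant boundaries
in_r8 = np.sum((fsc > FSC_GATE) & (ssc > SSC_GATE))
in_r9 = np.sum((fsc > FSC_GATE) & (ssc <= SSC_GATE))
print(f"events in R8: {in_r8}, events in R9: {in_r9}")
```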
A visual inspection of the cells transformed with pYES under the light microscope revealed that the cell diameter correlated well with the size of the colony (Figures 1A and 2A). The cell diameters of the Gef1p− strain (5.91 ± 0.06 µm) and of the strain transformed with the multicopy vector alone (5.95 ± 0.03 µm) differed from that of the wt strain (6.88 ± 0.03 µm). Cells expressing hClC1 were clearly larger (6.77 ± 0.01 µm) than the knock-out mutant but did not reach the full size of the wt, whereas those expressing hClC2 (6.82 ± 0.03 µm) were indistinguishable from the wt yeast (Figure 2A). The yeast surface was analyzed in samples of these cells under the SEM (Figures 2B and 3B), but other than changes in cell diameter we did not observe any major difference among the samples. Observation of the cells under the light microscope showed a wide diversity of cell diameters among samples; thus flow cytometry, a standard, easy and quick methodology, was used to analyze a large population of yeast samples to get a better idea of the variation within and among the samples.
The results of flow cytometry are plotted in Figure 2C, which shows the wide variability of cell size regardless of the cell sample, consistent with the diversity of cell diameters observed under the light microscope. Nevertheless, comparing the distribution of cells in quadrants R8 and R9 revealed a difference in volume and complexity of the cell surface between the wt and the GEF1 knock-out; these parameters allowed us to establish a clear quantitative reference to determine whether reversion of the pet phenotype had occurred. The number of events recorded in quadrant R8 for the GEF1 mutant and for cells transformed with the core vector (pYES) was only slightly different (725 ± 4 and 943 ± 28, respectively), whereas the number of cells expressing ClC1 (2116 ± 8) in R8 approached that of the wt (1998 ± 14), and the number of ClC2-expressing cells in R8 was intermediate (1524 ± 10), suggesting a partial reversion (Figure 2C-D).
Oxymetry
Respiratory metabolism is significantly diminished in strains that lack the GEF1 gene (Gaxiola et al., 1998); thus, we determined whether the level of oxygen consumption was normal in the reverted strains expressing the hClCs. In three independent experiments the wt strain presented a higher respiratory rate (3.65 ± 0.68 nA O2/min per 10⁶ cells) compared to Gef1p− transformed with the core plasmid (3.04 ± 0.10 nA O2/min per 10⁶ cells, Table 1). However, when the plasmid expressed either hClC1 or hClC2, the strain exhibited the normal level of oxygen consumption: 3.62 ± 0.05 or 3.59 ± 0.05, respectively. (In three independent experiments, the expression of hClC2 showed a slightly lower rate of O2 consumption; however, this was not statistically significant.) Expression of the hClCs in the wt strain did not increase the O2 consumption (Table 1, RGY30 strain), the size of the colony, or the microscopic characteristics (not shown).
Expression of hClCs Tagged with GFP
In sharp contrast to the results described above, expression of hClC1 and hClC2 fused to GFP using a centromeric plasmid (pUG35) did not completely revert the pet phenotype. The spot assay correlated well with the images taken under the light microscope (Figures 1A and 3A), whereas flow cytometry revealed that the characteristics of the mutant were not totally reverted by expressing the ClCs (Figure 3C and D). This was also evidenced when observing the cell structure under the SEM, which showed that cells with the pet phenotype remained in the population of yeast transformed with either hClC1 or hClC2, and only a few cells among the population appeared to have reverted to the wt phenotype (Figure 3B).
A previous report describing similar results suggested that for the proper expression of ClC2, several codons have to be switched to those more frequently found in yeast; in addition, overexpression of the Kha1 transporter is also needed to suppress the pet phenotype (Flis et al., 2005). In our results, fluorescence derived from the hClCs tagged with GFP and expressed in the wt strain was observed in intracellular compartments but mainly distributed around the periphery of the cells (Figure 4); thus, it is not necessary to introduce the yeast-preferred codons for proper expression of the hClCs. When those plasmids were used to transform the Gef1p− strain, roughly 20-25% of the cells expressed the gene, and the fluorescence was found in intracellular compartments (Figure 4); however, as indicated above, this level of expression did not suffice to rescue the pet phenotype. A remaining question was whether the GFP-tagged channels were functional; to probe that, we used X. laevis oocytes to test the electrophysiological properties of these channels.
Functional Expression of hClC1 and hClC2 Channels in X. laevis Oocytes
Injection into frog oocytes of 50 nl of RNA (1 µg/µl) isolated from yeast expressing either hClC1 or hClC2 induced the expression of functional channels; in contrast, GEF1 did not present a voltage-dependent current.
The resting membrane potential of oocytes injected with hClC1 was usually around −25 mV, while that of uninjected oocytes, as well as those expressing hClC2, oscillated between −35 and −40 mV. Voltage-stepping the oocytes from 0 mV to between −120 and +40 mV in 20-mV steps elicited currents derived from the expressed channels. Sample currents generated by hClC1, hClC2, and the GFP-tagged channels are shown in Figure 5. hClC1 and hClC1-GFP showed a fast activation and a pronounced deactivation at voltages more negative than −100 mV, as previously demonstrated (Lorenz, Pusch, & Jentsch, 1996; Pusch, Steinmeyer, & Jentsch, 1994). hClC2 and hClC2-GFP showed a slow activation upon hyperpolarization of the plasma membrane, similar to that previously reported (Gründer, Thiemann, Pusch & Jentsch, 1992; Thiemann, Gründer, Pusch & Jentsch, 1992). This indirect assessment of the channels expressed in yeast gives no indication that the GFP tag alters the properties of the channel.
Figure 5. Functional expression in frog oocytes. Neither control nor GEF1-injected oocytes generated a voltage-activated current, whereas oocytes injected with RNA from yeast induced to express ClC1 and ClC2, whether tagged or not with GFP, generated voltage-gated currents.
Discussion
The aim of this study was to determine whether the opposite activation kinetics of hClC1 and hClC2, i.e. fast or slow activation, respectively, as well as other functional differences, were related to their ability to revert the pet phenotype of a Gef1p− strain of S. cerevisiae. Initially, we observed that both hClCs were able to revert the pet phenotype of the colonies formed by GEF1 mutant cells; however, a previous report by Flis et al. (2005) contrasted with our observations. Thus, we repeated our experiments using a centromeric expression plasmid as reported by Flis et al. (2005); in this case our results were consistent with theirs; that is to say, the expression of hClC1 or hClC2 derived from pUG35 did not rescue the mutant phenotype. Therefore, we can explain our initial results by a dose effect: overexpression of the hClCs under the GAL1 promoter in pYES permits many channels to be properly located in the cell. In contrast, expression of hClCs under the MET25 promoter and from a centromeric plasmid did not induce the expression of sufficient, properly located protein to complement the functions lost in the GEF1 mutant.
A second possibility to explain the inability of our pUG35-derived plasmids to revert the pet phenotype is the presence of GFP at the carboxy-terminus of the channel. However, the membrane currents generated by oocytes injected with the ClCs showed no evident differences between the channels that were tagged or not with GFP. The fluorescence detected in yeast induced to express the GFP-tagged human channels indicates that it is not absolutely necessary to change the codons to those preferred by S. cerevisiae, as reported by Flis et al. (2005). This may reflect differences in the nucleotide sequence between the murine cDNAs used in their studies and the human genes that we used in our experiments. Furthermore, the wt strain expressing ClC1- or ClC2-GFP presented fluorescence at the cell periphery.
Considering that hClC1 and hClC2 show obvious differences in their activation mechanism and kinetics, we had aimed to correlate their ability to revert the pet phenotype with the specific properties of one of the channels; unexpectedly, both human ClCs induced the reversion. GEF1 does not show voltage dependence either in HEK cells or in X. laevis oocytes heterologously expressing the protein (López-Rodríguez, 2007) (Figure 5), as expected for what is considered mainly an intracellular chloride transporter. There is also some evidence showing a functional role of ClC1 and ClC2 in intracellular compartments, as well as their active role in transporting protons and their modulation by pH (Steinmeyer et al., 1991; Bösl et al., 2001). This last functional property may explain the ability of both channels to compensate for the absence of Gef1p in the knock-out yeast.
Figure 1. Overexpression of hClCs reverts the pet phenotype. A: The human ClCs expressed under the GAL1 promoter revert the pet phenotype of a Gef1p− strain. B: In contrast, a centromeric plasmid in which the ClCs were expressed under the direction of MET25 did not revert the phenotype of the strain.
Figure 2. Phenotype of reverted cells. A: Sample images of wt and reverted cells seen under the light microscope. Bar = 10 µm. B: Sample images of the yeast under the SEM; notice the diversity of cell sizes within and among samples. Bar = 5 µm. C: Distribution of the cells resulting from the flow cytometry; FSC-H (cell size) versus SSC-H (surface complexity); comparative data were taken from quadrant R8. D: Distribution of cells (events) in R8 and R9 in three independent experiments (means ± SE).
Figure 3. Expression of ClCs derived from pUG35 does not revert the pet phenotype. A: Images of cells under the light microscope. Bar = 10 µm. B: Cells seen under the SEM. Bar = 5 µm. C: Distribution of cells in the flow cytometry assay. D: Distribution of the cells in R8 and R9 in three independent experiments (means ± SE).
Figure 4. Distribution of GFP-tagged channels. A: Fluorescence of ClC1 and ClC2 expressed in the wt strain was observed at the periphery of the cells. B: The same plasmids did not revert the pet phenotype, and the fluorescence was localized in intracellular compartments. Bar = 10 µm.
2018-12-12T21:14:13.220Z
2013-07-02T00:00:00.000
{ "year": 2013, "sha1": "f3295375c7470238626e28ba1496e06eca8b735e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5539/jmbr.v3n1p68", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "f3295375c7470238626e28ba1496e06eca8b735e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
244642653
pes2o/s2orc
v3-fos-license
Fusion Analysis of Economic Data of the Medical and Health Industry Based on Blockchain Technology and Two-Way Spectral Cluster Analysis
Due to its huge potential in gene expression analysis, which is helpful for disease diagnosis, new drug development, and life science research, the two-way clustering algorithm was proposed and has been widely used in gene expression data research. In order to understand the economic data of the medical and health industry, this paper analyzes the economic data of the medical and health industry in different regions of China based on blockchain technology and two-way spectral cluster analysis and compiles statistics on the economic data of the medical and health industry in the eastern, central, and western regions of China. This paper studies the development status of China's medical and health industry and the factors affecting the agglomeration of the medical and health service industry and analyzes them with the blockchain technology and two-way spectral cluster analysis method. The results show that the overall development trend of China's medicine and health is from government-led to government, society, and individual sharing. After the transformation based on blockchain technology and two-way spectral cluster analysis, the output value of the pharmaceutical industry increased by about 10%.
Introduction
Traditional clustering analysis algorithms mainly deal with static data information. Due to the high-speed, real-time, and continuous characteristics of real-time data streams, traditional clustering analysis algorithms cannot be used. Blockchain technology is essentially a distributed witness technology. The term "distributed" means that the data are not concentrated in a certain data server center, but stored at various nodes in the network. The network members themselves are the storage carriers of the data and directly share, copy, store and synchronize the data. The term "witness" means confirming and notarizing the information uploaded to this distributed network. Once the information is uploaded and verified successfully, it cannot be tampered with, achieving the purpose of "witness." The data are stored in the blockchain instead of a centralized server, which protects the data from being tampered with, making the data more credible and reliable. In addition, the permanent preservation of data also prevents repudiation. Therefore, blockchain technology fundamentally solves many of the problems of current traditional centralized systems that arise from the involvement of third parties.
Different from general clustering methods, two-way clustering must cluster not only the genes but also consider changes in the experimental conditions at the same time. A clustering composed of object subsets and attribute subsets identifies gene combinations with consistent expression patterns in subsets of specific conditions; that is, two-way clustering. The clustering changes found in the dynamic analysis of data streams play an important role in the fusion analysis of economic data of the medical and health industry. For the fusion analysis of economic data in the medical and health industry based on blockchain technology and two-way spectral clustering analysis, experts at home and abroad have conducted many studies. Yokoya showed the scientific results of the 2017 Data Fusion Competition organized by the IEEE Geoscience and Remote Sensing Society Image Analysis and Data Fusion Technical Committee.
It aims to establish an accurate model (evaluated by the accuracy index of an undisclosed test city reference) that is computationally feasible (evaluated by a limited-time test phase) [1]. Paola believes that the use of multisensor data fusion technology is essential to effectively merge and analyze heterogeneous data collected by multiple sensors, which are generally deployed in smart environments. A context-aware, self-optimizing, and self-adaptive sensor data fusion system based on a three-tier architecture is proposed. The results show that the proposed solution is superior to the static method of context-aware multisensor fusion and achieves considerable energy savings while maintaining a high degree of inference accuracy [2]. Liu believes that there is an urgent need to develop a data fusion method that can integrate data from multiple sensors to better characterize the randomness of the degradation process. The article developed a method to construct a health index by fusing multiple degradation-based sensor data [3]. In his article, Ghamisi proposed a new framework using extinction profiles (EP) and deep learning to fuse hyperspectral data and data derived from light detection and ranging (LiDAR). The results are compared with other methods, and the proposed method achieves accurate classification results. It should be noted that the article uses the concept of deep learning to integrate LiDAR and hyperspectral features for the first time, providing a new opportunity for further research [4]. Chen FC proposed a deep learning framework based on a convolutional neural network (CNN) and a naive Bayesian data fusion scheme, called NB-CNN, which is used to analyze individual video frames for crack detection. At the same time, a new data fusion scheme is used to aggregate the information extracted from each video frame to improve the overall performance and robustness of the system. To this end, a CNN is proposed to detect crack patches in each video frame; the proposed data fusion scheme maintains the temporal and spatial consistency of the cracks in the video, and the naive Bayesian decision effectively discards false positives [5]. In order to create a positioning system that provides high-availability attitude estimation, Tao Z also integrated dead-reckoning sensors. The data fusion problem is then expressed as sequential filtering. A reduced-order state-space modeling of the observation problem is proposed to provide an easy-to-implement real-time system. Experimental results show that, in terms of accuracy and consistency, this tightly coupled method performs better than the loosely coupled method that uses GNSS positioning points as input [6]. Beyca integrated multiple in situ sensor signals to detect initial abnormalities in the ultraprecision machining (UPM) process. Through the development of a new supervised learning method, the DP model state estimation is combined with the evidence-theory sensor data fusion method to make a cohesive decision about the UPM process conditions, with a detection and classification accuracy of 90% [7]. These studies provide a reference for the present paper, but due to certain problems in the related research algorithms and insufficient data samples, the results of related research are not consistent. In this paper, we make two innovations. One is that the paper proposes a method to search for various biclusters using different bicluster quality evaluation indicators.
Second, it compares and analyzes the experimental results of the algorithm in this paper and other commonly used biclustering integration methods on expression data. The comparison results show that the method based on blockchain technology and two-way spectral clustering analysis proposed in this paper is better than the other methods in both quality indicators and time performance.
Blockchain Technology
Blockchain is derived from the underlying support technology of the Bitcoin network. It is a decentralized public ledger facing the world [8]. The block header contains the version number, timestamp, random number, difficulty value, hash value of the previous block, and the root hash value of the Merkle tree, as shown in Figure 1 [9]. The blockchain is built on the entire network, and the extension of the blockchain network is convenient: any place with access to the Internet can be connected to the blockchain, so that transactions can be realized across borders and regulatory regions, reducing supervision costs and improving convenience. The blockchain is the storage form of blockchain technology: it is composed of "blocks" connected in chronological order, and the corresponding information is recorded in each "block." Blockchains can be divided into three types, namely public chains (represented by Bitcoin and Ethereum), alliance chains (represented by the R3 alliance and the BCOS platform), and private chains. Among them, the public chain is an open platform facing the world: any individual or organization can freely access and use the services of the public chain and can also withdraw freely [10]. As the underlying chain, the public chain allows decentralized applications for specific businesses to be developed on top of it. The contribution of nodes in the public chain is rewarded with digital tokens, and participating nodes around the world jointly maintain the public chain. The public chain has achieved complete decentralization, but it lacks effective supervision, and transaction throughput is relatively low; at present, it cannot fully support commercial applications with large business volumes. An alliance chain is jointly maintained by several organizations and is mainly used as a blockchain platform providing a new way of cooperation between organizations, reducing the cost of business collaboration between alliance members and improving business operation efficiency [11,12]. An alliance chain may have no token mechanism; nodes are provided by alliance members, the generation of each block is jointly determined by preselected nodes, and other nodes can participate in verification and transactions. The alliance chain provides a supervision interface and even allows the setting of supervision nodes, achieving a kind of semidecentralization. A private chain is a blockchain system managed by an individual or a single organization, which holds all the read and write permissions of the blockchain. Its transaction throughput is much higher than that of a public chain, and it is generally used for the internal business of exchanges and financial institutions. With the help of the blockchain, the platform improves business efficiency within the organization in a low-cost way [13].
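The block-header structure just described (version, timestamp, nonce, difficulty, previous-block hash, Merkle root) can be illustrated with a minimal hash-chain sketch; this is a toy construction for illustration only, not the format of any real chain:

```python
# Toy sketch of block-header chaining. The fields mirror the text above
# (version, timestamp, nonce, difficulty, previous hash, Merkle root).
import hashlib
import json
import time

def merkle_root(tx_hashes):
    """Fold transaction hashes pairwise into a single Merkle root."""
    layer = tx_hashes or [hashlib.sha256(b"").hexdigest()]
    while len(layer) > 1:
        if len(layer) % 2:                 # duplicate last hash on odd layers
            layer.append(layer[-1])
        layer = [hashlib.sha256((a + b).encode()).hexdigest()
                 for a, b in zip(layer[::2], layer[1::2])]
    return layer[0]

def make_block(prev_hash, txs, nonce=0, difficulty=1):
    header = {
        "version": 1,
        "timestamp": int(time.time()),
        "nonce": nonce,
        "difficulty": difficulty,
        "prev_hash": prev_hash,
        "merkle_root": merkle_root(
            [hashlib.sha256(t.encode()).hexdigest() for t in txs]),
    }
    header["hash"] = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()).hexdigest()
    return header

genesis = make_block("0" * 64, ["genesis tx"])
block1 = make_block(genesis["hash"], ["tx a", "tx b"])
# Tampering with genesis changes its hash and breaks block1's prev_hash link.
print(block1["prev_hash"] == genesis["hash"])
```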
The essence of a smart contract is a collection of data (state) and code (business functions), stored at a specific address on the Ethereum blockchain. Smart contracts can be triggered by transactions on the blockchain, and their code can read data from and write data to the blockchain [14]. Ethereum uses smart contracts to extend the functionality of the blockchain to support developers in building decentralized applications. At present, thousands of decentralized applications are being developed and deployed based on Ethereum, and hundreds of decentralized applications are already running stably on the Ethereum blockchain network [15]. Traditional centralized systems face problems such as high cost, low business operation efficiency, and insecure data storage. From the nature of the blockchain, it can be seen that blockchain can provide good solutions [16]. In a blockchain system, there is no need for a trusted third party to provide a credit endorsement, and the nodes in the network can still carry out normal transactions and business operations in an environment where they do not need to trust each other. Data do not need a centralized server for storage and management but are secured by cryptographic techniques, distributed consensus algorithms, and so on, so that the data cannot be tampered with and can be traced [17].
Two-Way Spectral Cluster Analysis Method
With the rapid development of science and technology in today's world, human activity has produced a large amount of data. How to quickly and fully utilize these data and find useful information in them is a major challenge [18,19]. Data mining is to mine and analyze original data from a large number of data sources to obtain effective knowledge and information so as to support guidance and decision-making. Under normal circumstances, the process of data mining mainly comprises the steps shown in Figure 2.
The two-way clustering algorithm is fundamentally different from traditional clustering algorithms. A traditional clustering algorithm performs only one-way clustering of rows or columns, while the two-way clustering algorithm considers the whole matrix at the same time; that is, it performs cluster analysis on rows and columns simultaneously to detect the local information of the matrix [20]. Moreover, in two-way clustering of gene expression data, a gene or a sample can belong to different "clusters" at the same time, or to no "cluster" at all; that is, there can be overlap between "clusters," as shown in Figure 3. Rows represent genes, and columns represent the edges between two adjacent conditions, that is, the direction of change of a gene's expression level under two adjacent conditions. The algorithm can exclude extra rows and columns from the two-way clustering results so as to shield the rows and columns contained in previous two-way clustering results, allowing the algorithm to produce different results through continuous iteration.
Two-way cluster analysis plays an important role in gene expression profile data, mainly in the following two aspects. In drug research, the results of two-way cluster analysis based on gene expression data have played a great role in research on drug mechanisms, drug development, the judgment of drug efficacy, and the detection of drug targets. In disease diagnosis, cancer heterogeneity is the biggest difficulty facing current cancer diagnosis and treatment; however, two-way cluster analysis of gene chip data can be used to identify cancer subtypes, thereby enabling the development of personalized treatment approaches. It can also be used to detect new tumor markers for early diagnosis and corresponding treatment.
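As a concrete illustration of two-way clustering of a genes-by-conditions matrix, the sketch below uses scikit-learn's SpectralBiclustering on synthetic data; it is a generic spectral biclustering demonstration, not the paper's own algorithm:

```python
# Illustrative two-way (spectral) biclustering on a synthetic matrix with a
# hidden checkerboard structure: rows ~ genes, columns ~ conditions.
import numpy as np
from sklearn.cluster import SpectralBiclustering
from sklearn.datasets import make_checkerboard

data, rows, cols = make_checkerboard(
    shape=(120, 30), n_clusters=(4, 3), noise=5, random_state=0)

model = SpectralBiclustering(n_clusters=(4, 3), method="log", random_state=0)
model.fit(data)

# Every row (gene) and column (condition) gets a cluster label, so each
# bicluster is a submatrix of co-expressed genes under a subset of conditions.
print("row-cluster sizes:", np.bincount(model.row_labels_))
print("column-cluster sizes:", np.bincount(model.column_labels_))
```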
Most biclustering analysis algorithms are currently based on either heuristic or metaheuristic optimization methods, so these algorithms require quality evaluation indicators to score candidate biclusters and steer the direction of the search [21]. In fact, the history of biclustering research is largely a process of proposing a large number of biclustering indicators. The quality of the bicluster evaluation indicator directly determines the efficiency and benefit of a biclustering analysis algorithm. In two-way clustering, the two-way clustering set with the smallest average mean square residual is determined and saved as the current optimal two-way clustering set; otherwise, the iteration is terminated and the current optimal two-way clustering set is output as the result. The mean square residual of a bicluster B(I, J) can be written, in the standard Cheng-Church form, as MSR(I, J) = (1/(|I||J|)) Σ_{i∈I, j∈J} (a_ij − a_iJ − a_Ij + a_IJ)², where a_iJ, a_Ij and a_IJ are the row, column and overall means of the bicluster. Its related index is expressed as [equation not reproduced in the source], where R_Ij is the correlation index of the j-th column in the bicluster, σ²_Ij is the local variance of all elements of the j-th column of bicluster B, and σ²_j is the global variance of all elements of the j-th column of the entire gene expression data matrix A. For a bicluster B(I, J) of size k × l, the following transformation yields a matrix M of size k × (l − 1), with each element b′_ij defined as follows: [equation not reproduced in the source]. Then, the number of similarities N of any two genes in bicluster B(I, J) is defined as follows: [equation not reproduced in the source], where δ(x) = 1 when x is true and δ(x) = 0 otherwise. Based on this formula, the maximum number of similarities max N of gene i in bicluster B(I, J) is defined as follows: [equation not reproduced in the source]. Any three genes of bicluster B(I, J) are related as follows: [equation not reproduced in the source].
Each data point has neighboring points. The mapping should be reversible: Z_j is the mapping result of X_ij, and X_ij can be obtained from Z_j by the inverse transformation. In actual operation, due to the influence of noisy data or different transformation methods, there are errors between X′_ij and X_ij, as shown in the following: [equation not reproduced in the source]. An objective optimization is then performed on B and d: [equation not reproduced in the source], where ω_j is the weight of the error ε_j. According to the eigendecomposition, the minimum weighted mean square value of B is obtained, where S is the weighted covariance matrix of the neighboring points. Fitting data presumes that the data conform to some distribution, with training and analysis then carried out according to the hypothesized distribution model; therefore, learning the distribution of the feature data with an energy-based model can avoid these problems. The energy function can then be written, in the standard restricted Boltzmann machine form, as E(v, h; θ) = −Σ_i a_i v_i − Σ_j b_j h_j − Σ_i Σ_j v_i W_ij h_j, where θ is the parameter set, a_i is the bias of visible unit i, b_j is the bias of hidden unit j, and W_ij is the connection weight between the visible and hidden layers. The joint probability distribution obtained from the energy function is P(v, h; θ) = exp(−E(v, h; θ))/Z(θ), where Z(θ) is the normalization factor (partition function) in the calculation of the joint probability. The likelihood function is solved through specific calculations: [equation not reproduced in the source]. According to the states of the hidden layer units, the visible layer units are obtained in reverse: [equation not reproduced in the source]. The specific solution algorithm for P(v|θ) uses the contrastive divergence algorithm, after which the minimum mean square value of the translation vector d is calculated: [equation not reproduced in the source].
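The Cheng-Church mean squared residue given above is straightforward to compute; a minimal sketch in its standard form (not the paper's own implementation):

```python
# Standard-form mean squared residue (MSR) of a bicluster B(I, J);
# lower MSR means more coherent expression within the bicluster.
import numpy as np

def mean_squared_residue(A, rows, cols):
    """MSR of the bicluster given by row indices `rows` and column
    indices `cols` of the expression matrix A."""
    B = A[np.ix_(rows, cols)]
    row_means = B.mean(axis=1, keepdims=True)   # a_iJ
    col_means = B.mean(axis=0, keepdims=True)   # a_Ij
    overall = B.mean()                          # a_IJ
    residue = B - row_means - col_means + overall
    return float((residue ** 2).mean())

A = np.random.rand(50, 20)                      # invented expression matrix
print(mean_squared_residue(A, rows=[1, 4, 7, 9], cols=[0, 3, 5]))
```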
e weight of the sample point reflects the possibility that the point is noisy data. If the error is large, it means that the point is likely to be noise; otherwise, the point is less likely to be noise. e following functional relationship is satisfied between the weight and the error: Calculate the cost function. If the cost function ε is less than a certain threshold or the change of the cost function during two iterations ε is less than a certain threshold, the algorithm stops, and the cost function is Update the membership matrix U, and then return to the step: For the membership matrix output by the algorithm, no human intervention is required in the algorithm implementation process. In order to avoid the possible misjudgment of this method, based on the cosine similarity, the Mobile Information Systems cosine value of the angle between the point and the cluster center is used to weigh the Euclidean distance. en, Among them, t � |v j |, |v j | represents the number of samples v j in a cluster that is the cluster center and represents all sample points in the cluster where the cluster center is located. Data Fusion. e data integration process includes information retrieval, data processing, data integration, and result analysis [22]. Due to the variability of data, in the process of multisensor data integration, data must be integrated systematically, and data integration is divided into two levels according to function. All-round data connection with data preview, location recognition, and tracking was functions. High-resolution data integration is important for the analysis of trends and errors as a process to obtain the overall integration results [23]. Data fusion plays an important processing and coordination role in multi-information sources, multiplatforms, and multiuser systems, ensuring the connectivity and timely communication between the units of the data processing system and the collection center. We use Figure 4 to illustrate the data-level fusion method. Data-level fusion is based on the raw data collected by each sensor to directly perform sublevel fusion; that is, data compilation and analysis are performed before the raw data collected by each sensor is processed [24]. Data-level fusion can retain the effective information in the original data as much as possible, but its disadvantage is that when the sensor data value is too large, the statistical accuracy will be improved, and the original data will be incompletely verified. e biggest advantage of data-level fusion is that the original information is rich because the processed object is the most original data set, without any preprocessing, the loss of information is negligible, it can provide a large amount of detailed original information, and the accuracy of the fusion result is high. e disadvantage is that the amount of data that needs to be processed is extremely large, and the computer capacity and performance requirements are high. At the same time, the entire fusion process takes a long time, which will directly affect the real-time performance of the system; the original data is easily interfered with by external data, and the system should have good fault tolerance. Commonly used methods include weighted average algorithm, wavelet transform, and other algorithms [25]. In order to solve the shortcomings of data fusion, this paper is aimed at detecting the characteristics of a certain ambiguity in the data set and using fuzzy logic methods to identify and classify the detected data sets. 
Economic Status of the Medical and Health Industry
Data analysis was carried out on data for the sites where Chinese medical institutions and health centers are concentrated. In the eastern part of China, the medical-service agglomeration indices of Beijing, Tianjin, Hainan, and Shanghai are higher than 1, and those of Hebei and Shandong have been higher than 1 for nearly three years; the agglomeration indices of Jiangsu, Zhejiang, Fujian, and Guangdong are below 1, as shown in Figure 5. It can be seen that the four eastern cities of Beijing, Tianjin, Shanghai, and Hainan are densely populated and well developed, stimulating an almost unlimited demand for medical services, and they have relative agglomeration advantages from the perspective of demand. From the perspective of supply, the average number of health personnel in each medical institution in the above three cities is higher than the average in the eastern region, while the average in the four provinces, including Jiangsu, is lower than the overall level of the eastern region. The four provinces of Jiangsu, Zhejiang, Fujian, and Guangdong are eastern coastal areas, and their large population bases are the key factor in their agglomeration levels being lower than those of the other eastern regions. The statistics on the average personnel of medical institutions in the eastern region are shown in Table 1.
Through statistics on the medical industry in the western region, the study found that the concentrations of the medical service industry in Ningxia, Inner Mongolia, and Xinjiang are all above 1; Qinghai and Shaanxi are basically around 1, with the agglomeration level below 1 in most years and above 1 in only one year; and the agglomeration levels in Guangxi, Chongqing, Sichuan, Guizhou, Yunnan, and Gansu are all below 1. The result is shown in Figure 6. It can be seen that the average number of health personnel per medical institution in Ningxia, Inner Mongolia, and Xinjiang is above the western average, while the number of health personnel per medical institution in Guangxi, Chongqing, Sichuan, Guizhou, Yunnan, and Gansu is almost always less than or close to the western average, as shown in Table 2.
The study found that the concentration levels of the medical service industry in the four provinces of Anhui, Jiangxi, Henan, and Hunan were below 1, while the concentration levels of the five provinces of Hubei, Shanxi, Liaoning, Jilin, and Heilongjiang were all above 1. The average number of health personnel in each medical institution cannot, from the perspective of supply, fully reflect the concentration of the medical service industry in the region. The results of the research are shown in Figure 7. For the central region, an analysis of the Theil index and local health expenditures shows that expenditures within the region are relatively consistent, meeting the needs of the local population and being fully integrated into the government's health expenditure. There are significant differences among different provinces in China; the average ranking of the central region is 9. Compared with the eastern and western regions, an overall assessment of local health expenditure in the central region was carried out.
The average personnel of each institution in the central region are shown in Table 3. The average health staff of each medical institution in the central region of our country is slightly lower than that of the eastern region but higher than that of the western region. The values of Anhui, Hubei, Jilin, and Heilongjiang are all above the average; Henan has tended to fall below the average in recent years; and Jiangxi, Hunan, Shanxi, and Liaoning are all below the average, as is the agglomeration level of the medical service industry. The reason is that the number of medical institutions in Shanxi and Liaoning is higher than that in Jilin and Heilongjiang. We compiled statistics on the numbers of for-profit and nonprofit medical institutions in the various regions, and the results are shown in Table 4. From the table, the number of nonprofit organisations in the eastern region has increased year by year since 2015, while the number of for-profit organisations has been basically stable except in a few years; the number of nonprofit organisations in the central region has increased year by year while for-profit organisations have been basically stable; and in the western region, nonprofit organisations have likewise increased year by year while profit-making organisations have remained stable.

Data Fusion Changes. We compiled statistics on the expenditures of the medical and health industry as a percentage of GDP and by structure, and the results are shown in Figure 8. It can be seen that the overall development trend of our country's medicine and health is a transformation from a government-led form into one shared by the government, society, and individuals: government expenditures are declining gradually, and personal and social expenditures are increasing year by year, finally reaching a balance. We also compiled statistics on the changes in the medical and health industry under blockchain technology and the two-way spectral clustering analysis method, and the results are shown in Figure 9. We can see from Figure 9 that, after the adoption of blockchain technology and the two-way spectral analysis system, the medical and healthcare industry improved greatly: the output of the pharmaceutical industry increased by about 10%, costs fell, and an increase in the population served together with a reduction in bed-rest time led to a significant improvement in the medical and healthcare industry. We use examples to count different variables and analyse the fusion results; the results are shown in Figure 10.

Discussion

4.1. Algorithm Discussion. As a new type of technical means in the field of data mining, the bidirectional clustering algorithm successfully overcomes the shortcomings of traditional clustering algorithms. It can cluster in both the gene direction and the condition direction at the same time; that is, while retaining the global information, it can still mine the local information of the gene expression matrix. The traditional clustering analysis algorithm and the data stream clustering analysis algorithm are researched and analysed; the data stream clustering analysis algorithm based on density grids is mainly discussed, analysed, and summarised, and improvement ideas are put forward.
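To make the density-grid idea concrete before the DSG-stream proposal below, here is a minimal Python sketch of the generic approach: arriving points are hashed to grid cells, each cell keeps a decayed density, and adjacent dense cells are grouped offline. The cell width, decay factor, density threshold, and the simplified lazy decay are all illustrative assumptions; DSG-stream's refinements (boundary versus internal grids, influence factors, multi-granularity division) are not implemented here.

```python
import numpy as np
from collections import defaultdict

CELL = 0.5       # grid cell width (assumed)
LAMBDA = 0.998   # decay factor applied on each cell update (simplified)
DENSE = 3.0      # density threshold for a "dense" cell (assumed)

density = defaultdict(float)

def insert(point):
    # Online stage: map the point to its grid cell and update the
    # cell's decayed density (decay folded in lazily per update).
    cell = tuple(np.floor(np.asarray(point) / CELL).astype(int))
    density[cell] = density[cell] * LAMBDA + 1.0

def clusters():
    # Offline stage: flood-fill over 4-connected dense cells to form clusters.
    dense = {c for c, d in density.items() if d >= DENSE}
    seen, out = set(), []
    for c in dense:
        if c in seen:
            continue
        comp, stack = [], [c]
        seen.add(c)
        while stack:
            cur = stack.pop()
            comp.append(cur)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (cur[0] + dx, cur[1] + dy)
                if nb in dense and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        out.append(comp)
    return out

rng = np.random.default_rng(0)
stream = np.vstack([rng.normal(0, 0.4, (300, 2)),
                    rng.normal(5, 0.4, (300, 2))])
for p in stream:
    insert(p)
print(len(clusters()), "clusters found")  # expect 2 for well-separated blobs
```

The coarse cell boundaries of this baseline are exactly what motivates the boundary-grid handling introduced next.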
Combining the fast processing speed and strong real-time characteristics of the density-grid-based data stream clustering analysis algorithm, the DSG-stream algorithm is proposed to address its insufficient cluster-boundary processing and its uniform division of single-mode grids. The grid is divided with different thicknesses and granularities; the concepts of boundary grid and internal grid are introduced, and the grid influence factor is incorporated into the clustering processing. The algorithm is based on a two-stage processing framework: in the online stage, grid feature vectors are maintained and the internal grids are dynamically processed to form micro-clusters; in the offline stage, the boundary grids are clustered at fine granularity to obtain the clustering results. In the algorithm, the grid cluster and density thresholds are adjusted dynamically, which reflects real-time changes in the data; isolated grids are detected and processed, which improves the efficiency of the algorithm; and a localised algorithm based on a distributed environment is proposed. The local-node/global-node processing model further improves the processing speed of the algorithm. Experiments and comparisons verify the clustering accuracy and operating efficiency of the algorithm, as well as its processing efficiency in a distributed environment.

Pharmaceutical Industry. With the continuous reform of the medical system and the continuous expansion of medical service marketing, the impact of for-profit private medical companies on our country's medical service market first changed the behaviour of our country's general medical companies. In areas where for-profit hospitals are relatively dense, nonprofit hospitals are in fact affected by those hospitals' profit-seeking medical behaviours, which is changing the market value and service quality of our country's medical service products. Our country's medicine, especially the field of traditional medicine, has major problems such as excessive facilities, small scale, poor equipment control, information asymmetry, low efficiency, high cost, and confusion. For a long time, pharmaceutical companies have lacked their own foundations, and modern medical statistics are no exception to the drive for cost reduction and efficiency improvement in the pharmaceutical industry. The integration of social medicine logistics and of the modern medicine distribution system is the primary task in reducing the complexity of the medicine industry. During implementation, the situation in the past was that primary hospitals were unwilling to be trusted by banks owing to their weak financial strength, while large hospitals had strong financial strength and did not require bank trust. It is now required that all grassroots hospitals and community hospitals implement "two lines of revenue and expenditure"; that is, all hospital expenditures are included in financial budget management and all revenues are turned over to special financial accounts.

Conclusions

This paper considers the degree of difference between two-way clusters and the degree of fusion between clusters, and finds that the optimal number of clusters determined by two-way clustering is better than that of other algorithms. We compared the accuracy of the two-dimensional clustering algorithm. For high-density genetic data, the two-way clustering algorithm can better extract local information while retaining the global information.
It can be seen that the two-way clustering algorithm is better than other clustering algorithms. Of course, there are still some open problems in the research of this paper. Compared with the clustering ensemble algorithm, the biclustering ensemble algorithm has the extra step of reconstructing the biclusters, and usually heuristic algorithms, which easily fall into local optima, are used to solve this problem. At present, there is no literature on how to reconstruct the biclusters so as to obtain the global optimum; this remains an unexplored area and points out a clear direction for the next step of research. How to obtain useful information from these data and solve new problems is the focus of current research in the big data industry. Therefore, extending the research ideas of gene expression data biclustering analysis to other data opens up a new direction for discovering and solving new problems in future work.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors state that this paper has no conflicts of interest.
2021-11-26T17:00:07.968Z
2021-11-24T00:00:00.000
{ "year": 2021, "sha1": "fdeca5527aab6b2345fd7dba6f399288a64f4b06", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2021/7731387", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "cc768661319cb12560bf65d77c774d0995fed81e", "s2fieldsofstudy": [ "Medicine", "Economics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
209488921
pes2o/s2orc
v3-fos-license
Changes in neuronal activity across the mouse ventromedial nucleus of the hypothalamus in response to low glucose: Evaluation using an extracellular multi‐electrode array approach

Abstract The hypothalamic ventromedial nucleus (VMN) is involved in maintaining systemic glucose homeostasis. Neurophysiological studies in rodent brain slices have identified populations of VMN glucose‐sensing neurones: glucose‐excited (GE) neurones, cells which increased their firing rate in response to increases in glucose concentration, and glucose‐inhibited (GI) neurones, which show a reduced firing frequency in response to increasing glucose concentrations. To date, most slice electrophysiological studies characterising VMN glucose‐sensing neurones in rodents have utilised the patch clamp technique. Multi‐electrode arrays (MEAs) are a state‐of‐the‐art electrophysiological tool enabling the electrical activity of many cells to be recorded across multiple electrode sites (channels) simultaneously. We used a perforated MEA (pMEA) system to evaluate electrical activity changes across the dorsal‐ventral extent of the mouse VMN region in response to alterations in glucose concentration. Because intrinsic (ie, direct postsynaptic sensing) and extrinsic (ie, presynaptically modulated) glucosensation were not discriminated, we use the terminology 'GE/presynaptically excited by an increase (PER)' and 'GI/presynaptically excited by a decrease (PED)' in the present study to describe responsiveness to changes in extracellular glucose across the mouse VMN. We observed that 15%‐60% of channels were GE/PER, whereas 2%‐7% were GI/PED channels. Within the dorsomedial portion of the VMN (DM‐VMN), significantly more channels were GE/PER compared to the ventrolateral portion of the VMN (VL‐VMN). However, GE/PER channels within the VL‐VMN showed a significantly higher basal firing rate in 2.5 mmol L-1 glucose than DM‐VMN GE/PER channels. No significant difference in the distribution of GI/PED channels was observed between the VMN subregions. The results of the present study demonstrate the utility of the pMEA approach for evaluating glucose responsivity across the mouse VMN. pMEA studies could be used to refine our understanding of other neuroendocrine systems by examining population level changes in electrical activity across brain nuclei, thus providing key functional neuroanatomical information to complement and inform the design of single‐cell neurophysiological studies.

| INTRODUCTION

The brain is critical for maintaining systemic glucose homeostasis, in part by direct sensing of changes in glucose levels. Neurophysiological studies in rodent brain slices have identified glucose-sensing neurones that change their firing frequency in response to alterations in local glucose levels. Glucose-sensing neurones are broadly subdivided into two groups: glucose-excited (GE) and glucose-inhibited (GI). GE neurones show enhanced firing activity in response to higher ambient glucose concentrations, whereas the reverse is true for GI neurones. The hypothalamus is a key brain region mediating this central glucose sensing. The VMN is one of the best studied hypothalamic nuclei with respect to glucose-sensing neurones, with both GE and GI neurones observed. 1,5 VMN glucose-sensing neurones play an important role in detecting and reacting to glucose deficit, regulating both the counter-regulatory response to hypoglycaemia and glucoprivic feeding. 6 GE and GI neurones are defined as being directly/intrinsically responsive to alterations in extracellular glucose levels. 7
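It may help to see how the GE/GI distinction above translates into a simple rate-based channel classification. The Python sketch below labels a recording channel from its mean firing rates at two glucose concentrations; the ±20% change threshold, the function name, and the example rates are illustrative assumptions, not the study's actual criteria.

```python
import numpy as np

def classify_channel(rate_high, rate_low, frac=0.20):
    """Label one channel from mean multi-unit firing rates (Hz) recorded in
    high (2.5 mmol/L) and low (0.1 mmol/L) glucose. A channel whose rate
    falls by more than `frac` when glucose is lowered is called GE/PER;
    one whose rate rises by more than `frac` is called GI/PED. The
    fractional-change threshold is an illustrative assumption."""
    base = max(rate_high, 1e-9)
    change = (rate_low - rate_high) / base
    if change <= -frac:
        return "GE/PER"   # excited by higher glucose: slows in low glucose
    if change >= frac:
        return "GI/PED"   # inhibited by higher glucose: speeds up in low glucose
    return "non-responsive"

# Example: mean rates for a few channels in 2.5 and 0.1 mmol/L glucose
high = np.array([4.1, 0.8, 2.5, 3.0])
low  = np.array([1.2, 1.9, 2.4, 3.1])
print([classify_channel(h, l) for h, l in zip(high, low)])
# ['GE/PER', 'GI/PED', 'non-responsive', 'non-responsive']
```

The same rule can be applied per spike-sorted single unit to separate SUA from MUA responses, which is how population- and single-cell-level summaries can be produced from one recording.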
Indeed, in the presence of the voltage-gated sodium channel blocker tetrodotoxin (TTX), direct postsynaptic changes (ie, in the resting membrane potential) are seen in both GE and GI neurones in various brain areas. 8,9 Three subtypes of non-intrinsic, glucose-sensing VMN neurones have also been described, which are modulated by presynaptic glutamatergic inputs. These extrinsically glucose-sensing neurones include PED neurones, which are presynaptically excited by a decrease (PED) in extracellular glucose levels (from 2.5 to 0.1 mmol L-1 glucose), and PER and PIR neurones, which are either presynaptically excited (PER) or inhibited (PIR) by an increase in extracellular glucose levels (from 2.5 to 5-10 mmol L-1 glucose), respectively. 10 Unlike other hypothalamic nuclei, where the neuropeptide/neurotransmitter phenotype of glucose-sensing neurones has already been identified, 1 the corresponding phenotype in the VMN is less well established. Because only a single cell can be studied at a given time, and given that intrinsic GE and GI neurones are estimated to together comprise only approximately 20% of the VMN neuronal population, 11 the patch clamp technique is both time-consuming and of relatively low yield. Furthermore, because few glucose-sensing neurones can be recorded per brain slice, the investigator, who in this circumstance is ultimately searching for specialised glucose-sensing neurones, potentially introduces unintentional anatomical selection bias into the recordings by targeting parts of the VMN known to be enriched in these cells.

Multi-electrode arrays (MEAs) comprise an electrophysiological tool enabling the electrical activity of large neuronal groups to be simultaneously recorded across multiple electrode sites (channels). This powerful neurophysiological extracellular recording technique is used both in vivo and ex vivo. 12 An advantage of this method compared to others is the high spatial and temporal resolution that it offers, enabling the functional investigation and mapping of complex and heterogeneous nuclei, such as the VMN. 13,14 Each MEA electrode site can record the electrical activity from a population of neurones, referred to as multi-unit activity (MUA). Use of conventional spike sorting methods allows the activity of single cells (single-unit activity [SUA]) to be discriminated from the MUA. 15 We have previously employed perforated MEA (pMEA) technology for ex vivo hypothalamic recordings from mouse brain slices 16 because of the improved slice perfusion rate, long-term recording stability and high signal-to-noise ratio. 17 In the present study, we have used an equivalent pMEA extracellular recording approach in ex vivo adult mouse brain slices, aiming to objectively evaluate changes in neuronal activity across the dorsal-ventral extent of the medial portion of the VMN in response to changes in extracellular glucose concentration. To our knowledge, this is the first study utilising the MEA system to evaluate glucose responsivity across the mouse VMN neural network.

| In vitro multi-electrode recordings

MEA recordings from acute brain slices were performed as described previously. 16,19 After 1 hour of rest, a VMN-containing mouse brain slice was placed, recording side down, onto a 60pMEA100/30iR-Ti-gr perforated multi-electrode array (pMEA; Multi Channel Systems, MCS GmbH, Hanover, Germany). These arrays comprise 60 electrodes in a 6 × 10 layout (one is a reference electrode), with each electrode 100 µm apart and 30 µm in diameter, covering a total area of approximately 707 µm². Thus, anatomically, a single array can cover the entirety of the mouse VMN. 18
Contrast illumination (both underneath and above the brain slice) was used to ensure accurate placement of the slice.

| Glucose responsiveness

To investigate the glucose-sensing capability of VMN neural networks in response to changing glucose concentrations, recordings were completed while the extracellular glucose concentration was altered.

| Anatomical locations

Offline overlay of the pMEA electrode sites and the respective recorded slice images, with the aid of key landmarks (the shape of the hippocampus, the third ventricle, the median eminence and the location of the optic tracts) and reference to the Mouse Brain Atlas, 18 allowed the VMN to be studied and, importantly, to be distinguished from non-VMN regions, with the latter being excluded from VMN-related analyses (Table 2). In summary, although we found evidence of glucose-sensing

| More glucose-excited (GE/PER) channels were found in the dorsomedial compared to the ventrolateral subdivision of the VMN

Within the VMN, the DM-VMN had significantly more GE/PER channels than the VL-VMN in response to a lowering of the extracellular glucose concentration from 2.5 to 0.1 mmol L-1 for 40 minutes (Figures 6A and 4 and Table 3). There was no significant difference in the distribution of GI/PED channels between these VMN subregions (Figure 4 and Table 3).

| Glucose-excited (GE/PER) channels in the ventrolateral portion of the VMN displayed a higher spontaneous basal firing frequency

The baseline firing frequency of GE/PER channels (ie, in 2.5 mmol L-1 glucose; MUA analysis) was significantly higher in the VL-VMN compared to the DM-VMN region in response to a lowering of the extracellular glucose concentration from 2.5 to 0.1 mmol L-1 for 40 minutes (Figure 6B-D). From these data, a representative heat map was generated in which the firing activity across the VMN was plotted relative to anatomical location: this analysis indicated that the channels with the highest average basal firing frequency were located in the VL-VMN proximal to the central region (Figure 7).

| Analysis of the single-unit data supported the observations of the multi-unit analysis

The SUA supported the MUA data, with 60% of single-units across the VMN being GE/PER (Table 4) and <7% being GI/PED in response to a lowering of the extracellular glucose concentration from 2.5 to 0.1 mmol L-1 for 40 minutes, the latter being a higher percentage than was seen in the GI/PED MUA analysis. There were insufficient single-units per slice within the VMN subregions to provide meaningful analysis of the within-VMN distribution.

| DISCUSSION

In the present study, we report that, using the pMEA recording technique in ex vivo brain slices, both multi-unit and single-unit neurones reported in the rat VL-VMN. 23 As such, there appears to be variability in the reported frequency of GI neurones in the VMN. It is possible that, using our extracellular recording method, the prevalence of GI responses was underestimated. Nevertheless, in the present study, the percentage of GI/PED channels across the VMN was significantly lower than that of GE/PER channels. GI neurones in the VMH that specifically express neuronal nitric oxide synthase have been shown to be important for both neuronal glucosensing and regulation of the counter-regulatory response to hypoglycaemia in vivo. 46 Furthermore, VMN GE and GI neurones have been shown to express SF-1. 47 A subpopulation of VMN SF-1 neurones has been shown to co-localise with pituitary adenylate cyclase-activating peptide (PACAP) 48 and recent work has identified a population of intrinsically GI VMN PACAP-expressing neurones. 9
Although comprising a relatively small neuronal population, they display a wide distribution across the VMN and are considered to play a role in systemic glucose regulation, because chemogenetic stimulation of these neurones in vivo resulted in inhibition of insulin secretion, which led to reduced glucose tolerance. 9 Interestingly, GI VMN PACAP-expressing neurones (15% of which were found to be neuronal nitric oxide synthase-positive) send projections both within the VMH and to other brain areas, including the paraventricular nucleus, lateral hypothalamus, aBNST, paraventricular nucleus of the thalamus and periaqueductal grey, 9 in line with the findings reported by Meek et al. 36 In summary, in the present study, we have utilised the pMEA recording technique to provide an unbiased assessment of the percentage and distribution of glucose-sensing neurones across the mouse VMN (ex vivo). Although the pMEA method cannot provide finer, more detailed information regarding the neurophysiological properties of cells, such as can be obtained by the patch clamp technique, it is still a very useful neurophysiological tool for examining both population and single-cell level responses simultaneously across brain nuclei, thus providing, using a high-throughput approach, key functional neuroanatomical information that could complement and inform the design of future single-cell studies. In practical terms, MEA recordings are arguably more user-friendly and technically less demanding to perform than the gold-standard patch clamp method and, at the same time, offer high spatial and temporal resolution. 14 Finally, the in vitro MEA method permits cell-non-invasive, long-term, stable recordings to be performed, and, in combination with optogenetic, chemogenetic, pharmacological and/or calcium imaging approaches, this method can be used to refine our understanding of other neuroendocrine networks.

ACKNOWLEDGEMENTS

We thank Dr Jonathan Brown (University of Exeter) and Dr

CONFLICT OF INTERESTS

The authors declare that they have no conflicts of interest.

DATA AVAILABILITY

The data that support the findings of this study are available from the corresponding author upon reasonable request.
2019-12-28T14:03:20.323Z
2019-12-27T00:00:00.000
{ "year": 2020, "sha1": "4c740e7239ee4c7ac52f38f6aea41e2b12619853", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jne.12824", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "83dae0c595dfee7e2d749c69fd21144ecb02f7f6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
225557767
pes2o/s2orc
v3-fos-license
Risk Analysis of Toll Road Construction Project By Using Soft System Methodology (SSM): A Case Study of Sumatera Trans Toll Road Section 1 (Terbanggi Besar – Menggala)

Constructing a toll road project always involves various technical and non-technical problems. These problems are considered risks of construction projects. Such risks will affect the project's function and result in losses in terms of cost, quality, and time, which determine the success of a project. Therefore, the purpose of this research was to identify and analyse the risks, using qualitative-quantitative research with the Soft System Methodology (SSM), on the Sumatra Trans Toll Road Project Section I in Bakauheni - Kalianda (Sta. 00+400 - Sta. 20+000). For data collection, this research used purposive sampling with six respondents. The data analysis methods used in this study are the Probability Risk Test, the Consequences Risk Test, and Soft System Methodology (SSM).

I. Introduction

The development of the Trans Sumatra Toll Road (Terbanggi Besar - Pematang Panggang) project by Hutama Karya, along 189.2 kilometres (km), is still being accelerated. By the end of 2019, an estimated 365 kilometres of toll road will have been built and opened for public use. The construction of the Trans Sumatra Toll Road is being carried out quite differently from previous practice, drawing on the latest technology: for the first time in Indonesia, the Ministry of PUPR collaborated with Hutama Karya to apply an advanced technology called VCM to accelerate the construction of the Trans Sumatra Toll Road. The toll road construction in this project has also never been free from various technical and non-technical problems. These problems are known as construction project risks. These risks will affect the project's function and result in losses in terms of cost, quality, and time, which determine the success of a project (Kangari, 1995). In the end, risks can arise in both expected and unexpected ways (Smith, 1992). These risks can be managed by identifying and quantifying the risks that might occur in a project. The analysis can be done with qualitative and quantitative methods, conducted to determine the probabilities and impacts of risks in a project.

Formulation of the Problem
Based on the background above, the problems can be formulated as follows:
1. What risks might occur in the project case study?
2. What are the levels of probability and impact of the risks in the project?

Research Purposes
The objectives of this study are:
1. Identifying and analysing the risks that might occur in the project case study qualitatively.
2. Quantifying the risks that might occur using the SSM method.

Limitations
The limitations of this research are:
1. Examining the risks that occur at the construction stage in the project case study.
2. Determining the level of risk based on the results of identification and risk analysis.
3. Identifying risks based on secondary data, namely document reviews in the project case study.
4. Analysing risks using the SSM method.

Research Contribution
The results of this research are expected to make a significant contribution by:
1. Serving as an initial recommendation for managing risks that may occur during construction.
2. Providing an overview of the SSM application framework for construction risk management on toll roads.
Definition of Risk Management
The PMBOK Guide, 5th Edition, describes project risk management as the process of planning, identifying, analysing, planning responses to, and controlling risks in a project. The purpose of risk management is to increase the probability of positive influences and reduce negative effects (threats). In this research, risk management is intended to analyse the risks that may occur in the project case study.

Risks During Toll Road Construction
According to the Construction and Building Guidelines (Pd T-01-2005-B) issued by the Department of Public Works, the classes and elements of toll road risk are divided into three stages, as follows:

Pre-construction stage:
- Financing: continuity of funding sources; interest during development.

Construction stage:
- Field conditions, weather, materials, supply, theft, specifications, strikes.
- Equipment: import; performance.
- Force majeure: disaster, nationalisation, revolution.

Post-construction stage:
- Operation and maintenance: implementation system, poor building construction, estimated operating and maintenance costs, inflation of operating and maintenance costs, vandalism, accident rate, security conditions and public order.
- Toll acceptance: estimated traffic volume, initial tariff and tariff adjustment, competition, inefficiency (corruption, collusion and nepotism).
- Toll obligation: exchange rate; interest.
- Force majeure during operation: disaster, nationalisation, revolution.

As mentioned before, the researcher will only analyse toll road risks during the construction phase.

Risk Management Process
The risk management process is a systematic process of planning, identifying, analysing, responding to, and monitoring project risks. It aims to increase the probability and impact of positive events and reduce the probability and impact of negative events. The process is explained in detail in the following stages:

1. Plan Risk Management
Risk management planning is the process of determining how to carry out risk management activities in a project. In short, risk management planning explains how risk is managed within a structured project.

2. Risk Identification
As part of this series of processes, risk identification begins with understanding what actually constitutes a risk. Risk identification can be done by analysing the sources of risk and analysing the problem. In this study, the researcher identified the risks using information-gathering techniques, namely interviews conducted with experts on the project case study. The outputs of this identification are then re-analysed using a fishbone diagram to produce a list of risks occurring in the project case study.

3. Qualitative and Quantitative Risk Analysis
a. Qualitative risk analysis
According to the PMBOK® Guide, 2000 Edition, p. 193, qualitative risk analysis is a method of prioritising the list of identified risks for subsequent treatment. This process is carried out by ranking risks based on their impact on project objectives, prioritising them according to their probabilities and impacts. Qualitative risk analysis can be done with the help of tools and techniques, including:
• Risk Probability and Impact Assessment. This technique investigates the likelihood that each specific risk will occur, together with its potential effects on project performance objectives such as time, cost, scope and quality, covering both negative impacts and opportunities.
• Probability and Impact Matrix. Risks can be prioritised for further quantitative analysis and responses based on their rating.
Ratings are assigned to risks based on their probabilities and impacts.
• Risk Data Quality Assessment. Risk data quality analysis is a technique for evaluating the usefulness of the data in risk management.
• Risk Categorization. Project risks can be categorised by source of risk, risk impact, and phase (engineering, procurement, and construction) to determine the project areas affected.
• Risk Urgency Assessment. Risks that require immediate action may be categorised as urgent for analysis.

In this study, the authors quantify risk through a Probability and Impact Matrix using the Soft System Methodology (SSM). SSM can be understood as follows:

Soft System Methodology (SSM)
The development of the system model is carried out by exploring unstructured problems, discussing intensively with related parties, and comparing the conceptual model with the real-world situation. In this study, risk assessment and measurement aim to establish the level of probability and impact of each risk, should it occur.

Case Study
The method used in this research is descriptive, with:
1. Case study. Case study research examines the status of research subjects with regard to a specific or typical phase of the whole personality. The purpose of a case study is to provide a detailed description of the background and characteristics of a particular case, from which general conclusions are then drawn.

Risk Identification
In detail, the interview method is used to obtain primary data through interviews with PT. PP (Persero) Tbk as the contractor. The purpose of this risk identification is to obtain a list of risks, which is then analysed using a fishbone diagram.

Qualitative Analysis
After obtaining the list of risks from the fishbone diagram, the next step is to quantify the data to obtain the probabilities and impacts of the risks that may occur, using the SSM method.

Types of Data
1. Secondary data. Secondary data were obtained from the project case study, such as project documents, etc.
2. Primary data.
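As a rough illustration of the probability-and-impact matrix described above, the Python sketch below combines ordinal probability and impact scores into a single rating. The 1-5 scales, the band thresholds, and the example construction-stage risks with their scores are illustrative assumptions, not values taken from the study or from PMBOK.

```python
def risk_rating(probability, impact):
    """Combine ordinal probability and impact scores (each 1-5) into a
    single rating, as in a probability-and-impact matrix. The band
    boundaries below are illustrative assumptions."""
    score = probability * impact
    if score >= 15:
        level = "high"
    elif score >= 6:
        level = "medium"
    else:
        level = "low"
    return score, level

# Hypothetical construction-stage risks with (probability, impact) scores
risks = {
    "weather delay":    (4, 3),
    "material supply":  (3, 4),
    "equipment import": (2, 5),
    "theft on site":    (2, 2),
}
for name, (p, i) in sorted(risks.items(), key=lambda kv: -kv[1][0] * kv[1][1]):
    score, level = risk_rating(p, i)
    print(f"{name:16s} P={p} I={i} -> score {score:2d} ({level})")
```

Sorting by the combined score produces exactly the prioritised risk list that the qualitative analysis stage hands on to quantitative analysis and response planning.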
2020-07-23T09:06:30.868Z
2020-07-21T00:00:00.000
{ "year": 2020, "sha1": "6c4f83db2703ffac73d844a3aad592a12bb329ed", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/852/1/012036", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "efbf57225fcb7272fde63866cd29cf277263a08e", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
214667378
pes2o/s2orc
v3-fos-license
A Modified Global Climate Simulation Model

In this paper, we incorporate seasonal variations of insolation into the global climate model C-GOLDSTEIN. We use a new approach for modelling insolation from the space perspective presented in the authors' earlier work and build it into the existing climate model. Realistic monthly temperature distributions have been obtained after running C-GOLDSTEIN with the new insolation component. Also, the average accuracy of modelling the insolation within the model has been increased by 2%. In addition, new types of experiments can now be performed with C-GOLDSTEIN, such as the investigation of the consequences of random variations of insolation on temperature.

Introduction
The development of climate models started back in the early 1960s, when the first models containing only the atmosphere appeared [1]. As time progressed and computational capacities increased, more components were developed and coupled together. The resulting comprehensive models are known as General Circulation Models (GCMs). These models represent powerful tools for predicting future climate changes, as well as for understanding the climate of the past. Examples of recent GCMs are MIROC-ESM-CHEM [2], CNRM-CM5 [3], IPSL [4], CSM4 [5], and CMCC-CESM [6]. In parallel, another group of models was being developed: the Earth System Models of Intermediate Complexity (EMICs). These models are more simplified than the comprehensive GCMs; however, they have a number of advantages, such as their capability to be used for forecasts of up to several millennia, as well as for performing extensive sensitivity studies. Examples of EMICs are EcBilt [7], IGSM2 [8], CLIMBER-2 [9], DCESS [10], Bern3D-LPJ [11], and LOVECLIM1.2 [12].

In this paper, we incorporate the seasonal variations of insolation into the EMIC C-GOLDSTEIN [13], which previously used yearly averages of insolation. To do this, the annual averages of insolation have been replaced by approximation curves of insolation at any particular time. The approximation was done using the least squares method, based on the results obtained in the authors' earlier work [14], where a new approach for modelling insolation was proposed. Realistic monthly latitudinal temperature distributions have been obtained after running the C-GOLDSTEIN model with the new insolation component. The average accuracy of modelling the insolation within the model has been increased from 96% to 98%. In addition, this work broadens the applications of C-GOLDSTEIN, because calculations can now be performed for any particular time of the year.

Description of the model
C-GOLDSTEIN (Global Ocean-Linear Drag Salt and Temperature Equation INtegrator) consists of a two-dimensional atmospheric model, a three-dimensional ocean model, and simple land surface and sea ice models. The full description of the model is provided in Marsh et al [13]. The longitudinal resolution of the atmospheric component is 10°, while the latitudinal resolution varies from 3° near the equator to 20° in the polar regions. The ocean component is based on thermocline equations with an additional linear drag term in the horizontal momentum equations. A condition of zero normal fluxes of heat and salt was specified at the lateral boundaries. The lower-boundary fluxes of the two prognostic variables (temperature and salinity) were set to zero. The land component has no dynamical land-surface scheme and only determines the runoff of fresh water.
The surface temperature was assumed to be equal to the atmospheric temperature, and the evaporation was set to zero. The sea ice component contains dynamic equations, which were solved for the fraction of the ocean surface covered by sea ice and the average height of the sea ice. The atmospheric component of the model is represented by an Energy Moisture Balance Model. The prognostic parameters are the air temperature and the specific humidity at the surface. The model balances heat and moisture within the atmosphere. The net flux of longwave radiation into the atmosphere was modelled as a function of the surface and atmospheric emissivities, the temperature of the underlying surface, and the Stefan-Boltzmann constant. The incoming radiation was approximated by Legendre polynomials [15], producing latitude-dependent annual average values:

S(x) = 1 + S_2 P_2(x),  with  P_2(x) = (3x^2 - 1)/2,

where S(x) is the mean annual distribution of radiation reaching the top of the atmosphere, x is the sine of latitude, S_2 = -0.477 is a constant, and P_2(x) is the second Legendre polynomial [16]. Within the model, both short-term and multi-millennium forecasts can be performed within a relatively short computational time. The standard time step used for the calculations is 0.73 days for the atmosphere and double that for the ocean. In order to obtain a near present-day climate, a 2000-year experiment needs to be performed (known as SPINUP), which starts from unrealistic initial conditions (such as a zero mean global air temperature) and then progresses until the system comes close to equilibrium.

Curve fitting procedure for incorporating the seasonality into the global climate model
In order to incorporate the seasonality into the C-GOLDSTEIN model, the amount of insolation for every latitudinal belt throughout the year, computed in the authors' earlier paper [14], was used. The amount of insolation for the odd latitudinal belts of the Northern and Southern Hemispheres is presented in Figure 1. To allow these curves to be incorporated into the code of C-GOLDSTEIN, they were approximated by functions of several different types. In particular, the curves corresponding to the first two latitudinal belts were best approximated by straight-line sections, while the best fit for the 20°-30° latitudinal belt was a combination of a wave function and straight lines. The remaining latitudinal belts, which display a more complicated shape (50°-60° and 80°-90°), were approximated by combinations of several wave functions and straight lines. The wave functions used for the approximation are of the following form:

f(t) = A sin(ωt + φ) + d,

with amplitude A, angular velocity ω, phase φ, and vertical shift d. The coefficients of the straight lines were found by simply interpolating two given points. In order to find the amplitudes and the vertical shifts of the wave functions, the ordinary least squares method was used. The optimisation was performed in MS Excel using the "Solver" add-in with the GRG (Generalized Reduced Gradient) non-linear solving method. Note that the Solver command only determines locally optimal solutions, and without specified bounds a physically unreasonable solution can result. Thus, estimates for the amplitudes and for the vertical shifts were calculated: the estimate for the amplitude was taken as half of the difference between the largest and the smallest value of the initial curve on the interval over which the approximation was made, while in the case of the vertical shift, half of their sum was taken instead.
Based on these, upper bounds for each amplitude and lower bounds for each vertical shift were then chosen so as to allow a reasonable range for the parameters being optimised. The angular velocities were fixed as ω = 2π/p, where p denotes the number of intervals over which the wave function is defined. The values for the phases were chosen manually after examining the plot obtained from the first round of optimisation. Where optimal values reached the constraints, the corresponding bounds were shifted further in order to allow an improved, physically reasonable solution to be obtained. After performing the optimisation in this way, an optimal solution was reached. For the Southern Hemisphere, the curves for the Northern Hemisphere were simply shifted by 6 months. The resulting curves for the 0°-10°, 40°-50° and 80°-90° latitudinal belts in the Northern and Southern Hemispheres are presented in Figure 2. In each figure, the blue curve corresponds to the data obtained from the model, and the red one is the approximation curve. To verify how well the values from the proposed model are replicated by the approximation, the R-squared test was applied. The R² coefficient was computed using the "Data Analysis" add-in in MS Excel, with the "Regression" analysis tool. The results obtained are shown in Table 1. The R² coefficient obtained for all latitudinal belts is greater than the required 95% confidence level; in particular, for the latitudinal belts beyond 20°, its value is 99%. Thus, the conclusion can be drawn that the fitted curves are appropriate for approximating the values obtained from the proposed model.

Simulating the seasonal variations of insolation within the C-GOLDSTEIN model
The calculations of insolation were initially performed in one of the subroutines of the global climate model where the atmosphere is initialised prior to the start of the iterations. First, we replaced the average annual values used there by the annual average values obtained from our proposed model [14]. Note that C-GOLDSTEIN has a latitudinal resolution of 20° near the polar regions; we therefore chose to extend the value for the 70°-80° belt to the 80°-90° latitudinal belt. Both sets of yearly averages were compared with satellite data from the NASA Langley Research Center Atmospheric Science Data Center Surface meteorological and Solar Energy (SSE) web portal, supported by the NASA LaRC POWER Project. 1 Note that the data in this source are given at 1° resolution, so we averaged them over each 10° latitudinal belt. The comparison of the results is shown in Table 2. The results obtained indicate a 2% increase in average accuracy compared to the insolation values used previously in C-GOLDSTEIN, with a significant increase in accuracy for the furthest polar belt. After the comparison, we reinitialised the atmosphere starting from zero initial conditions and ran C-GOLDSTEIN in SPINUP mode, leaving all other parameters set to zero (such as the carbon dioxide growth rate) in order to obtain suitable initial conditions. The model was run remotely in a high-performance computing environment. The curves obtained in Section 3 were then incorporated into the main loop of the C-GOLDSTEIN simulation model; in this way, the insolation computations are performed at each time step. The model was then run with the modified code, using the results obtained after the SPINUP run as the initial conditions.
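For readers reproducing the Section 3 procedure outside of Excel, the following Python sketch performs the same kind of bounded least-squares fit of a single wave function, with scipy's curve_fit in place of the GRG solver. The one-sinusoid-plus-shift form, the synthetic data, and the specific bounds are illustrative assumptions; the paper combines several wave functions and straight-line sections per latitudinal belt.

```python
import numpy as np
from scipy.optimize import curve_fit

p = 360.0                   # interval length (days, 360-day calendar)
omega = 2.0 * np.pi / p     # angular velocity fixed as in the text

def wave(t, A, phi, d):
    """Single wave function: amplitude A, fixed omega, phase phi, shift d."""
    return A * np.sin(omega * t + phi) + d

# Synthetic "insolation" curve for one latitudinal belt (illustrative only)
t = np.arange(0.0, 360.0, 10.0)
rng = np.random.default_rng(1)
y = wave(t, 120.0, -1.3, 300.0) + rng.normal(0.0, 5.0, t.size)

# Bounds mimic the constrained GRG optimisation described above:
# the amplitude is capped near half the data range, and the vertical
# shift is bounded below so the solution stays physically reasonable.
A0 = 0.5 * (y.max() - y.min())           # amplitude estimate (half the range)
d0 = 0.5 * (y.max() + y.min())           # vertical-shift estimate (half the sum)
popt, _ = curve_fit(wave, t, y, p0=[A0, 0.0, d0],
                    bounds=([0.0, -np.pi, 0.0], [1.5 * A0, np.pi, np.inf]))

# R-squared check, analogous to the Excel "Regression" verification
ss_res = np.sum((y - wave(t, *popt)) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("A, phi, d:", np.round(popt, 2), " R^2:", round(1 - ss_res / ss_tot, 3))
```

Widening a bound whenever the optimiser lands on it, as the text recommends, carries over unchanged: simply re-run the fit with the shifted bound.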
The time step was reduced to 1 day for the ocean (compared to the initial 1.46 days). The ocean-atmosphere time step ratio was kept unchanged (the atmospheric time step is half of the ocean one). Note that a 360-day calendar was used for simplicity (i.e. each month has 30 days). The results for the 21st day of each month are illustrated in Figure 3. The figures were obtained with a small modification to the MATLAB plotting subroutine provided together with the model software. The results can be compared with the monthly temperature distribution maps from the National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) Reanalysis Project. 2 Clearly, the obtained temperature distributions are realistic and follow all the main patterns in the actual temperature distributions from NCEP/NCAR, such as the maintenance of hot temperatures throughout the year in the equatorial regions, the alternation of the winter and summer seasons between the Northern and Southern Hemispheres, the extremely low temperatures observed in the polar regions during their winter seasons, and distinct temperature variations due to the location of the continents.

Conclusion
In this paper, we have incorporated the seasonal variations of insolation into the global climate model C-GOLDSTEIN. First, the latitudinal curves of insolation throughout the year obtained from the authors' earlier work were approximated by functions of a form more suitable for computation. These curves were then incorporated into the main loop of C-GOLDSTEIN, and the model was run remotely in a high-performance computing environment. Realistic monthly temperature distributions were obtained after running the global climate model with the new insolation component. The average accuracy of modelling the insolation within C-GOLDSTEIN has also been increased from 96% to 98%. In addition, new types of experiments can now be performed with the C-GOLDSTEIN model, because calculations can be carried out for any particular time of the year; for example, the consequences of random temporal variations of insolation on temperature can now be examined.
2020-03-27T01:00:41.041Z
2020-03-19T00:00:00.000
{ "year": 2020, "sha1": "20e11896f7778379cf8a952ae8224b3b94ef213d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "20e11896f7778379cf8a952ae8224b3b94ef213d", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science", "Physics" ] }
15868964
pes2o/s2orc
v3-fos-license
Psychological well-being in individuals with mild cognitive impairment

Objectives: Cognitive impairments associated with aging and dementia are major sources of burden, deterioration in life quality, and reduced psychological well-being (PWB). Preventative measures to both reduce incident disease and improve PWB in those afflicted are increasingly targeting individuals with mild cognitive impairment (MCI) at an early disease stage. However, there is very limited information regarding the relationships between early cognitive changes and memory concern, and life quality and PWB in adults with MCI; furthermore, PWB outcomes are too commonly overlooked in intervention trials. The purpose of this study was therefore to empirically test a theoretical model of PWB in MCI in order to inform clinical intervention. Methods: Baseline data from a convenience sample of 100 community-dwelling adults diagnosed with MCI enrolled in the Study of Mental Activity and Regular Training (SMART) trial were collected. A series of regression analyses was performed to develop a reduced model, and hierarchical regression with the Baron and Kenny test of mediation then derived the final three-tiered model of PWB. Results: Significant predictors of PWB were subjective memory concern, cognitive function, evaluations of quality of life, and negative affect, with a final model explaining 61% of the variance of PWB in MCI. Discussion: Our empirical findings support a theoretical tiered model of PWB in MCI and contribute to an understanding of the way in which early subtle cognitive deficits impact upon PWB. Multiple targets and entry points for clinical intervention were identified. These include ameliorating the cognitive difficulties associated with MCI. Additionally, the findings highlight the importance of reducing memory concern and addressing low mood, and suggest that improving a person's quality of life may attenuate the negative effects of depression and anxiety on PWB in this cohort.

Introduction
Dementia is one of the principal causes of disability and decreased life quality among older adults. 1 Coincident with a worldwide acceleration of dementia burden, there has been a sharp rise in quality of life (QoL) research in this field. 2,3 Growing expectations for positive aging amongst older adults and policy concern about the rising costs of age-related health care and institutionalization underlie this trend. 2 In fact, low life quality is a strong predictor of adverse health outcomes such as nursing home placement and death. 4 Consequently, QoL outcomes are now recommended as essential in dementia prevention and intervention research. 5 Despite the universal recognition that QoL is important, no single consensus definition of QoL is available, as definitions vary by theoretical and disciplinary perspectives. 1-3,6 A related but distinct concept that is viewed as a marker of successful aging is psychological well-being (PWB). 7,8 Recent definitions of PWB focus on eudaimonic well-being, which incorporates psychological concepts such as mastery, social connectedness, and self-acceptance. 9 Additionally, research indicates that older adults emphasize PWB, rather than biomedical factors, as they rate well-being as a priority despite the presence of disease and disability. 8 In contrast, QoL often refers to hedonic concepts such as satisfaction with different domains of life, including health, finances, and recreation. 10 Despite these differences, PWB is often not explicitly examined or is subsumed into the generic concept of QoL. 11-17
The terms and constructs PWB and QoL are also frequently applied in research without definition, 15,18,19 with many studies confusing the terms and mixing outcome measures, or simply avoiding defining the terms at all. 15,18 Maintaining life quality is highly relevant for those with neurodegenerative disorders, as there is no effective cure. 20 Jonker et al provided a three-tiered hierarchical model of QoL with PWB as the ultimate focus 17 to improve treatment outcomes in dementia. However, for the purposes of dementia prevention, interventions are increasingly targeting those in the early preclinical stage of the disease, 21 often diagnosed with mild cognitive impairment (MCI). 22 However, the vast majority of studies have focused upon dementia, 20 and no model of PWB has been developed for MCI. Additionally, clinical trials that enroll people with MCI have generally not examined QoL and PWB as outcome variables, with a recent review of cognitive interventions in MCI indicating that only two of 14 trials had QoL outcomes. 23 Therefore, based upon Jonker's 2004 model for Alzheimer's disease, we present a theoretical hierarchical model of PWB in MCI, shown in Figure 1, in order to inform intervention. This theoretical model includes both objective measures of disease and subjective measures of PWB and QoL, consistent with recent recommendations. 24 Level 1 "clinical aspects of disease" (Jonker's term) 17 are operationalized as the diagnostic criteria for MCI: subjective memory concern; mild cognitive impairment assessed on objective cognitive measures; and intact activities of daily living. 22 "Clinical aspects not related to dementia" (Jonker's term) 17 include the covariates age, education, sex, and negative affect. The external environment is defined here as the social network, which has been found to influence health and life quality. 25 Level 2 "evaluation of each domain" (Jonker's term) 17 is operationalized as self-ratings of satisfaction across different aspects of life, including health. Level 3, PWB, as defined above, is similar to the concepts of "positive productive aging" or "successful aging", 8,26 and is the ultimate clinical outcome. It was originally postulated that level 1 factors would be interrelated and have discrete links with level 2 QoL and level 3 PWB, and that changes in clinical aspects of disease would be reflected in changes in evaluations of PWB. 17 However, this model was never empirically tested, and it has been argued that such evaluation of QoL and PWB is necessary to advance our understanding of the field. 27 We therefore hypothesized that:
1. level 1 clinical aspects of MCI, negative affect, and social environment will be interrelated;
2. level 1 factors will be related to the level 2 evaluations of quality of life factors;
3. level 1 and level 2 factors will both be related to PWB; and, importantly,
4. level 2 factors will mediate the relationship between level 1 factors and PWB.

Methods
The data were drawn from the Study of Mental Activity and Regular Training (SMART) trial published by Gates et al. 28

Participants
Participants (N=100) were enrolled in the Sydney SMART trial 28 and were community-dwelling adults aged over 55 years with a diagnosis of MCI: self-reported memory complaint; objective cognitive deficit based on a mini-mental status examination (MMSE) 31 score of 23-29; and no dementia (Clinical Dementia Rating of 0.5 or below). 32
Primary exclusion criteria of the SMART trial were clinical depression, unstable medical conditions, and other progressive neurological diseases. Full inclusion and exclusion details can be found in our published protocol. 28

Measures
For details of the full neuropsychological battery and psychological test instruments, see the SMART protocol. 28 Level 1 clinical aspects of MCI were the three common diagnostic criteria, measured separately. Subjective memory concern can be validly assessed via memory complaint and a person's self-rated capacity to perform daily memory tasks, 29 and both methods were used here. A study-specific questionnaire of seven items relating to the severity of current memory complaints provided a memory complaint score (MCS), and self-rated memory function was assessed with the Memory Awareness Rating Scale-Memory Function Scale (MARS-MFS). 30 Cognitive function was measured with the MMSE, 31 the Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), 32 and multidomain neuropsychological measures. These measures were: the Trail Making Test (TMT); 33 the Symbol Digit Modalities Test (SDMT); 33 the Logical Memory I and II subtests of the Wechsler Memory Scale, 3rd Edition; 34 the three ADAS-Cog memory recall trials, with a total score of correctly recalled words for the assessment of list learning; the Benton Visual Retention Test-Revised, 5th Edition; 35 controlled oral word association (COWAT); category fluency (animal naming); 33 and the Matrices and Similarities subtests of the Wechsler Adult Intelligence Scale, 3rd Edition. 36 Daily function was measured with the Bayer-Activities of Daily Living (B-ADL) scale, 37 a self-rating scale of capacity to perform instrumental activities of daily life. Level 1 nonclinical MCI factors were environment and negative affect. Negative affect was measured with the 15-item Geriatric Depression Scale (GDS) 38,39 and with the Depression Anxiety and Stress Scale 21 (DASS). 40 Satisfaction with the social environment was assessed on the eleven items of the abbreviated Duke Social Support Index (DSSI), 41 providing a satisfaction score regarding the size and structure of respondents' social networks. Level 2 evaluations of QoL involved measuring hedonic aspects of QoL obtained from the 15-item Quality of Life Scale (QOLS), 42 the SF-36v2™, 43 and the Life Satisfaction Scale (LSS). 12 The QOLS measures level of satisfaction across five domains of life: material and physical well-being; relationships with other people; social, community, and civic activities; personal development and fulfillment; and recreation. 42 The Scale of Psychological Well-Being (SPWB) and the QOLS have been differentiated as measuring two distinct constructs. 10 The SF-36v2™ is a clinician-administered scale on which respondents rate eight areas: physical functioning; role functioning; bodily pain; general health; vitality; social functioning; role-emotional functioning; and mental health. The scoring algorithm generates two summary scores: a physical component score (SF-36 PCS) and a mental component score (SF-36 MCS). The LSS is a validated global single-item seven-point delighted-terrible rating scale. 6 Level 3 psychological well-being concerned eudaimonic factors, measured with the 84-item SPWB 13 across six domains (autonomy; environmental mastery; personal growth; positive relations with others; purpose in life; and self-acceptance), with respondents required to rate their level of agreement with each item.
Procedures
All results reported here were derived from baseline data collected before randomization. Sociodemographic and health status data were obtained through face-to-face interviews and assessment using structured interviews and self-report scales.

Statistical analysis
Demographic details are reported as means and standard deviations (SD) for continuous variables and percentages for categorical variables. The distributions of all continuous variables used in the following analyses were examined and, if necessary, transformed to approximate the normal distribution more closely. Preliminary regressions were performed between PWB and all variables within the same level to avoid issues associated with collinearity. Relationships within level 1 variables (clinical aspects of MCI, social support, and negative affect) were analyzed using a series of regression analyses. For these analyses, each independent variable was entered separately into each of the regression models, together with the control variables age, sex, and education. Negative affect was entered as a control variable where appropriate to the particular analysis. Multivariate regression analyses were used to examine significant level 1 and level 2 predictors of SPWB and significant level 1 predictors of level 2 variables. Model reduction was carried out using the backwards elimination method, with the P-value for item removal set at 0.10. Regression using the stepwise procedure was performed to isolate the relative contribution of each level in the hierarchical model. Potential mediation of the effects of level 1 variables on SPWB by level 2 variables was examined using the method described by Baron and Kenny. 44 All data analyses were performed using IBM SPSS version 21 (IBM Corporation, Armonk, NY, USA). A P-value of <0.05 was considered indicative of statistical significance.

Results
There are no missing values in any table, as a full data set was acquired. Demographic information is shown in Table 1. The sample had a mean age of 70 years, was predominantly female (68%), and all participants had completed secondary schooling (mean 13.4 years of schooling). Mild cognitive impairment was evident (mean MMSE 27.47, SD 1.46; mean Clinical Dementia Rating 0.14, SD 0.22). Descriptive statistics for the measures of clinical aspects of MCI, negative affect, social support, evaluations of QoL, and PWB are presented in Table 2. Scores on the DASS-depression and DASS-anxiety scales were transformed with a square-root transformation prior to analyses, due to non-normality of the distributions. Memory complaints (mean 2.84, SD 1.37) and intact capacity to perform activities of daily living (mean

The relationships between clinical aspects of MCI and other level 1 variables
A series of regression analyses with the memory concern variables as dependent variables (DVs) on each cognitive variable and B-ADL as an independent variable (IV), and B-ADL (DV) on each cognitive variable (IV), is presented in Table S1. The results of regression analyses for the effects of clinical aspects of MCI on social support and negative affect variables are presented in Table S2. Each of the clinical aspects of MCI variables was entered singly into each regression equation, with age, sex, and education included in each analysis as control variables. Higher education was related to fewer DASS-depression symptoms (β −0.22, P<0.05) and DASS-anxiety symptoms (β −0.27, P<0.05), but age and sex were not related to negative affect scores.
Generally, both subjective and objective cognitive function and negative affect were inversely related, as anticipated. After controlling for demographics, increasing memory complaint (MCS) was significantly associated with more depressive symptoms on the GDS (β 0.29, P<0.005). Better self-rated memory function (MARS-MFS) was significantly associated with fewer depressive symptoms (GDS: β −0.29, P<0.005; DASS-depression: β −0.25, P<0.05) and less stress (β −0.25, P<0.05). Worse ADAS-Cog performance was related to more depressive symptoms on the GDS (β 0.28, P<0.005) and greater stress (DASS-stress: β 0.34, P<0.05). Higher delayed memory on Logical Memory II was associated with lower depressive symptoms (GDS: β −0.22).

Hierarchical model: relationships between levels 1, 2, and 3
Regression of level 3 SPWB (DV) onto each single independent variable, and multivariate regression of SPWB on all level 1 variables (clinical aspects of MCI, mood, social support) and on both level 1 and level 2 evaluations of life quality variables combined, are presented in Table 3. In the reduced models, ratings of SPWB were not significantly associated with any demographic variable. The final set of significant predictors of SPWB, with both level 1 and 2 variables as IVs, were level 1 objective cognitive function, self-rated memory function, and negative affect, and level 2 evaluations of QoL. Better SPWB was associated with lower verbal fluency (COWAT: β −0.23, P<0.00) but better Matrices performance, and with the LSS (β −0.10, P<0.1). The final reduced regression model of SPWB, obtained using the backwards elimination procedure, is shown in Table 3. Hierarchical regression analysis, using the variables in the reduced model shown in Table 3, was performed to determine the contribution of the clinical aspects of MCI, after controlling for negative affect, and the contribution of the evaluation of QoL variables in addition to all other variables in the model. Depression and anxiety were entered first, followed by clinical aspects of MCI (memory function and cognitive function), and then evaluations of QoL at step 3. The full model explained more than half the variance of SPWB (total R2 = 0.61; P<0.001). Memory concern and cognitive function made an additional significant contribution to the explanation of variance in SPWB (R2 change = 0.17; P<0.001) above the contribution of depression and anxiety. Evaluations of QOLS and SF-36 PCS were entered last and contributed an additional R2 of 0.19 (P<0.001), indicating that level 2 evaluations of QoL predict additional variance in SPWB above level 1. A depiction of the final regression model is shown in Table 4.

Mediation
Applying Baron and Kenny's criteria for mediation, 44 only two level 2 variables (QOLS and SF-36 PCS) had statistically significant effects on SPWB when all variables were entered (see Table 3), and could potentially be included in mediation analyses. Two separate linear regressions were conducted with QOLS and SF-36 PCS as DVs and level 1 variables as IVs, providing reduced models following backwards elimination, shown in Table S3. Next, only those level 1 variables that had statistically significant effects on SPWB (without QOLS and SF-36 PCS in the equation [Table 3]), and on QOLS and SF-36 PCS (Table S3), satisfied Baron and Kenny's criteria for inclusion.
Inspection of the results in Tables 3 and S3 indicates that DASS-anxiety and DASS-depression, and social support, satisfied these criteria.

Discussion
This study provides empirical support for a hierarchical model of PWB in MCI that explains 61% of the variance as measured here. The hypotheses that level 1 clinical aspects of MCI, depression, and social support would be interrelated, and that those primary aspects would influence secondary evaluations of life quality, were also supported. Further, the postulate that tertiary-level PWB would be significantly influenced by primary MCI and depression, as well as by secondary evaluations of QoL, was supported. Moreover, our hypothesis that evaluations of QoL would mediate the effect of lower-level 1 variables on PWB was partially supported. Results obtained here are hence consistent with findings from studies of older adults that suggest that lower QoL and well-being are associated with higher memory concern, 29,45 cognitive deficits, 20 and negative affect. 46 However, unlike those studies, analyses here examined clinical aspects of MCI, negative affect, and QoL within the one cohort, thus testing a more comprehensive model in a clinically relevant sample. Our examination of clinical aspects of MCI indicated that memory concern, in the form of complaints and self-rated memory function, was significantly associated with cognitive and daily function, independently of negative affect. Memory concern was also significantly linked to low satisfaction with social support and greater depressive symptoms. With the exception of cognitive and daily functions, which were not associated, all other level 1 factors were significantly linked together. These findings therefore provide some support for Jonker's conceptualization 17 of primary disease and nondisease factors relevant to PWB. Our findings also align with research showing that memory concerns are indeed reported by individuals with objective evidence of cognitive decline. 47,48 Many studies have not reported significant associations between memory concern and cognitive function, finding rather that memory concern is related to psychopathology, personality traits such as neuroticism, and negative cognitive bias. 29 In our study, however, the association between memory concern and cognition was independent of the links between memory complaint and self-rated memory function, and negative affect. Lower cognitive performance in our MCI sample was associated with higher levels of negative affect, consistent with previous MCI research. 49 Negative affect levels were exceedingly low; nonetheless, mixed associations between cognition and negative affect were evident depending upon the scale. This highlights a previously known issue: that different depression scales can provide conflicting results. 50 Moreover, the nature of any causal links between cognitive function and depression is controversial. In healthy adults, some research has suggested that perceived deterioration in memory may lead to anxiety, possibly fear about developing dementia, with depression as a natural response. 51 Hence, poor cognitive function may lead to depression. Other research indicates a concurrent incidence of MCI and depressive symptoms. Therefore, another possibility is that they are comorbid conditions, and the presence of hippocampal atrophy in both cognitive impairment and depression suggests a common biological process as well. Yet another alternative is that depression is a significant risk factor for subsequent cognitive deterioration. 49,52
At this time, there is no consensus in this area. However, given that negative affect was here independently associated with cognitive function, social support, memory concern, and PWB, understanding and identifying negative affect in MCI represents a valuable potential treatment target. Level 1 memory concern, cognitive function, capacity to complete daily activities, social support, and negative affect were significantly related to level 2 evaluations of QoL. Specifically, results here indicate that satisfaction with social support has a large and significant positive association with QoL, consistent with previous research. 25 The association between memory concern and QoL is controversial because of the role of depression and negative cognitive bias or "affective distortion". 46 In contrast, negative affect, specifically anxiety and depression, was in this study significantly linked to lower evaluations of QoL; a finding entirely consistent with previous research. For example, a review of Health-Related Quality of Life (HR-QoL) in dementia indicated that decreased QoL was consistently associated with depression. 53 Similarly, evaluations of QoL have been found to deteriorate with depression in clinical settings. 54 Thus, findings here suggest complex interrelationships similar to those reported in a previous study of community-dwelling elderly, which found that increasing severity of memory concern was associated with multiple factors including poor social network, negative age stereotyping, and depression. 55 Within this study, cognitive function was also related to QoL. In mild dementia, a lower level of cognition is linked to lower health-related QoL. 56 However, different cognitive functions had different relationships with each QoL outcome. A review examining the influence of specific cognitive functions on HR-QoL in neurological disease similarly identified differential impacts. 57 A total of 92% of participants in this study were identified as having nonamnestic MCI. The cognitive deficits and risk profiles associated with nonamnestic MCI are heterogeneous, giving rise to the subtyping of MCI. 58 Therefore, it is plausible that the various cognitive deficits in this cohort are differentially associated with QoL, and further research is required. Finally, analysis of level 3 PWB indicated that primary-level clinical aspects of MCI and depression, and level 2 evaluations of QoL, were both predictive of PWB. Subjective memory concern was directly linked to PWB after controlling for depression. This result is consistent with several previous studies; 29,45 however, conflicting results have also been noted. 59 Results here suggest that subjective level of complaint and self-ratings of memory function have differential impact upon PWB. Inconclusive and equivocal findings regarding QoL and memory concern may therefore, in part, reflect different assessment approaches for subjective memory concern 60 and QoL. 5 Recommendations have been made to examine the subdivisions of subjective cognitive complaint, 21 and this is supported here. Consequently, two practical clinical conclusions to draw from this study are, first, that measuring self-rated memory function in daily life may be more useful in understanding PWB than focusing on the presence or absence of complaint, and, second, that improving a person's memory function through external or internal aids may significantly improve PWB, potentially due to an increased sense of self-efficacy.
The lack of consensus regarding the etiology of memory concern, 61 and its association with psychopathology, has reinforced the notion that concern is purely subjective. As a result, individuals with complaints may be dismissed by health professionals as the "worried well". 62 However, as findings here suggest, concern may be linked to subtle cognitive difficulties and reduced daily function, and it is possible that adults perceive such changes when traditional cognitive measures are insensitive. 63 Furthermore, results from this study suggest that memory concerns in MCI are linked to lower ratings of QoL and PWB. In healthy adults, perceived deterioration in memory has been found to lead to anxiety. 51 Here, anxiety was linked to reduced psychological well-being. Consequently, this evidence supports a general recommendation that memory concerns should not be underestimated in clinical settings. 64 Clinicians' responses could focus on adjusting expectations, psychoeducation to alleviate anxiety, and practical strategies to minimize the impact of cognitive deficits. Our results also indicate that early subtle cognitive deficits are directly and significantly associated with PWB after controlling for depression. These results are consistent with findings from a systematic review that identified that even a mild deterioration in cognition has significant psychological impact. However, unlike previous research, results here also suggest that the negative impact of mild depressive and anxiety symptoms on PWB is mediated by evaluations of life quality across multiple domains. This finding may also have clinical implications. By improving aspects of a person's life, such as increasing social support, recreational pursuits, and community access, it is possible that the deleterious impact of depression and anxiety on PWB may be mitigated. Previous research indicates that frailty and cognitive deficits determine QoL in MCI and early-stage dementia. Therefore, not surprisingly, lower cognitive function was significantly associated with compromised physical QoL measured on the SF-36 PCS in this study. Contrary to expectation, in the multivariate model high SF-36 PCS was related to a significant reduction in psychological well-being, but mediated the impact of memory concern, cognitive difficulties, and negative affect on psychological well-being. This study has a number of limitations due to methodological constraints. The cross-sectional nature of the study restricts the extent to which causal inferences can be made. The sample was small, comprising only 100 MCI individuals with subtle cognitive deficits (mean MMSE 27.4) who were motivated enough to volunteer for the study, and, thus, their wider representativeness is not clear. Given this MCI profile, another possible limitation is the evolving nature of the MCI diagnosis. 65 The impact of more severe memory deficits associated with amnestic MCI was also not examined, and so results here may underestimate the burden of disability and reduced PWB in this group. In addition, as this was a convenience sample with psychopathology excluded, the associations between clinical depression and all factors, particularly PWB, could not be examined. Lastly, the sample had completed a high level of education, and, within this range, lower education was significantly related to lower mood. Therefore, results may not necessarily apply to less educated individuals.
Individuals with MCI encounter various unique practical and emotional difficulties. In this study, we formally tested a hierarchical model in which clinical aspects of disease influence QoL, which in turn influences PWB. We found that clinical aspects of MCI were significantly associated with reduced PWB, whilst high QoL was associated with high PWB. Results here indicate that the PWB of individuals with even the subtlest of cognitive changes may be at risk. Intervention targeting emotional, functional, and social factors, in addition to cognitive health, may optimize PWB outcomes, and, ideally, such treatment should commence when individuals first present with concerns about their memory function.
EXPERIENCES OF METH-DEPENDENT PATIENTS UNDERGOING CONTINGENCY MANAGEMENT (CM) THERAPY
This study explored the experiences of meth-dependent patients to gain insight into their thoughts and feelings on the Contingency Management (CM) intervention used during their three-month therapeutic session at a rehabilitation center in Dengkil, Malaysia. This basic qualitative research interviewed seven participants who had just finished undergoing the CM intervention process as the main therapeutic approach. The results indicate eight major themes from three research questions: (a) increased strength to turn their life around, (b) provide positive feelings, (c) application of knowledge, (d) reward as an afterthought, (e) realization of correcting past mistakes, (f) continuous encouragement to change, (g) more confidence in their ability to change, and (h) happy seeing positive rewards to their actions. Results of the study indicate that participants found CM to be successful in their recovery process from meth dependence, especially in strengthening their intrinsic motivation. Patients also feel that the reward-based system used in CM has been beneficial in making them feel happier and in realizing the past mistakes that they have made.

Introduction
The treatment and rehabilitation of substance use disorder is a challenge for all, especially for clinicians, psychologists, counsellors, psychiatrists, and social workers. Many approaches have been introduced to help treat addiction, such as the use of substitution drugs (Kirby and Lamb, 1995), a military-style approach (Mahmood, Shuaib, Lasimon, Rusli & Md. Zahir, 1999), homeopathy, spiritual healing and purification (Mahmood, Shauib & Ismail, 1998), behaviour modification (Monty & Rohsenow, 1997), psychological rehabilitation (Calaghan, Benton, Bradley, 1995), psychotherapy (Curran, Helene & Stephen, 2000), and a variety of other approaches. The latest approaches include medication-assisted therapy, or pharmacology, and other psychosocial best-practice approaches. Medicine has also been used to reduce harm through drug substitution therapy using methadone. Drug addiction has become a menace in Malaysian society. According to the National Anti-Drug Agency (NADA, 2015), drug addiction cases reached 26,668 in 2015, a worrying trend that has steadily risen over the years. However, in discussing the problem of drug addiction, focus should be on the behaviour modification aspect of an addict. Drug addiction is a disease that has an impact on the biopsychosocial functioning of an individual. To help treat an addict, the behavioural element is the most crucial yet most difficult part to treat, rather than the physical or biological element, which can generally be treated using a medical approach. According to most studies, drug abuse is a disease resulting from a process of learning and reinforcement of a response, as stated in Operant Conditioning Theory and Classic Behavioural Theory (Carroll et al., 2005). To treat the behaviour, the behaviour modification approach is needed. The approach takes various forms, beginning with psychotherapy-based approaches such as Cognitive-Behavioural Therapy (CBT), Motivational Enhancement Therapy (MET), Couple or Family Therapy, and Contingency Management (CM) therapy. These approaches attempt to change the behaviour of a drug addict and build the ability to manage stress and conflict productively.
Addicts will be trained to say "no" to drugs, hate drugs, find a way to replace drugs, build a defence wall against drugs, and manage all stress and pressure without the use of drugs. The process of recovering from this behaviour often touches on aspects of self-esteem and self-concept, immature personality, and unproductive defence mechanisms used by the individual (Mahmood et al., 1998). CM interventions (also known as motivational incentives) are designed based on the principle of establishing a behaviour using reward or, sometimes, punishment. The procedure originated from the token economy approach developed in the United States over 40 years ago for prisoners and detainees, which is still being used to date. According to Higgins (2008), CM has shown impressive levels of efficacy across a wide range of Substance Use Disorders (SUDs) (Higgins et al., 2008). Many researchers have agreed that CM has a high level of efficacy in the treatment and rehabilitation of substance use disorders (Davis et al., 2016). Determination of the appropriate standard magnitude of reinforcer is important to gauge the efficacy of the rewards being given (Petry, Alessi, Barry & Carroll, 2015). Behaviour modification is important because most substance abusers were put in rehabilitation centres across Malaysia due to court orders, which makes their self-motivation to change very low (Ting Chie, Lian Tam, Bonn, Minh Dang, & Khairuddin, 2016). CM treatment rearranges the environment to directly detect drug use and encourage client participation in activities that promote recovery. This treatment provides a clear reinforcement or reward for proof of abstinence and a commitment to drug-free activities (Higgins et al., 1994; Petry, 2000). In most CM studies, reinforcement is given in the form of vouchers that can be exchanged for daily goods and services (Higgins et al., 1991, 1993, 1994, 2000a). To make CM treatment even more attractive to stakeholders, innovative ideas such as making it cost-effective are important to ensure that its implementation can be done seamlessly in a society (Rash, Stitzer, & Weinstock, 2016). This study focuses on understanding the experiences of meth-dependent patients who went through a CM intervention in their recovery process. The study focuses on understanding what they learned throughout their three-month therapy session, including changes that occurred to their intrinsic motivation.

Purpose of the Study
The purpose of this study was to explore the experiences of meth-dependent patients undergoing CM therapy. This research focused on understanding the lived experiences of patients who are in the recovery process from meth dependency by looking at what they gained from their three-month experience in CM therapy. The three research questions were:
• What are the experiences of meth-dependent patients undergoing Contingency Management (CM) Therapy?
• What changes occur to their intrinsic motivation after undergoing Contingency Management (CM) Therapy?
• How do they feel after undergoing Contingency Management (CM) Therapy?

Research Design
This qualitative study employed a basic qualitative approach and focuses on the meaning of the participants' experiences as members of a CM intervention group, or in Merriam's (2009) words "the essence of the meaning of the interaction" (p. 3).
This qualitative approach provides a unique perspective on the participants in the research (Creswell, 2007) within the setting and culture that they are in (Ary, Jacobs & Sorensen, 2010). This study is part of a larger study conducted to examine the efficacy of CM. The larger study consists of an experimental study using both CM and treatment-as-usual (TAU) to compare the efficacy of CM with TAU. Both groups were given their respective interventions for a period of three months. This study focuses on the experiences of seven of the participants in the study, all of whom were randomly chosen by the researchers. The interviews were conducted one month after the CM group sessions ended.

Participants
Seven participants were interviewed in the study. The participants were interviewed once, with interviews ranging from 37 minutes to 58 minutes. The inclusion criteria for the participants were: (1) having undergone a three-month Contingency Management (CM) intervention at a drug rehabilitation centre, (2) the ability to articulate experiences, and (3) willingness to participate in the research. A purposeful sampling procedure was used to find participants who matched the above characteristics. Although there has been no real consensus about the exact number needed for a qualitative study, Boyd (2001) has suggested that any number of participants from two to ten is good enough to reach a point of saturation. The researchers interviewed seven participants for the study, and it was widely agreed among all researchers in this study that the point of saturation was already reached at six participants.

Research Question One: What Are the Experiences of Meth-Dependent Patients Undergoing Contingency Management (CM) Therapy?
The qualitative findings indicate three themes in this category: (a) increased strength to turn their life around, (b) provide positive feelings, and (c) application of knowledge.

Theme One: Increased strength to turn their life around
Five of the participants mentioned that they now find more strength within themselves to turn their life around after undergoing CM. The major reason given is that CM gave them the strength to achieve what they previously felt was not possible. For example, one of the participants mentioned that he had felt it would be hard to move away from his meth dependence, but that he now feels nothing is impossible.
With support, I can do it. Just, sometimes I don't have support (from family and strength). This session is where I learn new knowledge to help myself.
Meanwhile, another participant talked about how he felt valued after being rewarded for his positive actions during the course of therapy. However, he also mentioned that, after some time, it was not the reward itself that he was after but rather the strength that he found within himself.
I still feel OK if there is no reward, but it does provide me with some extra strength to do more. You know, it's like working and you get a salary, it will give you the extra push.

Theme Two: Provide positive feelings
Six of the participants shared that they had positive feelings when attending their therapy sessions. Participants stated that the positive feelings came from looking forward to attending the sessions, as they had the goal of completing the tasks given by their rehabilitation officers.
Besides that, participants also shared positive feelings such as happiness, gratitude, and enjoyment during their sessions. For example, one of the participants shared that he had felt apprehensive at first about being rewarded for completing certain tasks or objectives, but after being given the reward he felt grateful and happy.
I feel happy even after being given homework, I don't feel stressed out by the homework, I look forward to completing the task being given.

Theme Three: Application of knowledge
Five of the participants mentioned that they feel more confident about being able to apply the knowledge they have learned in their classes after doing CM. This is because they feel that they will be rewarded for their good behaviours if they continue them after leaving the rehabilitation center. One of the participants stated that the reward-based system made him realize that he can still change despite his previous addiction to meth.
Now that I am here (rehabilitation center), I know that I have to apply all the knowledge that I have learned. That if I act well, I will be rewarded for my actions, my future actions.
Meanwhile, another participant stated that learning was helped by the use of rewards in CM.
To get the reward, we work harder. At night, we would help each other in doing homework. So, this way we learn more about the topic that the teacher (rehabilitation officers) is teaching.

Research Question Two: What Changes Occur to Their Intrinsic Motivation after Undergoing Contingency Management (CM) Therapy?
The qualitative findings indicate three themes for this category: (a) reward as an afterthought, (b) realization of correcting past mistakes, and (c) continuous encouragement to change.

Theme One: Reward as an afterthought
All participants mentioned that while the reward itself is helpful as an added incentive, they were not really after the reward itself; rather, what they learned during their classes was more important. For them, reward is just an afterthought, and their intrinsic motivation to change increased regardless of the rewards. For example, one of the participants stated that he did not really seek the reward after some time and realized that the ultimate reward would be to stop taking meth once he finishes his time at the rehabilitation center.
For me, it's not a problem (with no reward). Even if I don't get any reward it is fine. (But) the knowledge that I learned, that is more important so that I stop (taking meth).

Theme Two: Realization of correcting past mistakes
Four of the participants mentioned that they realized their past mistakes during their CM therapy sessions. They reported that their motivation to change increased because they realized the problematic life they had lived previously and want to stay away from meth from now on. This gives them strength within themselves to change their lifestyle and behaviour after leaving the rehabilitation center. For example, one of the participants stated that he has found an inner strength to correct the past mistakes he has made. He stated that he needs to think not just about himself but also about others around him.
When I look at my family, my brother (who takes care of me), I realize my errors. I know I have not done well, (so) I want to correct the mistakes. I know I can (beat addiction).
Another participant also discussed the effect that the CM intervention had on his intention to move forward with his life.
Being here (rehabilitation center) is new (for me).

Theme Three: Continuous encouragement to change
The intervention gave participants extra encouragement to change their addiction patterns. Five of the participants mentioned that the continuous support given by their peers and rehabilitation officers provided them with further encouragement that they can change. Encouragement is provided through the rewards being given, which gives them extra intrinsic motivation to complete the tasks assigned to them. For example, one of the participants mentioned seeing the reward as an encouragement that he could strive for.
It feels good to be given (reward). It's like when you are fasting when you are young, your parents will give you reward after Ramadan. That gives me that encouragement to do more.
Another participant mentioned that the continuous encouragement helped him to focus on improving himself as an individual.
I am just a human, so I make mistakes. But, the most I learned here is to never give up on being a better person. God willing, this (CM intervention) has helped.

Research Question Three: How Do They Feel After Undergoing Contingency Management (CM) Therapy?
The qualitative findings indicate two themes for this category: (a) more confidence in their ability to change, and (b) happy seeing positive rewards to their actions.

Theme One: More confidence in their ability to change
All participants stated that they feel more confident in their own recovery process in dealing with meth addiction after undergoing CM therapy. This comes after being satisfied with the CM intervention used throughout the three-month period. They feel that they learned a lot throughout their time in the group and understand that good behaviour will be equally rewarded in their daily life. One of the participants stated that he understood the concept of CM and looks forward to applying it to his daily life. Another participant mentioned that the reward is just a tool, but that the activities have helped him to be more confident in his ability. Another participant shared that the reward was an important part of him doing well in the CM intervention process and made him feel happier in the process.
For me, it's different from the other help usually given (types of therapy). I hope it continues, maybe it will be beneficial for others.

Conclusion
Findings of the study supported previous research indicating that CM is beneficial for patients struggling with drug addiction. Higgins et al. (1994) stated that rewarding patients for proof of their abstinence is beneficial for their recovery process, and this study further solidifies that statement. From the research, it was found that CM managed to improve participants' intrinsic motivation in their recovery process. The knowledge that they learned in their therapy group was aided by the use of CM, especially in improving their confidence in beating their meth addiction, and in increasing their strength and resilience to change.
First and Second Order Statistics Features for Classification of Magnetic Resonance Brain Images
In the literature, features based on First and Second Order Statistics that characterize textures are used for classification of images. Features based on statistics of texture provide a far smaller number of relevant and distinguishable features in comparison to existing methods based on wavelet transformation. In this paper, we investigated the performance of texture-based features in comparison to wavelet-based features, with commonly used classifiers, for the classification of Alzheimer's disease based on T2-weighted MRI brain images. The performance is evaluated in terms of sensitivity, specificity, accuracy, and training and testing time. Experiments are performed on publicly available medical brain images. Experimental results show that the performance with First and Second Order Statistics based features is significantly better in comparison to existing methods based on wavelet transformation in terms of all performance measures for all classifiers.

Introduction
Alzheimer's disease is a form of dementia that causes mental disorder and disturbances in brain functions such as language, memory skills, and perception of reality, time, and space. The World Health Organization [1] and the National Institute on Aging (NIA) [2] have highlighted that its early and accurate diagnosis can help in its appropriate treatment. One of the most popular ways for a physician to diagnose Alzheimer's is a neuropsychological test like the Mini Mental State Examination (MMSE), which tests memory and language abilities. But the problem with this approach is that it is subjective, human-biased, and sometimes does not give accurate results [3]. In Alzheimer's disease, the hippocampus, located in the medial temporal lobe of the brain, is one of the first regions of the brain to suffer damage [4-6]. The research works [7-10] have found that the rate of volume loss over a certain period of time within the medial temporal lobe is a potential diagnostic marker of Alzheimer's disease. Moreover, lateral ventricles are on average larger in patients with Alzheimer's disease. Holodny et al. [11] measured the volume of the lateral ventricles for its diagnosis. The Alzheimer's Association Neuroimaging Workgroup [12] emphasized image analysis techniques for diagnosing Alzheimer's. Among various imaging modalities, Magnetic Resonance Imaging (MRI) is most preferred, as it is a non-invasive technique, free of radiation side effects, suitable for internal study of the human brain, and provides better information about soft-tissue anatomy. However, there is a huge MRI repository, which makes the task of manual interpretation difficult. Hence, computer-aided analysis and diagnosis of MRI brain images have become an important area of research in recent years.
For proper analysis of these images, it is essential to extract a set of discriminative features which provide better classification of MRI images. In the literature, various feature extraction methods have been proposed, such as Independent Component Analysis [13], the Fourier Transform [14], the Wavelet Transform [15,16], and texture-based features [17-19]. It is a well-known fact that the Fourier transform is useful for extracting the frequency content of a signal; however, it cannot be used for accurately analyzing both time and frequency content simultaneously. In order to overcome this, wavelet analysis was proposed, which analyzes time information accurately with the use of a fixed-size window. With the use of variable-sized windows, it captures both low-frequency and high-frequency information accurately. For the classification of Alzheimer's disease, Chaplot et al. [15] used the Daubechies-4 wavelet at level 2 for the extraction of features from MRI. Dahshan et al. [16] pointed out that the features extracted using the Daubechies-4 wavelet were too numerous and may not be suitable for classification. That work used the Haar wavelet at level 3 for feature extraction and further reduced the features using Principal Component Analysis (PCA) [20] before classification. Though PCA reduces the dimension of the feature vector, it has the following disadvantages: 1) interpretation of results obtained with the transformed feature vector becomes a non-trivial task, which limits their usability; 2) the scatter matrix, which is maximized by the PCA transformation, not only maximizes the between-class scatter that is useful for classification, but also maximizes the within-class scatter that is not desirable for classification; 3) the PCA transformation requires huge computation time for high-dimensional datasets. In the literature [17,18], features based on First and Second Order Statistics that characterize textures are also used for classification of images. Features based on statistics of texture give a far smaller number of relevant, non-redundant, interpretable, and distinguishable features in comparison to features extracted using the DWT. Motivated by this, in our proposed method we use First and Second Order Statistics for feature extraction. In this paper, we investigated the performance of First and Second Order Statistics based features in comparison to wavelet-based features. Since the classification accuracy of a decision system also depends on the choice of classifier, we have used the most commonly and widely used classifiers for the classification of MRI brain images. The performance is evaluated in terms of sensitivity, specificity, accuracy, and training and testing time. The rest of the paper is organized as follows. A brief description of the wavelet transform and of First and Second Order Statistics is given in Sections 2 and 3 respectively. Section 4 presents the experimental setup and results. Finally, conclusions and future directions are included in Section 5.
Wavelet Transform
The feature extraction stage is one of the important components in any pattern recognition system. The performance of a classifier depends directly on the choice of the feature extraction and feature selection methods employed on the data. The feature extraction stage is designed to obtain a compact, non-redundant, and meaningful representation of the observations. It is achieved by removing redundant and irrelevant information from the data. These features are used by the classifier to classify the data. It is assumed that a classifier that uses a smaller set of relevant features will provide better accuracy and require less memory, which is desirable for any real-time system. Besides increasing accuracy, feature extraction also improves the computational speed of the classifier.

In the Continuous Wavelet Transform (CWT), a signal $x(t)$ is projected onto scaled and shifted versions of a mother wavelet $\psi$:

$W(s, \tau) = \frac{1}{\sqrt{|s|}} \int x(t)\, \psi^{*}\!\left(\frac{t-\tau}{s}\right) dt$

where $s$ and $\tau$ are the scale and translation coefficients respectively. The Discrete Wavelet Transform (DWT) is derived from the CWT and is suitable for the analysis of images. Its advantage is that a discrete set of scales and shifts is used, which provides sufficient information and offers a high reduction in computation time [21]. The scale parameter $s$ is discretized on a logarithmic grid, and the translation parameter $\tau$ is then discretized with respect to the scale parameter. The discretized scale and translation parameters are given by $s = 2^{-m}$ and $\tau = n\,2^{-m}$, where $m$ and $n$ are positive integers. Thus, the family of wavelet functions is represented by

$\psi_{m,n}(t) = 2^{m/2}\, \psi(2^{m} t - n)$

The DWT decomposes a signal $x[n]$ into approximation (low-frequency) components and detail (high-frequency) components, using wavelet and scaling functions to perform multi-resolution analysis, and is given as

$x[n] = \sum_{k} d_{I,k}\, h_{I}[n - 2^{I}k] + \sum_{i=1}^{I} \sum_{k} c_{i,k}\, g_{i}[n - 2^{i}k]$

where $c_{i,k}$, $i = 1, \ldots, I$, are the wavelet coefficients and $d_{I,k}$ are the scaling coefficients. The wavelet and the scaling coefficients are given by

$c_{i,k} = \sum_{n} x[n]\, g_{i}[n - 2^{i}k], \qquad d_{I,k} = \sum_{n} x[n]\, h_{I}[n - 2^{I}k]$

where $g_{i}[n - 2^{i}k]$ and $h_{I}[n - 2^{I}k]$ represent the discrete wavelet and scaling sequences respectively. The DWT of a two-dimensional image $x[m, n]$ can be similarly defined by applying the transform to each dimension separately. This allows an image $I$ to be decomposed into a pyramidal structure with an approximation component ($I_a$) and detail components ($I_h$, $I_v$, and $I_d$) [22]. The image $I$ in terms of the first-level approximation and detail components is given by

$I = I_{a}^{1} + I_{h}^{1} + I_{v}^{1} + I_{d}^{1}$

If the process is repeated up to $N$ levels, the image $I$ can be written in terms of the $N$th approximation component ($I_{a}^{N}$) and the detail components as

$I = I_{a}^{N} + \sum_{i=1}^{N} \left( I_{h}^{i} + I_{v}^{i} + I_{d}^{i} \right)$

Figure 1 shows the process of an image $I$ being decomposed into approximation and detail components up to level 3. As the level of decomposition is increased, a more compact but coarser approximation of the image is obtained. Thus, wavelets provide a simple hierarchical framework for better interpretation of the image information [23]. The mother wavelet is the compressed and localized basis of a wavelet transform. Chaplot et al. [15] employed a level 2 decomposition of MRI brain images using the Daubechies-4 mother wavelet and constructed a 4761-dimensional feature vector from the approximation part for the classification of two types of MRI brain images, i.e., images from AD patients and from normal persons.
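As a concrete illustration of the pyramidal decomposition just described, the following is a minimal Python sketch using the PyWavelets package (an assumption: the original experiments were run with MATLAB's Wavelet Toolbox, not PyWavelets). The random array stands in for a 256 × 256 T2-weighted slice.

```python
# Level-3 2D DWT pyramid; 'haar' follows Dahshan et al., 'db4' Chaplot et al.
import numpy as np
import pywt

img = np.random.rand(256, 256)  # placeholder for a 256x256 MRI slice

# wavedec2 returns [I_a^3, (I_h^3, I_v^3, I_d^3), ..., (I_h^1, I_v^1, I_d^1)]
coeffs = pywt.wavedec2(img, wavelet="haar", level=3)
approx = coeffs[0]                     # coarsest approximation I_a^3
details = coeffs[1:]                   # detail triplets, coarse to fine

print("I_a^3 shape:", approx.shape)    # 32x32 for a 256x256 input
for lvl, (h, v, d) in zip(range(3, 0, -1), details):
    print(f"level {lvl} details:", h.shape, v.shape, d.shape)

# Flattening the approximation yields the wavelet feature vector whose large
# size (1024 values here) motivates the FSStat alternative in this paper.
features = approx.ravel()
print("feature vector length:", features.size)
```

Note how the 32 × 32 level-3 approximation already produces 1024 features, matching the dimensionality that Dahshan et al. subsequently reduced with PCA.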
Dahshan et al. [16] pointed out that the number of features extracted using the Daubechies-4 wavelet was too large and may not be suitable for classification. In their proposed method, they extracted 1024 features using a level 3 decomposition of the image with the Haar wavelet and further reduced the features using PCA which, as discussed above, reduces the dimension of the feature vector but suffers from the interpretability, within-class scatter, and computation-time drawbacks. Hence, there is a need to construct a smaller set of features which are relevant, non-redundant, and interpretable, and which help in distinguishing two or more kinds of MRI images. This will also improve the performance of the decision system in terms of computation time. In the literature [17,18], First and Second Order Statistics based features have been constructed which provide a smaller set of relevant and non-redundant features for texture classification.

Features Based on First and Second Order Statistics
The texture of an image region is determined by the way the gray levels are distributed over the pixels in the region. Although there is no clear definition of "texture" in the literature, it often describes whether an image looks fine or coarse, smooth or irregular, homogeneous or inhomogeneous, etc. The features described below quantify properties of an image region by exploiting the spatial relations underlying the gray-level distribution of a given image.

First-Order Statistics
Let the random variable $I$ represent the gray levels of an image region. The first-order histogram $P(I)$ is defined as:

$P(I) = \frac{\text{number of pixels with gray level } I}{\text{total number of pixels in the region}}$

Based on the definition of $P(I)$, the mean $m_1$ and the central moments $\mu_k$ of $I$ are given by

$m_1 = \sum_{I=0}^{N_g - 1} I\, P(I), \qquad \mu_k = \sum_{I=0}^{N_g - 1} (I - m_1)^{k}\, P(I)$

where $N_g$ is the number of possible gray levels. The most frequently used central moments are the Variance, Skewness, and Kurtosis, given by $\mu_2$, $\mu_3$, and $\mu_4$ respectively. The Variance is a measure of the histogram width that quantifies the deviation of gray levels from the mean. Skewness is a measure of the degree of histogram asymmetry around the mean, and Kurtosis is a measure of the histogram sharpness.
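A short sketch of the four first-order features follows, computed directly from the normalized gray-level histogram per the definitions above; it assumes an 8-bit image ($N_g = 256$) and, following the paper's definitions, reports skewness and kurtosis as the raw central moments $\mu_3$ and $\mu_4$ rather than their standardized forms.

```python
# First-order texture features from the normalized histogram P(I).
import numpy as np

def first_order_features(img, n_gray=256):
    hist, _ = np.histogram(img.ravel(), bins=n_gray, range=(0, n_gray))
    p = hist / hist.sum()                       # P(I)
    levels = np.arange(n_gray)
    m1 = np.sum(levels * p)                     # mean
    mu = lambda k: np.sum((levels - m1) ** k * p)
    return m1, mu(2), mu(3), mu(4)              # mean, variance, mu3, mu4

img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # placeholder slice
print(first_order_features(img))
```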
Second-Order Statistics
The features generated from the first-order statistics provide information related to the gray-level distribution of the image. However, they do not give any information about the relative positions of the various gray levels within the image. These features cannot measure whether all low-value gray levels are positioned together or are interchanged with the high-value gray levels. The occurrence of a gray-level configuration can be described by a matrix of relative frequencies $P_{\theta,d}(I_1, I_2)$, which describes how frequently two pixels with gray levels $I_1$ and $I_2$ appear in the window separated by a distance $d$ in direction $\theta$. This information can be extracted from the co-occurrence matrix that measures second-order image statistics [17,24], where the pixels are considered in pairs. The co-occurrence matrix is a function of two parameters: the relative distance measured in pixel numbers ($d$) and the relative orientation $\theta$. The orientation $\theta$ is quantized in four directions that represent horizontal, diagonal, vertical, and anti-diagonal, i.e., 0°, 45°, 90°, and 135° respectively. The non-normalized frequencies of the co-occurrence matrix as functions of the distance $d$ and the angles 0°, 45°, 90°, and 135° can be represented respectively as

$P_{0^{\circ},d}(I_1, I_2) = \#\{((k,l),(m,n)) : k - m = 0,\ |l - n| = d,\ f(k,l) = I_1,\ f(m,n) = I_2\}$

$P_{45^{\circ},d}(I_1, I_2) = \#\{((k,l),(m,n)) : (k - m = d,\ l - n = -d)\ \text{or}\ (k - m = -d,\ l - n = d),\ f(k,l) = I_1,\ f(m,n) = I_2\}$

$P_{90^{\circ},d}(I_1, I_2) = \#\{((k,l),(m,n)) : |k - m| = d,\ l - n = 0,\ f(k,l) = I_1,\ f(m,n) = I_2\}$

$P_{135^{\circ},d}(I_1, I_2) = \#\{((k,l),(m,n)) : (k - m = d,\ l - n = d)\ \text{or}\ (k - m = -d,\ l - n = -d),\ f(k,l) = I_1,\ f(m,n) = I_2\}$

where $\#\{\cdot\}$ refers to the cardinality of a set, $f(k, l)$ is the intensity at pixel position $(k, l)$ in an image of order $M \times N$, and the co-occurrence matrix is of order $N_g \times N_g$. Using the co-occurrence matrix, features can be defined which quantify coarseness, smoothness, and texture-related information that have high discriminatory power. Among them [17], the Angular Second Moment (ASM), Contrast, Correlation, Homogeneity, and Entropy are a few such measures, given by:

$\text{ASM} = \sum_{I_1} \sum_{I_2} P^{2}(I_1, I_2)$

$\text{Contrast} = \sum_{I_1} \sum_{I_2} (I_1 - I_2)^{2}\, P(I_1, I_2)$

$\text{Correlation} = \frac{\sum_{I_1} \sum_{I_2} (I_1 - \mu_1)(I_2 - \mu_2)\, P(I_1, I_2)}{\sigma_1 \sigma_2}$

$\text{Homogeneity} = \sum_{I_1} \sum_{I_2} \frac{P(I_1, I_2)}{1 + |I_1 - I_2|}$

$\text{Entropy} = -\sum_{I_1} \sum_{I_2} P(I_1, I_2)\, \log P(I_1, I_2)$

where $P(I_1, I_2)$ is the normalized co-occurrence matrix, and $\mu_1, \mu_2$ and $\sigma_1, \sigma_2$ are the means and standard deviations of its row and column sums. ASM is a feature that measures the smoothness of the image: the less smooth the region is, the more uniformly distributed $P(I_1, I_2)$ is, and the lower the value of ASM. Contrast is a measure of local gray-level variations, which takes high values for images of high contrast. Correlation is a measure of correlation between pixels in two different directions. Homogeneity is a measure that takes high values for low-contrast images. Entropy is a measure of randomness and takes low values for smooth images. Together, all these features provide high discriminative power to distinguish two different kinds of images. All features are functions of the distance $d$ and the orientation $\theta$. Thus, if an image is rotated, the values of the features will be different. In practice, for each $d$ the resulting values for the four directions are averaged out. This generates features that are rotation-invariant.
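The sketch below computes the five second-order features for a single distance $d = 1$ over the four orientations, returning the mean and range across directions as used for the 14-feature FSStat vector. It is an assumption that scikit-image is an acceptable stand-in here (the original work used MATLAB); entropy is computed by hand because scikit-image's graycoprops does not expose it.

```python
# GLCM (co-occurrence) features: ASM, contrast, correlation, homogeneity, entropy.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # placeholder slice
angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]         # 0, 45, 90, 135 degrees
glcm = graycomatrix(img, distances=[1], angles=angles,
                    levels=256, symmetric=False, normed=True)

feats = {}
for prop in ("ASM", "contrast", "correlation", "homogeneity"):
    vals = graycoprops(glcm, prop)[0]                      # one value per angle
    feats[prop] = (vals.mean(), vals.max() - vals.min())   # mean and range

p = glcm[:, :, 0, :]                                       # P(I1, I2) per angle
entropy = -np.sum(p * np.log(p + 1e-12), axis=(0, 1))      # guard against log(0)
feats["entropy"] = (entropy.mean(), entropy.max() - entropy.min())
print(feats)  # 5 measures x (mean, range) = 10 second-order values
```

Together with the four first-order statistics, these 10 values give the 14-dimensional FSStat feature vector described in the next section.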
Experimental Setup and Results
In this section, we investigate different combinations of feature extraction methods and classifiers for the classification of two different types of MRI images, i.e., normal images and Alzheimer images. The feature extraction methods under investigation are: features based on First and Second Order Statistics (FSStat), features using Daubechies-4 (Db4) as described by Chaplot et al. [15], and Haar in combination with PCA (HaarPCA) as described by Dahshan et al. [16]. We explore the classifiers used by Chaplot et al. [15] (SVM with linear (SVM-L), polynomial (SVM-P), and radial (SVM-R) kernels), by Dahshan et al. [16] (K-nearest neighbor (KNN) and the Levenberg-Marquardt Neural Classifier (LMNC)), and C4.5. The polynomial kernel of SVM is used with degrees 2, 3, 4, and 5, and the best results obtained in terms of accuracy are reported. Similarly, the radial kernel (SVM-R) is used with parameters $10^{i}$, where $i = 0, \ldots, 6$, and only the results corresponding to the highest accuracy are reported. Descriptions of LMNC and the remaining classifiers can be found in [25] and [26] respectively. Textural features of an image are represented in terms of four first-order statistics (Mean, Variance, Skewness, Kurtosis) and five second-order statistics (Angular Second Moment, Contrast, Correlation, Homogeneity, Entropy). Since the second-order statistics are functions of the distance $d$ and the orientation $\theta$, for each second-order measure the mean and range of the resulting values over the four directions are calculated. Thus, the number of features extracted using first and second order statistics is 14. To evaluate the performance, we have considered medical images from the Harvard Medical School website [27]. All normal and disease (Alzheimer) MRI images are axial and T2-weighted, of size 256 × 256. For our study, we have considered a total of 60 trans-axial image slices (30 belonging to normal brains and 30 belonging to brains suffering from Alzheimer's disease). The research works [7-10] have found that the rate of volume loss over a certain period of time within the medial temporal lobe is a potential diagnostic marker of Alzheimer's disease. Moreover, lateral ventricles are on average larger in patients with Alzheimer's disease. Hence, only those axial sections of the brain in which the lateral ventricles are clearly seen were included in our experimental dataset. As the temporal lobe and the lateral ventricles are closely spaced, our axial samples thus sufficiently cover the hippocampus and temporal lobe area, which can be good markers to distinguish the two types of images. Figure 2 shows the difference in the lateral ventricle region between a normal and an abnormal (Alzheimer) image. In the literature, various performance measures have been suggested to evaluate learning models. Among them, the most popular are: 1) Sensitivity, 2) Specificity, and 3) Accuracy. Sensitivity (true positive fraction, or recall) is the proportion of actual positives which are predicted positive. Mathematically, Sensitivity can be defined as

$\text{Sensitivity} = \frac{TP}{TP + FN}$

Specificity is the proportion of actual negatives which are predicted negative. It can be defined as

$\text{Specificity} = \frac{TN}{TN + FP}$

Accuracy is the probability of correctly identifying individuals, i.e., it is the proportion of true results, either true positive or true negative. It is computed as

$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$

where TP: correctly classified positive cases, TN: correctly classified negative cases, FP: incorrectly classified negative cases, and FN: incorrectly classified positive cases. In general, sensitivity indicates how well a model identifies positive cases and specificity measures how well it identifies negative cases, whereas accuracy is expected to measure how well it identifies both categories. Thus, if both sensitivity and specificity are high (low), accuracy will be high (low). However, if one of the measures is high and the other is low, then accuracy will be biased towards one of them. Hence, accuracy alone cannot be a good performance measure.
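A minimal sketch of the three measures just defined, computed from raw confusion-matrix counts (the example counts are illustrative, not the paper's reported values):

```python
# Sensitivity, specificity, and accuracy from confusion-matrix counts.
def performance(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# e.g. 22 of 24 Alzheimer test slices and 21 of 24 normal slices correct
print(performance(tp=22, tn=21, fp=3, fn=2))
```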
It is observed that both Chaplot et al. [15] and Dahshan et al. [16] used highly imbalanced data, for which the classification accuracy is strongly biased towards one class. Hence, we have constructed a balanced dataset (samples of both classes in the same proportion) so that classification accuracy is not biased. The two other performance measures used are the training and testing time of the learning model. The dataset was arbitrarily divided into a training set consisting of 12 samples and a test set of 48 samples. The experiment was performed 100 times for each setting, and the average sensitivity, specificity, accuracy, training time, and testing time are reported in Table 1. The best results achieved by each classifier for each performance measure are shown in bold. All experiments were carried out on a Pentium 4 machine with 1.5 GB RAM and a processor speed of 1.5 GHz. The programs were developed in MATLAB Version 7 using a combination of the Image Processing Toolbox, the Wavelet Toolbox, and PRTools [28], and run under the Windows XP environment. We can observe the following from Table 1:
1) The classification accuracy with FSStat is significantly higher in comparison to both Db4 [15] and HaarPCA [16] for all classifiers.
2) A similar variation is observed for the sensitivity measure.
3) For specificity, FSStat provides better results in comparison to both Db4 and HaarPCA, except for the classifiers SVM-P and LMNC.
4) The difference between sensitivity and specificity is large for both Db4 and HaarPCA in comparison to FSStat. The accuracy obtained using both Db4 and HaarPCA is high even though the sensitivity is low and the specificity is high, which suggests that the classification accuracy obtained is biased.
5) The variation in classification accuracy across different classifiers is not significant with FSStat in comparison with both Db4 and HaarPCA.
6) The training time with FSStat is significantly less than with both Db4 and HaarPCA. This is because the number of features obtained with FSStat is smaller and no computation-intensive transformation, like the PCA in HaarPCA, is involved.
7) The testing time of an image is not significant in comparison to the training time. However, the testing time of an image is least with FSStat in comparison to both Db4 and HaarPCA.
From the above, it can be observed that the performance of the decision system using FSStat is significantly better in terms of all measures considered in our experiment.
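For concreteness, the evaluation protocol (100 random 12/48 train/test splits with averaged scores) can be sketched as follows. The classifier and feature matrix here are placeholders; scikit-learn's KNN is an assumed stand-in, since the original experiments used MATLAB and PRTools.

```python
# Repeated random-split evaluation: 100 runs of 12 train / 48 test samples.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X = np.random.rand(60, 14)          # 60 slices x 14 FSStat features (placeholder)
y = np.array([0] * 30 + [1] * 30)   # 30 normal, 30 Alzheimer

accs = []
for seed in range(100):
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, train_size=12, stratify=y, random_state=seed)
    clf = KNeighborsClassifier(n_neighbors=1).fit(Xtr, ytr)
    accs.append(clf.score(Xte, yte))
print("mean accuracy over 100 runs:", np.mean(accs))
```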
Conclusions and Future Work
In this paper, we investigated features based on First and Second Order Statistics (FSStat), which give a far smaller number of distinguishable features in comparison to features extracted using the DWT for the classification of MRI images. Since the classification accuracy of a pattern recognition system depends not only on the feature extraction method but also on the choice of classifier, we investigated the performance of FSStat based features in comparison to wavelet-based features with commonly used classifiers for the classification of MRI brain images. The performance was evaluated in terms of sensitivity, specificity, classification accuracy, and training and testing time. For all classifiers, the classification accuracy and sensitivity with textural features are significantly higher in comparison to both wavelet-based feature extraction techniques suggested in the literature. Moreover, it was found that FSStat features are not biased towards either sensitivity or specificity. Their training and testing times are also significantly less than those of the other feature extraction techniques suggested in the literature. This is because First and Second Order Statistics give a far smaller number of relevant and distinguishable features and do not involve a computation-intensive transformation, in comparison to the methods proposed in the literature. In future, the performance of our proposed approach can be evaluated on MRI images of other diseases to assess its efficacy. We can also explore feature extraction/construction techniques which provide invariant and minimal numbers of relevant features to distinguish two or more different kinds of MRI.

Figure 1. Pyramidal structure of DWT up to level 3.
Figure 2. Difference in the lateral ventricle region between a normal and an abnormal (Alzheimer) image.
Table 1. Comparison of performance measure values for each combination of feature extraction technique and classifier. Due to the huge dimension of the Db4 feature vector, LMNC could not be executed. Clsf, Fe, Sn, Sp, Acc, Trn, and Tst denote Classifier, Feature extraction technique, Sensitivity, Specificity, Accuracy, Training time, and Testing time respectively.
Platelet-Derived Extracellular Vesicles Stimulate Migration through Partial Remodelling of the Ca2+ Handling Machinery in MDA-MB-231 Breast Cancer Cells
Background: Platelets can support cancer progression via the release of microparticles and microvesicles that enhance the migratory behaviour of recipient cancer cells. We recently showed that platelet-derived extracellular vesicles (PEVs) stimulate migration and invasiveness in highly metastatic MDA-MB-231 cells by stimulating the phosphorylation of p38 MAPK and the myosin light chain 2 (MLC2). Herein, we assessed whether the pro-migratory effect of PEVs involves the remodelling of the Ca2+ handling machinery, which drives MDA-MB-231 cell motility. Methods: PEVs were isolated from human blood platelets, and Fura-2/AM Ca2+ imaging, RT-qPCR, and immunoblotting were exploited to assess their effect on intracellular Ca2+ dynamics and Ca2+-dependent migratory processes in MDA-MB-231 cells. Results: Pretreating MDA-MB-231 cells with PEVs for 24 h caused an increase in Ca2+ release from the endoplasmic reticulum (ER) due to the up-regulation of SERCA2B and InsP3R1/InsP3R2 mRNAs and proteins. The consequent enhancement of ER Ca2+ depletion led to a significant increase in store-operated Ca2+ entry. The larger Ca2+ mobilization from the ER was required to potentiate serum-induced migration by recruiting p38 MAPK and MLC2. Conclusions: PEVs stimulate migration in the highly metastatic MDA-MB-231 breast cancer cell line by inducing a partial remodelling of the Ca2+ handling machinery.

Introduction
Breast cancer represents the most widespread cancer in women and accounted for about 685,000 deaths in 2020. This death toll has been forecasted to rise to 7 million by 2040 [1]. Triple negative breast cancer (TNBC) is featured by the absence of the human epidermal growth factor receptor 2 (HER2), estrogen receptor, and progesterone receptor. Therefore, TNBC is barely sensitive to anti-HER2 and hormonal therapies, with poor survival rates [2]. Accordingly, TNBC presents high aggressiveness and short median time to relapse due to its ability to leave the primary tumour site and spread to distant organs, thereby causing patient death [3]. Understanding the cellular and molecular mechanisms that stimulate TNBC cells to migrate and colonize their new niches is indispensable when it comes to designing alternative treatments for TNBC patients. Platelet-derived extracellular vesicles (PEVs) are gaining growing interest as mediators of platelet function in different physiological and pathological contexts, from haemostasis to cardiovascular diseases [4,5]. In the last two decades, PEVs have also been recognized as critical players in the complex interplay occurring between blood platelets and cancer [6]. Exosomes and plasma membrane-derived vesicles are the two main classes of PEVs released by platelets into the bloodstream [4]. Platelet exosomes (also known as small PEVs) are stored in intracellular multivesicular bodies and released during cell secretion. Conversely, membrane-derived vesicles derive from the plasma membrane and are now typically defined as medium-large PEVs, but they were previously referred to as platelet-derived microparticles (PMPs) and platelet-derived microvesicles (PMVs) [7]. Medium-large PEVs (henceforth PEVs for brevity) have been widely studied in the frame of the platelet-cancer crosstalk.
Their levels in the circulation are increased in oncological patients bearing diverse types of cancer, including cutaneous malignant melanoma [8], colorectal carcinoma [9], lung cancer [10], and breast cancer [11]. Moreover, PEVs were found to mediate intercellular communication by delivering bioactive compounds, thus initiating phenotypic and functional changes in recipient cells. These observations boosted research on the role played by these PEVs in cancer, aiming to unravel their contribution to the progression of the disease but also to determine their potential use as early diagnostic markers and novel drug delivery tools [12,13]. PEVs are now known to directly regulate cancer cells as well as the cellular components of the tumour-microenvironment (TME). In this context, most studies have hinted at PEVs as crucial drivers of cancer progression, although a few investigations have discussed their anti-cancer functions [6,14,15]. We showed that thrombin-induced PEVs are internalized by different breast cancer cell lines, thereby eliciting cell-specific responses [16]. In particular, the TNBC MDA-MB-231 cell line efficiently internalizes PEVs, and this event potentiates cell migration and invasiveness. In agreement with these observations, long-term (up to 24 h) exposure to PEVs stimulates specific signalling pathways, such as p38 mitogen-activated protein kinase (p38 MAPK), myosin light chain-2 (MLC2), and Rho-associated protein kinase (ROCK) [16], that drive cancer cell motility and spreading [17][18][19][20]. Nevertheless, the molecular mechanism(s) whereby a prolonged exposure to PEVs promote(s) migration in MDA-MB-231 cells is still unclear. The remodelling of the Ca 2+ handling machinery supports several cancer hallmarks, including tissue invasion and metastasis [21][22][23][24][25]. An increase in intracellular Ca 2+ concentration ([Ca 2+ ] i ) in MDA-MB-231 cells can be elicited by inositol-1,4,5-trisphosphate (InsP 3 ) [26], which gates three subtypes of InsP 3 receptors (InsP 3 R1, InsP 3 R2, and InsP 3 R3) to release Ca 2+ from the endoplasmic reticulum (ER) [22]. InsP 3 -induced Ca 2+ mobilization, in turn, causes a strong reduction in ER Ca 2+ concentration, which leads to the activation of a Ca 2+ entry pathway on the plasma membrane, known as store-operated Ca 2+ entry (SOCE) [27][28][29]. In MDA-MB-231 cells, SOCE is mediated by the physical association between Stromal Interaction Molecule 1 (STIM1), which functions as a sensor of ER Ca 2+ concentration and is activated by a drop in intraluminal Ca 2+ levels, and Orai1, which forms the Ca 2+ -permeable channel on the plasma membrane [24,27,30,31]. The interaction between InsP 3 -induced ER Ca 2+ mobilization and SOCE finely shapes the intracellular Ca 2+ signals that increase the migration capacity of this highly invasive breast cancer cell line [24,26,31,32]. An increase in the expression of genes encoding for several InsP 3 R isoforms can stimulate multiple cancer hallmarks, including proliferation, migration, invasion, and apoptosis resistance [33,34]. Similarly, SOCE up-regulation, because of the over-expression of STIM and/or Orai1 proteins, can result in the activation of many pro-oncogenic signalling pathways in neoplastic cells [22,29,35]. Furthermore, the overexpression of several members of the Transient Receptor Potential (TRP) superfamily of non-selective cation channels can also support neoplastic transformation in virtually all cancer cell types [22,36].
Intriguingly, the signalling pathways recruited downstream of PEV stimulation, e.g., p38 MAPK and MLC2, can be activated following an elevation in [Ca 2+ ] i [16,18]. Herein, we thus sought to assess whether long-term exposure to PEVs stimulates MDA-MB-231 cell migration through the remodelling of the Ca 2+ handling machinery. By using a variety of approaches, we showed that PEVs cause a remarkable elevation in InsP 3 -induced ER Ca 2+ release by increasing Sarco-Endoplasmic Reticulum Ca 2+ -ATPase 2B (SERCA2B) and InsP 3 R1/InsP 3 R2 transcript and protein expression. The larger depletion in ER Ca 2+ content, in turn, leads to enhanced SOCE activation. In agreement with this observation, serum elicited larger intracellular Ca 2+ signals and potentiated migration in MDA-MB-231 cells exposed to PEVs through the Ca 2+ -dependent recruitment of p38 MAPK and MLC2. Our evidence indicates that InsP 3 Rs rather than SOCE are required to drive MDA-MB-231 cell migration. These data shed novel light on the signalling pathways whereby PEVs can stimulate the metastatic spreading of TNBC cells and hint at the Ca 2+ toolkit as a promising target to prevent the detrimental interaction between PEVs and these highly aggressive breast cancer cells. Cancer Cell Culture The triple negative breast cancer line, MDA-MB-231, was provided by Professor Livia Visai (Department of Molecular Medicine, University of Pavia). Cancer cells were periodically checked to verify the absence of bacterial contamination and were maintained in DMEM supplemented with 10% foetal bovine serum (FBS), 2 mM L-glutamine, 100 U/mL penicillin, and 100 µg/mL streptomycin, split every two days, and used for the experiments within 10 passages. The number of viable cells was determined by Trypan Blue staining and phase contrast microscopy analysis. PEV Isolation Human blood platelets were purified from healthy donors as recently described [16]. Upon the washing procedure, platelets were resuspended at a concentration of 3 × 10 8 /mL in HEPES buffer (10 mM HEPES, 137 mM NaCl, 2.9 mM KCl, and 12 mM NaHCO 3 , pH 7.4) supplemented with 1 mM CaCl 2 , 0.5 mM MgCl 2 and 5.5 mM glucose. To induce the release of PEVs, platelets were stimulated with the physiological agonist thrombin (0.2 U/mL) for 30 min at 37 °C under constant stirring. Platelets were pelleted by low-speed centrifugation (750× g, 20 min) and the supernatant was then centrifuged at 18,500× g for 90 min at 10 °C to collect medium-large PEVs, which were finally resuspended in HEPES buffer. The protein content of the different preparations of PEVs was determined by BCA assay. [Ca 2+ ] i Measurements As described elsewhere, Ca 2+ imaging was used to measure intracellular Ca 2+ signals in PEV-treated cancer cells [16,37]. MDA-MB-231 cells were loaded with 4 µM Fura-2/AM (1 mM stock in DMSO) in physiological salt solution (PSS) (150 mM NaCl, 6 mM KCl, 1.5 mM CaCl 2 , 1 mM MgCl 2 , 10 mM Glucose, 10 mM HEPES, pH 7.4) for 30 min at 37 °C and 5% CO 2 . After washing in PSS, the coverslip was fixed to the bottom of a Petri dish and the cells were either left untreated or treated with 30 µg/mL thrombin-induced PEVs. The cells were observed by an upright epifluorescence Axiolab microscope (Carl Zeiss, Oberkochen, Germany) equipped with a Zeiss ×40 Achroplan objective (water-immersion, 2.0 mm working distance, 0.9 numerical aperture). The cells were excited alternately at 340 and 380 nm, and the emitted light was detected at 510 nm.
Custom software, working in the LINUX environment, was used to drive the camera (Extended-ISIS Camera, Photonic Science, Millham, UK) and the filter wheel, and to measure and plot on-line the fluorescence from rectangular "regions of interest" (ROI) enclosing 20-30 single cells. [Ca 2+ ] i was monitored by measuring, for each ROI, the ratio of the mean fluorescence emitted at 510 nm when exciting alternately at 340 and 380 nm [ratio (F 340 /F 380 )]. An increase in [Ca 2+ ] i causes an increase in the ratio [38,39]. Ratio measurements were performed and plotted on-line every 3 s. The experiments were performed at room temperature (22 °C). The resting Ca 2+ entry was evaluated by exploiting the Mn 2+ -quenching technique. Mn 2+ has been shown to quench Fura-2/AM fluorescence. Since Mn 2+ and Ca 2+ share common entry pathways in the plasma membrane, Fura-2/AM quenching by Mn 2+ is regarded as an index of divalent cation influx [31,40,41]. Experiments were carried out at the 360 nm wavelength, the isosbestic wavelength for Fura-2/AM, and in Ca 2+ -free medium supplemented with 0.5 mM EGTA, as previously described [40,42]. This avoids Ca 2+ competition for Mn 2+ entry and therefore enhances Mn 2+ quenching. The basal [Ca 2+ ] i was evaluated by using the Grynkiewicz equation, as shown elsewhere [35,43]. Cell Migration Assay The effect of PEVs on the migration of MDA-MB-231 cells was evaluated using Falcon cell culture inserts (8 µm pore size) positioned in a 24-well plate, as shown in [16]. Cancer cells were serum starved for 6 h and then resuspended in DMEM with the addition of 0.5% FBS. Cells were either left untreated or treated with PEVs, supplemented or not with 2-Aminoethyl diphenyl borate (2-APB; 50 µM) or YM-58483 (also known as BTP-2; 20 µM), and then transferred inside the inserts. DMEM containing 10% FBS was added to the lower chamber. After 24 h, cells that moved through the porous membrane were stained with 0.5% crystal violet and counted at 10× microscope magnification with an Olympus BX51 microscope (Olympus Corporation, Tokyo, Japan). Real-Time Quantitative Reverse Transcription PCR (qRT-PCR) The total RNA was isolated from PEVs as well as PEV-treated and untreated MDA-MB-231 cells using Trizol reagent (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions. For RNA extraction from PEVs, 20 µg of RNA grade glycogen (Thermo Fisher Scientific) was added in the precipitation step [44]. After DNAse treatment (Turbo DNA-free™ kit, Thermo Fisher Scientific), RNA was quantified with a BioPhotometer D30 (Eppendorf, Hamburg, Germany). cDNA was synthesised from 200 ng of MDA-MB-231 RNA and 100 ng of PEV RNA using the iScript cDNA Synthesis Kit (BioRad, Hercules, CA, USA). Gene expression analyses were performed in triplicate with specific primers (Table S1) and the SsoFast™ EvaGreen ® Supermix (BioRad) on a CFX Connect Real-Time System (BioRad) with the following program: an initial step at 95 °C for 30 s, 40 cycles of 5 s at 95 °C, and 5 s at 58 °C. Fluorescence measurements were taken at the end of each elongation step. The PCR mixture consisted of 10 µL SsoFast™ EvaGreen ® Supermix, 7 µL nuclease-free water, 1 µL cDNA, and 1 µL of each forward and reverse primer. Melting curve analysis was performed to ensure that single amplicons were obtained for each target gene after each qRT-PCR and primer efficiencies were determined using at least five different cDNA concentrations.
Primers were designed on an exon-intron junction using the NCBI Primer tool [45]. Gene expression was evaluated using the ∆∆Ct method [46,47]. The genes actin beta and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were used as endogenous references for normalizing target mRNA. Data are presented as means ± standard error (SE) from three biological replicates. Results were analysed using the GraphPad Prism 5 software (version 5.01; GraphPad Software, San Diego, CA, USA) and the significance was determined using the unpaired Student's t-test. p-values < 0.05 were considered significant. Statistical Analysis All the reported figures are representative of three different experiments. As to the Ca 2+ data, the amplitude of Ca 2+ release in response to extracellular stimulation was measured as the difference between the ratio at the peak of intracellular Ca 2+ mobilization and the mean ratio of 1 min baseline before the peak. The magnitude of SOCE evoked by extracellular stimulation upon Ca 2+ restoration to the bath was measured as the difference between the ratio at the peak of extracellular Ca 2+ entry and the mean ratio of 1 min baseline before Ca 2+ re-addition. The rate of Mn 2+ influx was evaluated by measuring the slope of the fluorescence intensity curve at 400 s after Mn 2+ addition [40,42]. Pooled data are given as mean ± SEM, while the number of cells analysed is indicated above the corresponding histogram bars (number of responding cells/total number of analysed cells). For immunoblotting and cell migration analyses, all reported figures are representative of at least three different experiments and the quantitative data are reported as mean ± SD. Data from immunoblotting scanning were normalised considering the intensity of the band of interest in the sample not incubated with PEVs as one arbitrary unit (A.U.). For cell migration analyses, data are expressed as number of migrated cells per field at 10× microscope magnification. Comparisons between two groups were done using the Student's t-test, whereas multiple comparisons were performed using One-Way Analysis of Variance (ANOVA) with the Bonferroni post-hoc test. p-values less than 0.05 were considered statistically significant. Data were analysed using the GraphPad Prism (Version 5.01) software. Long-Term Exposure to PEVs Increases ER Ca 2+ Mobilization and SOCE Activation in MDA-MB-231 Cells The dose-response relationship shown in a previous study from our group demonstrated that 30 µg/mL represents the PEV dose that is most efficiently internalized and most effective at inducing migration in MDA-MB-231 cells [16,49]. The long-term effect of PEVs on intracellular Ca 2+ dynamics was therefore evaluated in MDA-MB-231 cells maintained or not (control, Ctrl) in the presence of 30 µg/mL PEVs for 24 h and loaded with the Ca 2+ -sensitive fluorophore, Fura-2/AM. Preliminary recordings revealed that PEVs did not affect either the basal [Ca 2+ ] i ( Figure S1A) or the modest constitutive Ca 2+ entry that has previously been reported in MDA-MB-231 cells ( Figure S1B,C) [50]. Next, we exploited the Ca 2+ add-back protocol to evaluate whether long-term exposure to PEVs influences ER Ca 2+ mobilization and SOCE activation. As described elsewhere [27,28,30,37,50,51], this manoeuvre consists in stimulating the cells with cyclopiazonic acid (CPA), a selective inhibitor of SERCA, in the absence of extracellular Ca 2+ (0Ca 2+ ) to cause passive ER Ca 2+ efflux through Ca 2+ -permeable leakage channels.
Subsequently, 1.8 mM Ca 2+ is restituted to the extracellular solution to monitor SOCE through Orai1 channels that have previously been activated by STIM1 upon the depletion of the ER Ca 2+ store. Figure 1A,B show that CPA (10 µM) caused a transient increase in [Ca 2+ ] i under 0Ca 2+ conditions, which reflects ER Ca 2+ releasing ability in the absence (Ctrl) and presence of PEVs. PEVs caused a significant (p < 0.05) elevation in CPA-evoked ER Ca 2+ mobilization ( Figure 1C). The following restitution of extracellular Ca 2+ induced a second increase in [Ca 2+ ] i ( Figure 1A,B), which was entirely due to SOCE through Orai1 channels [27,28,30,50]. Statistical analysis revealed that long-term exposure to PEVs also increased SOCE in MDA-MB-231 cells ( Figure 1D). Altogether, these findings indicate that PEVs induce a remarkable remodelling of the Ca 2+ handling machinery in highly aggressive breast cancer MDA-MB-231 cells by enhancing both ER Ca 2+ release and SOCE. Molecular Characterization of the Ca 2+ Handling Machinery in MDA-MB-231 Cells Exposed to PEVs In order to gain further insights into the molecular mechanisms whereby PEVs increase InsP 3 -induced ER Ca 2+ mobilization and SOCE activity, we carried out a thorough RT-qPCR analysis of mRNAs isolated from MDA-MB-231 cells exposed or not (Ctrl) to PEVs (30 µg/mL; 24 h). Figure 3 shows a significant increase in the expression levels of the transcripts encoding for SERCA2B ( Figure 3A), i.e., the major SERCA isoform in MDA-MB-231 cells [59], and for InsP 3 R1 ( Figure 3B) and InsP 3 R2 ( Figure 3C), which mediate ER Ca 2+ release [26]. Conversely, there was no change in the expression of the transcripts encoding for InsP 3 R3 ( Figure 3D) and for all the molecular components of the SOCE machinery in MDA-MB-231 cells, i.e., STIM1 ( Figure 3E) and Orai1 ( Figure 3F) [24,27,30,31]. Notably, RT-qPCR analysis of PEV transcripts did not reveal detectable levels of any of these mRNAs (data not shown), thereby showing that SERCA2B, InsP 3 R1, and InsP 3 R2 transcripts are not directly transferred to MDA-MB-231 cells from PEVs (mean Ct values ± SD of 37.5 ± 1, 38.5 ± 1, 38.2 ± 1.1, 34.9 ± 1.4, 34.8 ± 1, 37.3 ± 1.4, and 36.6 ± 1.3 for SERCA2B, InsP 3 R1, InsP 3 R3, Orai1, SERCA3, InsP 3 R2, and STIM1, respectively).
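To make the relative quantification concrete, the following is a minimal Python sketch of the 2^-∆∆Ct calculation described in the qRT-PCR methods above. The Ct values and the use of a single reference gene are illustrative assumptions only; the study normalized against both actin beta and GAPDH, and the numbers below are not the study's data.

```python
import numpy as np

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method: normalize the target
    gene to a reference gene, then to the untreated control."""
    dct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    dct_ctrl = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    return 2.0 ** -(dct_treated - dct_ctrl)

# Hypothetical triplicate Ct values (target vs. reference gene),
# PEV-treated vs. untreated cells; a fold change > 1 indicates
# up-regulation upon PEV treatment.
fold = fold_change_ddct([24.1, 24.3, 24.0], [18.0, 17.9, 18.1],
                        [25.3, 25.1, 25.4], [18.1, 18.0, 17.9])
print(f"fold change vs. control: {fold:.2f}")
```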
Next, western blot analysis of SERCA2B, InsP 3 Rs, STIM1, and Orai1 protein expression was performed by exploiting affinity-antibodies, as illustrated in [37,40,53]. Immunoblots revealed a major band of ≈115 kDa for SERCA2B ( Figure 4A, left panel) and a large band over 250 kDa, deriving from the sum of the 313/260/250 kDa bands corresponding to InsP 3 R1, InsP 3 R2, and InsP 3 R3 ( Figure 4B, left panel), as previously shown in [53]. Furthermore, two major bands of ≈86 and ≈35 kDa were detected for, respectively, STIM1 ( Figure 4C, left panel) and Orai1 ( Figure 4D, left panel). Densitometric analysis confirmed that SERCA2B ( Figure 4A, right panel) and InsP 3 R ( Figure 4B, right panel) proteins were significantly up-regulated in MDA-MB-231 cells exposed to PEVs (30 µg/mL; 24 h). Conversely, and in agreement with the qRT-PCR data, there was no significant difference in the expression level of STIM1 ( Figure 4C, right panel) and Orai1 proteins ( Figure 4D, right panel). Overall, these findings strongly suggest that the increase in the amount of ER Ca 2+ that is releasable through InsP 3 Rs is due to the overexpression of SERCA2B and InsP 3 R proteins. Since there is no difference in the expression levels of its underlying constituents, i.e., STIM1 and Orai1, the increase in SOCE is likely to reflect the larger drop in ER Ca 2+ concentration following agonist stimulation of MDA-MB-231 cells pre-exposed to PEVs [60][61][62]. Serum-Induced Intracellular Ca 2+ Signals Are Larger in MDA-MB-231 Cells Exposed to PEVs It has long been known that FBS stimulates the migration in cancer cells through an increase in [Ca 2+ ] i [26,30,31,52,53]. FBS consists of a mixture of growth factors that bind to specific tyrosine kinase receptors (TKRs) that are coupled to PLCγ and stimulate InsP 3 production and InsP 3 -dependent Ca 2+ signalling. A recent report from our group showed that 24 h treatment with PEVs potentiated serum-evoked migration in MDA-MB-231 cells [16]. Therefore, we reasoned that the increase in the chemotactic response to FBS could be associated with an elevation in the underlying increase in [Ca 2+ ] i . The Ca 2+ add-back protocol confirmed that, upon long-term exposure to PEVs (30 µg/mL; 24 h), both phases of the serum-evoked intracellular Ca 2+ signal, i.e., ER Ca 2+ mobilization and SOCE, were significantly larger than in untreated cells. In agreement with previous reports on other cancer cell types [40,52], serum-evoked SOCE ( Figure 6A) was dampened by blocking Orai1 channels with either Pyr6 (10 µM) ( Figure 6B,D) or BTP-2 (20 µM) ( Figure 6C,D) in MDA-MB-231 cells. Taken together, these findings show that long-term exposure to PEVs leads to an increase in InsP 3 -induced ER Ca 2+ mobilization and SOCE that could potentiate the pro-migratory Ca 2+ response to serum stimulation.
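As a sketch of how such response amplitudes are extracted from the recordings, the snippet below applies the conventions given in the statistical analysis section (peak ratio minus the mean of the 1 min, 20-sample baseline, with one ratio point acquired every 3 s) and the Grynkiewicz conversion of the basal ratio into [Ca 2+ ] i . The calibration constants are placeholder assumptions for Fura-2, not values taken from the study.

```python
import numpy as np

SAMPLES_PER_MIN = 20  # F340/F380 ratio points acquired every 3 s

def amplitude(ratio_trace, stim_idx):
    """Response amplitude: peak F340/F380 after the stimulus minus the
    mean ratio of the 1 min baseline preceding the stimulus."""
    baseline = np.mean(ratio_trace[stim_idx - SAMPLES_PER_MIN:stim_idx])
    return np.max(ratio_trace[stim_idx:]) - baseline

def grynkiewicz(R, Kd=225.0, Rmin=0.3, Rmax=5.0, beta=8.0):
    """Grynkiewicz equation: [Ca2+]i = Kd * beta * (R - Rmin) / (Rmax - R).
    Kd (nM), Rmin, Rmax and beta (= Sf2/Sb2) are placeholder calibration
    constants, assumed here for illustration only."""
    return Kd * beta * (R - Rmin) / (Rmax - R)
```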
PEVs Potentiate Serum-Dependent Migration through the Ca 2+ -Dependent Recruitment of p38 MAPK and MLC2 in MDA-MB-231 Cells To assess whether the partial remodelling of the Ca 2+ handling machinery potentiates serum-induced migration upon pretreatment with PEVs, we evaluated MDA-MB-231 motility in the absence (Ctrl) and presence of specific InsP 3 R and SOCE inhibitors. The pharmacological blockade of InsP 3 Rs with 2-APB (50 µM) significantly inhibited serum-induced migration in MDA-MB-231 cells pre-treated with PEVs (30 µg/mL, 24 h), whereas this process was not affected by SOCE inhibition with BTP-2 (20 µM) ( Figure 7A,B). Next, we assessed whether a serum-evoked increase in [Ca 2+ ] i is required to boost p38 MAPK and MLC2 activation in the presence of PEVs (30 µg/mL, 24 h). We confirmed that PEVs potentiated p38 MAPK and MLC2 phosphorylation ( Figure 7C-F), while they did not hyper-activate the Ca 2+ -sensitive Pyk2 ( Figure S4). We then examined the phosphorylation of p38 MAPK and MLC2 in the absence (Ctrl) and presence of 2-APB and BTP-2. Unlike migration, the PEV-dependent increase in p38 MAPK and MLC2 phosphorylation was sensitive both to InsP 3 R inhibition with 2-APB (50 µM) and to SOCE blockade with BTP-2 (20 µM) ( Figure 7C-F). These findings demonstrate that InsP 3 Rs, rather than SOCE, support PEV-dependent migration in MDA-MB-231 cells, although they can both induce p38 MAPK and MLC2 phosphorylation. Discussion Circulating platelets can contribute to cancer progression by releasing PEVs that facilitate (by shrouding disseminated tumour cells from recognition by NK cells) or stimulate cancer cell spreading from the primary site [6,13]. Multiple pieces of evidence support the notion that PEVs promote cancer cell invasiveness by mediating the transfer of platelet-derived instructive cues, consisting of either signalling proteins or genetic material, to the target cells [6,13,64]. The relationship between breast cancer and platelets is complex and is yet to be fully unravelled. Nevertheless, PEVs have also been shown to directly engage signal transduction pathways, such as Src, focal adhesion kinases (FAKs), p38 MAPK, and MLC2, which stimulate migration in the TNBC MDA-MB-231 cell line [16,65].
In accord, MDA-MB-231 cells can induce platelet activation and aggregation [49,66], thereby leading to the release of robust amounts of PEVs that, in turn, increase cancer cell migration and invasion [16]. Intriguingly, an increase in [Ca 2+ ] i may occur upstream of p38 MAPK and MLC2 recruitment [16,18] and can modulate p38 MAPK and MLC2 phosphorylation [67][68][69]. It has long been known that a complex remodelling of Ca 2+ handling machinery supports several cancer hallmarks, including migration and invasion [21,22,70]. Therefore, we sought to investigate whether the long-term exposure of the highly invasive MDA-MB-231 cells to PEVs potentiates migration by rewiring their Ca 2+ transport system. PEVs Induce Remodelling of the Ca 2+ Handling Machinery in MDA-MB-231 Cells A sustained increase in [Ca 2+ ] i is required to trigger the release of PEVs from the plasma membrane of activated platelets [4]. Preliminary reports suggested that PEVs can, in turn, also influence intracellular Ca 2+ dynamics in target cells. For instance, a recent investigation showed that PEVs elicit intracellular Ca 2+ oscillations in aortic vascular smooth muscle cells, thereby promoting migration and neointimal hyperplasia in a rat model of vascular injury [71]. Furthermore, we documented that 30 µg/mL PEVs induced a robust increase in [Ca 2+ ] i in MDA-MB-231 cells, with this being initiated by InsP 3 -induced ER Ca 2+ release and maintained by SOCE activation [16]. Nevertheless, MDA-MB-231 cells displayed enhanced migration and invasiveness upon 24 h incubation with PEVs [16]. Therefore, the immediate Ca 2+ response to PEVs is unlikely to engage the Ca 2+ -dependent molecular machinery that potentiates migration during the late stages of PEV stimulation. These pieces of evidence prompted us to assess whether long-term exposure to PEVs stimulates the p38 MAPK and MLC2 signalling pathways and potentiates MDA-MB-231 cell migration through remodelling the Ca 2+ handling machinery. PEV-Dependent Increase in InsP 3 -Induced ER Ca 2+ Release Is Associated with SERCA2B and InsP 3 R Up-Regulation Pre-treatment with PEVs did not change either the resting [Ca 2+ ] i or the basal plasma membrane permeability to Ca 2+ , which is mediated by a yet-to-be-identified Ca 2+ -permeable route in MDA-MB-231 cells, while it is mediated by Orai1 channels in low-migrating MCF-7 cells [41]. Since PEVs did not affect resting Ca 2+ influx in MDA-MB-231 cells, we did not further investigate its molecular structure. The Ca 2+ response of breast cancer cells to chemotactic cues is triggered by InsP 3 -induced Ca 2+ release from the ER and sustained over time by SOCE [24,26,32]. The Ca 2+ add-back protocol represents the most common strategy to evaluate both the amount of ER Ca 2+ that can be released through InsP 3 Rs and the extent of SOCE activation in cancer cells, including MDA-MB-231 cells [27,28,30,37,50]. We found that, in the absence of extracellular Ca 2+ , CPA-evoked intracellular Ca 2+ release was significantly increased in MDA-MB-231 cells exposed to PEVs when compared to untreated cells. The amount of Ca 2+ introduced into the cytosol via ER Ca 2+ leakage channels upon SERCA inhibition with CPA or thapsigargin is recognized as a reliable indicator of the releasable ER Ca 2+ pool. In accord, this protocol has been widely exploited to evaluate the differences in ER Ca 2+ content in primary vs. metastatic cancer cells [40] and in cancer cells exposed to different pharmacological or genetic treatments [26,30,54,57,72,73].
Molecular analysis showed that long-term exposure to PEVs caused an increase in the transcript and protein levels of SERCA2B, which represents the most abundant SERCA isoform in MDA-MB-231 cells [59]. Therefore, the increase in SERCA2B expression might be responsible for the increase in the ER Ca 2+ load unmasked by CPA-evoked intracellular Ca 2+ mobilization. Similarly, the increase in SERCA2B expression underlies the larger Ca 2+ releasing ability of colorectal cancer cells lacking the oncogenic K-Ras isoform, K-Ras G13D [54]. The increase in ER Ca 2+ content in PEV-treated MDA-MB-231 cells is associated with an increase in InsP 3 R1 and InsP 3 R2 transcript and protein expression. The Ca 2+ add-back protocol confirmed that both physiological (with ATP) and pharmacological (with InsP 3 -BM) stimulation of InsP 3 Rs resulted in a significant elevation in InsP 3 -dependent ER Ca 2+ mobilization in MDA-MB-231 cells exposed to PEVs. An increase in InsP 3 R1 expression may contribute to multiple oncological processes, including apoptosis resistance in prostate cancer [74], autophagy induction in clear cell renal cell carcinoma [75], and proliferation, invasion, and migration in osteosarcoma [76]. Similarly, InsP 3 R2 regulates migration in non-small cell lung cancer [77], maintains the self-renewal ability of liver cancer stem cells [78], and prevents apoptosis in B-cell lymphoma and chronic lymphocytic leukemia [79]. Interestingly, both InsP 3 R1 and InsP 3 R2 proteins drive migration in MDA-MB-231 cells [26], whereas InsP 3 R1 has also been shown to promote proliferation [80]. Therefore, the combined overexpression of SERCA2B and InsP 3 R1/InsP 3 R2 proteins nicely correlates with the increased ER Ca 2+ content and higher level of InsP 3 -induced ER Ca 2+ mobilization induced by long-term exposure to PEVs and could potentiate migration in MDA-MB-231 cells [16]. PEVs can transfer specific cargo molecules, such as mRNA, DNA, cytokines, and membrane receptors or enzymes, to recipient cells and thereby enhance the invasive behaviour of cancer cells [3,6,13]. Nonetheless, we could not find detectable levels of SERCA2B and InsP 3 R1/InsP 3 R2 transcripts in the mRNA content of PEVs. Therefore, we can conclude that long-term exposure to PEVs stimulates the partial remodelling of the Ca 2+ handling machinery in MDA-MB-231 cells by boosting the expression of genes encoding for SERCA2B, InsP 3 R1, and InsP 3 R2. PEV-Induced SOCE Potentiation Does Not Involve STIM1 and Orai1 Up-Regulation The larger ER Ca 2+ release could also lead to an increase in SOCE activation even though the expression of its underlying molecular components, i.e., STIM1 and Orai1, is not altered by PEVs [60][61][62], as shown in the present investigation. Although some studies have provided evidence against a straightforward relationship between the extent of ER Ca 2+ depletion and SOCE amplitude [81,82], other investigations have clearly shown a roughly linear relationship between the magnitude of InsP 3 -induced ER Ca 2+ release and SOCE activation [60][61][62][83][84]. In agreement with these observations, SOCE amplitude in MDA-MB-231 cells treated with PEVs was always significantly larger than in untreated cells, regardless of the stimulus used to induce ER Ca 2+ release, i.e., CPA, ATP, or InsP 3 -BM. Interestingly, SOCE has also been shown to drive metastasis and invasion in MDA-MB-231 cells [24,30,31].
Therefore, the overall remodelling of intracellular Ca 2+ dynamics, i.e., the enhancement of InsP 3 -induced ER Ca 2+ release and SOCE activation, is predicted to stimulate migration in PEV-treated MDA-MB-231 cells. An increase in [Ca 2+ ] i has long been known to mediate the pro-migratory effect of serum on cancer cells [26,30,31,52,53]. In line with this evidence, early work showed that PLCγ1 is recruited by serum to stimulate motility and adhesion [85] and that genetic silencing of Orai1 prevents serum-induced migration in MDA-MB-231 cells [31]. Herein, we provided the first characterization of the molecular mechanisms whereby serum triggers an increase in [Ca 2+ ] i in this highly migrating breast cancer cell line. Pharmacological manipulation confirmed that serum-evoked intracellular Ca 2+ release in MDA-MB-231 cells was inhibited by interfering with PLC activity with U73122, by inhibiting InsP 3 Rs with 2-APB, and by depleting the ER Ca 2+ pool with CPA. Furthermore, serum-evoked extracellular Ca 2+ entry was dampened by BTP-2 and Pyr6, two highly specific inhibitors of Orai1. These findings concur with the involvement of InsP 3 Rs and SOCE in the Ca 2+ signal evoked by serum in MDA-MB-231 cells, as also documented in other types of cancer cells [40,53,86], including low-migrating MCF-7 breast cancer cells [87]. As expected from the preliminary characterization with CPA, ATP, and InsP 3 -BM, both phases of serum-evoked intracellular Ca 2+ signals (i.e., ER Ca 2+ mobilization and SOCE) were significantly increased upon exposure to PEVs. Previous work has shown that both InsP 3 Rs and SOCE drive migration in highly migrating MDA-MB-231 breast cancer cells [24,26,32]. Therefore, the potentiation of serum-evoked intracellular Ca 2+ signals could contribute to the enhanced migration rate that we recently reported in MDA-MB-231 cells exposed to PEVs [16]. In agreement with this hypothesis, blocking InsP 3 Rs with 2-APB significantly reduced serum-induced migration, a finding that is entirely consistent with the reported involvement of InsP 3 R1 and InsP 3 R2 in MDA-MB-231 cell migration [26]. Surprisingly, inhibiting SOCE with BTP-2 did not affect motility in PEV-treated MDA-MB-231 cells. Nevertheless, both 2-APB and BTP-2 prevented serum-induced MLC2 and p38 MAPK phosphorylation, which drives cancer cell migration [17][18][19][20]. These data confirm our recent evidence that the long-term exposure to PEVs potentiates p38 MAPK and MLC2 phosphorylation in MDA-MB-231 cells [16] and further show that the recruitment of these signalling pathways requires an increase in [Ca 2+ ] i . Under the same conditions, we failed to detect any effect of PEVs on the activation of the Ca 2+ -sensitive focal adhesion kinase Pyk2, whose role in driving breast cancer metastatic outgrowth was previously documented [16]. The seeming discrepancy between the divergent effects of 2-APB and BTP-2 on p38 MAPK and MLC2 phosphorylation and cell motility could reflect some degree of redundancy between multiple Ca 2+ sources, which can engage the same Ca 2+ -dependent effectors [88][89][90]. However, the phosphorylation cascades triggered by SOCE do not seem to play a major role in MDA-MB-231 cell migration, which is instead finely tuned by the p38 MAPK and MLC2 pathways engaged by InsP 3 Rs. Alternatively, InsP 3 -induced ER Ca 2+ release could recruit additional Ca 2+ -dependent effectors of cell motility that are not coupled to SOCE and remain to be identified.
Future work is needed to address this issue, but InsP 3 -induced ER Ca 2+ release is clearly required to potentiate p38 MAPK and MLC2 phosphorylation and to stimulate migration in PEV-treated MDA-MB-231 cells. Conclusions Herein, we provide the first evidence that long-term exposure to PEVs induces a remarkable alteration in the Ca 2+ transport system in the highly aggressive TNBC cell line, MDA-MB-231, thereby leading to an increase in InsP 3 -induced Ca 2+ release and SOCE amplitude. In particular, the larger Ca 2+ mobilization from the ER is required to potentiate serum-induced migration through the recruitment of p38 MAPK and MLC2. These findings lay the foundation for targeting the Ca 2+ handling machinery to prevent breast cancer cell stimulation not only by pro-oncogenic chemical mediators and physical signals [23,24,29,37,55,56] but also by PEVs.
2022-10-10T15:04:41.314Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "8edf1a1ac0c4e631414e78629f7265213ec36277", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4409/11/19/3120/pdf?version=1664888712", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bc91e65383d18115bddee5bd03be7e12f502d51f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
216338776
pes2o/s2orc
v3-fos-license
Sustainable Development of Rural Tourism in Guangdong Based on Remote Sensing and GIS The economic development and the improvement of people's living quality promote the development of tourism. The acceleration of urbanization has given birth to the rural tourism industry. Guangdong has a relatively high economic level and is rich in tourism resources with distinctive rural characteristics. As a result, rural tourism in Guangdong started earlier and has begun to take shape. However, in the context of new urbanization, there are still a series of problems in the development of rural tourism in Guangdong, which restrict the sustainable development of rural tourism in Guangdong. In order to promote the further development of rural tourism in Guangdong, it is necessary to carry out in-depth research and analysis of its reality. Taking Guangdong rural tourism as an example, on the basis of remote sensing and GIS technology, with the help of such research methods as Gini index, geographic concentration index and geographical connection rate, this paper analyzes the space characteristics and resource types of rural tourism in Guangdong and discovers the problems existing in its sustainable development. Introduction At present, with the development of China's economic level, especially the acceleration of urbanization, the pace of urban life is getting faster and faster and people are also facing great pressure. More and more people are longing for rural life [1]. In this case, rural tourism began to develop gradually. Rural areas have unique cultural and natural resources, and rural landscapes are also unique, which have become important driving factors for the development of rural tourism [2][3]. Especially in recent years, rural tourism has become a tourism hotspot. By taking advantage of the original nature and culture of the countryside, a series of tourism experience activities are carried out based on traditional agriculture, which not only helps to improve the overall development level of rural economy and promote the optimization and upgrading of rural industrial structure, but also accelerates the process of building a beautiful new socialist countryside [4][5]. However, the sustainable development of rural tourism is restricted due to the problems existing in the development mode of rural tourism. Therefore, it is particularly important to conduct in-depth research on the sustainable development of rural tourism [6][7]. With the continuous development of science and technology, many researchers try to conduct in-depth research on rural tourism with the help of advanced technical means, among which remote sensing and GIS technology are prominent [8]. With the deepening of rural tourism, domestic and foreign scholars have also conducted a series of studies on rural tourism. Compared with foreign studies, rural tourism in China started late, so relevant studies are not in-depth enough [9][10]. At present, the research of foreign scholars mainly focuses on the development status, mode and development strategies of rural tourism. Domestic scholars mainly focus on the evaluation of the development model of rural tourism in some regions, the development prospect and the analysis of the development motivation [11][12]. With the deepening of the study, the research on rural tourism began to incorporate methods from such subjects as geography, economics and so on. In general, domestic and foreign scholars mainly focus on the overall development of rural tourism.
There are relatively few studies on the sustainable development of rural tourism. There are even fewer studies on the sustainable development of rural tourism by means of advanced technologies such as remote sensing and GIS [13][14]. From this point of view, there is still much room for further research on rural tourism. In order to improve the rural tourism research theories, taking Guangdong rural tourism as an example, on the basis of remote sensing and GIS technology, with the help of such research methods as Gini index, geographic concentration index and geographical connection rate, this paper analyzes the space characteristics and resource types of rural tourism in Guangdong and discovers the problems existing in its sustainable development [15]. On the one hand, it promotes the sustainable development of rural tourism in China, and on the other hand, it provides a certain theoretical basis for the related research in the future. Data Sources and Tools Following the principles of scientific, reliable data and a typical research object, the data about Guangdong rural tourism in this paper are from the survey report of Guangdong leisure agriculture and rural tourism resources (2018). On the basis of remote sensing and GIS, it explores the rural tourism situation in Guangdong by analyzing tourism data with the help of relevant map analysis software. Data about the population and economy are from the statistical bulletin of the national economy and social development in Guangdong (2018). With the help of such algorithms of economics and geography as Gini index, geographical connection rate and rural tourism development level index, this paper conducts in-depth research on the main types and spatial distributions of rural tourist attractions in Guangdong. It also probes into the relationship among economy, population and spatial distribution of rural tourism in Guangdong and on this basis analyzes the development of rural tourism in Guangdong as well as its main existing problems. The main measuring tools in this paper are remote sensing and GIS. Remote sensing refers to the technology of measuring physical and geometric features of objects by means of non-contact sensors. Currently, it is mainly applied in the fields of resource exploration, planning and decision making, and dynamic monitoring. It mainly consists of two parts, namely, image acquisition and information processing technology, whose main function is data acquisition. The full name of GIS is geographic information system, which is mainly a tool for information management and analysis. In this paper, remote sensing and GIS are used to measure rural tourism in Guangdong, which is beneficial to the acquisition and processing of rural tourism data in Guangdong. Algorithm In order to comprehensively analyze the spatial distribution characteristics and resource types of rural tourism in Guangdong, this paper applies Gini index, geographic concentration index and geographic connection rate to thoroughly investigate rural tourism in Guangdong. Originally a concept in economics, Gini index was later introduced into tourism research to indicate the balance of tourism development in a certain region. In the formula below, G represents the Gini index for rural tourism in Guangdong, Pi represents the proportion of the number of rural tourist attractions in the No. i prefecture-level region in Guangdong, and n is the total number of regions.
The specific formula of Gini index is as follows: G = -(ΣPi × lnPi) / ln(n), summing over i = 1, 2, ..., n. Geographical concentration index is a geographical concept, which is generally used to indicate the degree of aggregation of research objects. In the study of rural tourism, it is usually used to represent the spatial distribution of rural tourist attractions within a region. In the formula, Xi represents the number of scenic spots of a certain type of rural tourism in the No. i prefecture-level city, T represents the total number of such scenic spots, and N represents the number of prefecture-level cities in Guangdong. The specific formula is as follows: G = 100 × √(Σ(Xi/T)²), summing over i = 1, 2, ..., N. Geographical connection rate refers to the spatial coordination between the economy and population of a region. In the formula, V represents the geographical connection rate; xi and yi respectively represent the proportion of economy and population in the No. i region; N is the number of regions. The specific formula is as follows: V = 100 - (1/2) × Σ|xi - yi|, summing over i = 1, 2, ..., N. Analytical Procedure of the Sustainable Development of Rural Tourism in Guangdong First, rural tourism data are measured and acquired. Remote sensing and GIS technology are used to obtain relevant data of rural tourism in Guangdong, which then undergo basic processing. Second, data are constructed and spatially expressed. The data obtained by remote sensing and GIS technology are classified, processed, and analyzed using the above-mentioned research algorithms to construct databases of different data types, mainly including resource types, spatial distribution and internal structure. By analyzing the rural tourism data with the help of the database, the spatial expression of rural tourism resource types and spatial distribution characteristics in Guangdong is realized, and the main developments of rural tourism in Guangdong are captured. Third, data are analyzed and discussed. Based on the analysis of rural tourism data, the present situation of rural tourism in Guangdong is obtained. Thus, the existing problems and causes are extracted, which are analyzed by combining the related resource factors. On this basis, the paper puts forward some suggestions or strategies to promote the sustainable development of Guangdong. Division of Data Types According to the tourism data samples of Guangdong, by means of remote sensing and GIS technology and combining with the characteristics of rural tourism resources, data on the characteristics of rural tourism resource types, time changes and spatial distribution in Guangdong are obtained. The data results are shown in figure 1 and table 1. Discussion on the Classification of the Problems of Rural Tourism Development in Guangdong It can be concluded from the data in table 1 that the different types of rural tourism resources in this region, ranked from largest to smallest in number, are: rural settlements and architectural landscape, rural products, rural natural ecological landscape, rural production landscape, rural folk culture landscape, and rural landscape artistic conception. It indicates that there are many characteristic ancient villages in Guangdong and they are well protected, but the conversion rate of production landscape is relatively low and the awareness of developing folk landscape is insufficient. From figure 1, it can be seen that rural settlements and architectural landscapes are increasing from 2014 to 2017, at a faster rate than the other types, indicating that the market share of this type of rural tourism is gradually increasing.
The growth in the proportion of rural natural ecological landscape is relatively flat, which is caused by people's preference for natural scenery in the Pearl River delta. The number of rural production landscapes is also increasing, owing to their strong sense of experience and emphasis on education. New types of tourism can be developed in line with the development of the tourism market. However, the development of rural landscape in Guangzhou is not in harmony with its rural resources. The ability of rural products to become tourism commodities is weak. The growth rate of rural folk culture is slow, which is directly affected by rural cultural resources, local folk customs, regional cultural characteristics and other factors. The artistic conception of rural landscape started late, and the investment in this landscape is relatively small. It can be seen from table 2 that the developments of rural tourism in different regions of Guangdong are not coordinated, which is mainly a direct effect of the level of economic development in each region. Through the comparison of the economic development index and the rural tourism development index in table 2, it is found that the rural tourism development index is directly proportional to the economic development index, indicating the direct influence of the economy on rural tourism. The south-central region is the Pearl River delta, where the economy develops rapidly. Therefore, rural tourism in this region develops the most rapidly, with the tourism development index as high as 90%. Economic developments in other regions lag far behind and their tourism development indexes are also relatively lower than that of the Pearl River delta. Conclusion To sum up, sightseeing tourism accounts for a large proportion of rural tourism in Guangdong. Tourism characteristics and products are not prominent. Tourism commodity development needs to be improved and regional developments are not coordinated. In the future, the development of rural tourism in Guangdong should establish a clear understanding of the regional development, strengthen the exploration of rural culture, protect and develop rural tourism heritage, and increase the proportion of rural culture and rural landscape in rural tourism. It should also fully coordinate regional developments, with better-developed areas driving less-developed areas, where investment in rural tourism development should be strengthened. The radiation effect of rural tourism in the Pearl River delta should be brought into full play, and the development of rural tourism all around Guangdong will be gradually realized.
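As a closing illustration, the following is a minimal Python sketch of the three indices used in this study, following the formulas given in the Algorithm section. The entropy-based form of the Gini index is an assumption inferred from the variable definitions given there, and the regional counts and shares below are hypothetical, not the study's data.

```python
import numpy as np

def gini_index(counts):
    """Gini index across n regions: G = -sum(Pi * ln Pi) / ln n,
    where Pi is region i's share of attractions (entropy-based form)."""
    p = np.asarray(counts, float) / np.sum(counts)
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(len(counts))

def concentration_index(counts):
    """Geographic concentration index: 100 * sqrt(sum((Xi/T)^2))."""
    x = np.asarray(counts, float)
    return 100.0 * np.sqrt(np.sum((x / x.sum()) ** 2))

def connection_rate(econ_pct, pop_pct):
    """Geographic connection rate: V = 100 - 0.5 * sum(|xi - yi|),
    with economic and population shares expressed in percent."""
    x, y = np.asarray(econ_pct, float), np.asarray(pop_pct, float)
    return 100.0 - 0.5 * np.sum(np.abs(x - y))

# Hypothetical counts of one attraction type across four prefecture-level
# cities, plus hypothetical economic/population shares per region:
print(gini_index([12, 30, 7, 51]), concentration_index([12, 30, 7, 51]))
print(connection_rate([40, 25, 20, 15], [30, 30, 25, 15]))
```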
2020-04-02T09:33:23.433Z
2020-03-24T00:00:00.000
{ "year": 2020, "sha1": "bcdc753ec89f4bcadeab420bad98d5dcc211b7c1", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/750/1/012146", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "284d48d0da149dbd31c7b56eabc03569e473bf24", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Business" ] }
268555952
pes2o/s2orc
v3-fos-license
Beyond Performance: Quantifying and Mitigating Label Bias in LLMs Large language models (LLMs) have shown remarkable adaptability to diverse tasks, by leveraging context prompts containing instructions, or minimal input-output examples. However, recent work revealed they also exhibit *label bias*—an undesirable preference toward predicting certain answers over others. Still, detecting and measuring this bias reliably and at scale has remained relatively unexplored. In this study, we evaluate different approaches to quantifying label bias in a model's predictions, conducting a comprehensive investigation across 279 classification tasks and ten LLMs. Our investigation reveals substantial label bias in models both before and after debiasing attempts, as well as highlights the importance of outcomes-based evaluation metrics, which were not previously used in this regard. We further propose a novel label bias calibration method tailored for few-shot prompting, which outperforms recent calibration approaches for both improving performance and mitigating label bias. Our results emphasize that label bias in the predictions of LLMs remains a barrier to their reliability. Introduction Large language models (LLMs) have demonstrated impressive abilities in adapting to new tasks when conditioned on a context prompt, containing task-solving instructions (Wei et al., 2022) or few examples of input-output pairs (Brown et al., 2020). Still, recent work has shown that predictions of LLMs exhibit label bias-a strong, undesirable preference towards predicting certain answers over others (Zhao et al., 2021; Chen et al., 2022; Fei et al., 2023, see Fig. 1). Such preferences were shown to be affected by the choice and order of in-context demonstrations (Liu et al., 2022; Lu et al., 2022), the model's pretraining data (Dong et al., 2022), or textual features of the task data (Fei et al., 2023). Consequently, several approaches were proposed to address this problem, mostly by calibrating the model's output probabilities to compensate for this bias (Zhao et al., 2021; Fei et al., 2023). Despite these efforts, label bias evaluation relies on performance metrics such as accuracy, rather than metrics designed to directly quantify the bias. In doing so, we might inadvertently overlook crucial aspects of model behavior. Indeed, although a given method could effectively improve performance, substantial bias might still persist in the model's predictions-deeming the method insufficient and the model unreliable. Alternatively, performance could remain relatively unchanged, but with the bias mostly removed. In this work, we take a step towards a more comprehensive understanding of the extent of label bias in LLMs and the effects of mitigation approaches. Using metrics to directly measure the label bias in model predictions, which we derive from previous work on fairness and label bias estimation, we evaluate ten LLMs on 279 diverse classification and multiple-choice tasks from SUPER-NATURALINSTRUCTIONS (Wang et al., 2022). We examine both performance and bias along axes such as scale and number of in-context demonstrations. We also evaluate the impact of label bias mitigation methods, such as calibration and few-shot LoRA fine-tuning (Hu et al., 2022).
Our investigation reveals substantial label bias in the predictions of LLMs across all evaluated settings, indicating that raw LLM output scores often represent simple, heuristic solutions. While increasing model size, providing in-context demonstrations, and instruction-tuning all contribute to reducing bias, ample bias persists, even after applying mitigation methods. Surprisingly, these results also hold for tasks where the labels are all semantically equivalent (e.g., in multi-choice question answering). Further, although the examined calibration methods can reduce bias and improve performance, we also find cases where they negatively impact both bias and overall performance. Motivated by these findings, we propose a novel calibration method for few-shot prompting that more accurately estimates a model's label bias, using only its predictions on the in-context demonstrations. Compared to existing LLM bias calibration methods, our method improves performance while also removing considerably more bias. Our findings highlight the necessity of considering and measuring biases in the predictions of LLMs when evaluating their performance. Moreover, adjusting models to their tasks through more accurate and effective estimation of biases holds promise for improving the reliability of LLMs and their applications. LLM Label Bias Our objective is to broaden the understanding of label bias in LLMs and the effectiveness of mitigation strategies, focusing on classification tasks. In this section, we define metrics designed to quantify bias in model predictions, providing a nuanced examination of label bias that extends beyond traditional performance metrics. We describe the setting of label bias in in-context learning (§2.1), briefly outline methods to mitigate it (§2.2), and finally review approaches to evaluate label bias as well as define the metrics we use in this work (§2.3). Label Bias When employing LLMs for classification tasks through prompting, the model is given a test example x, preceded by a context C. This context can contain a (potentially empty) set of examples of the task's input-output mapping [(x1, y1), ..., (xk, yk)], henceforth demonstrations, and may also include task instructions. To determine the model's prediction from a set of answer choices Y, the likelihood it assigns to each continuation y ∈ Y is computed, and the highest probability option is taken as the model prediction: ŷ = argmax over y ∈ Y of P(y | C, x). These output probabilities often exhibit label bias, where the model tends to assign higher probability to certain answers regardless of the input test example x (Fig. 1). Multiple factors were posited to influence this bias, including the choice of verbalizers Y, the choice and order of in-context examples in C, and the overall textual features of task input x (Zhao et al., 2021; Fei et al., 2023). Bias Mitigation The predominant approach to alleviate label bias is to calibrate the model's output probabilities post-hoc, for a specific context prompt C. Such methods typically first estimate the model's label bias using its output probabilities on a set of inputs, which can be content-free (e.g., "N/A" or random words from the task's domain; Zhao et al. 2021; Fei et al. 2023) or ordinary task inputs (Han et al., 2023). Next, calibration parameters are chosen based on this estimate, and used to adjust the original output probabilities during inference to generate the (hopefully unbiased) output.
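A minimal sketch of this scoring rule follows, assuming a generic `label_logprob(prompt, y)` function that returns a language model's log-likelihood of continuation y; this function is a hypothetical stand-in for an actual LM call, not any specific API.

```python
import numpy as np

def predict(label_logprob, context, x, labels):
    """Score each answer choice given the prompt (context C followed by
    test input x) and return the argmax label plus the normalized
    output probabilities over the label set."""
    logps = np.array([label_logprob(context + x, y) for y in labels])
    probs = np.exp(logps - logps.max())
    probs /= probs.sum()
    return labels[int(np.argmax(probs))], probs
```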
Evaluation Measures Most LLM label bias analysis relies on indirect assessments. For instance, some work inspected improvements in overall performance gained after applying techniques to mitigate it (Fei et al., 2023; Holtzman et al., 2021; Zhao et al., 2021). However, these do not indicate the extent of bias originally present, or that which remains after mitigation. We next examine approaches to measure this bias more directly, and define the metrics we use in this work. Importantly, we focus on label bias measures that could be used effectively both before and after applying mitigation techniques such as calibration. Drawing from previous research on fairness and bias in machine learning, we observe that there are two distinct yet related aspects in which label bias can be measured in LLM predictions: through the probabilities assigned by the model to different answers, e.g., assigning the label "yes" with an average output probability of 0.55, while "no" with 0.45; and through the model's predictions for different labels, e.g., achieving a recall of 0.50 for instances labeled "yes", compared to 0.40 on "no" (Mehrabi et al., 2021). Below we describe methods to measure each of these notions of bias. Probabilistic approach Previous work used qualitative assessments to visualize model output distributions on selected datasets (Zhao et al., 2021; Han et al., 2023). However, these cannot be used to rigorously evaluate models at larger scales. Recently, Fei et al. (2023) proposed to measure a model's label bias by considering two sets of inputs: a set of synthetic, content-free task inputs Xcf, and inputs consisting of random vocabulary words Xrand. For each input, they compute the output probabilities on every label y ∈ Y, and finally compute the model's mean predicted probabilities across both sets, p̄cf and p̄rand. The model's bias is then defined to be the total variation distance dTV between the two distributions: Bias = dTV(p̄cf, p̄rand), where dTV(p, q) = (1/2) Σ over y ∈ Y of |p(y) - q(y)|. Importantly, since Fei et al. (2023) also use the model's predictions on the content-free inputs Xcf to calibrate it, this metric cannot be used to quantify the label bias remaining after calibration. In this work, we simplify the computation of this metric and adapt it to be used after calibration. First, we hold out a set of inputs to be used exclusively for measuring bias. Second, when estimating the model's average output probabilities, instead of using synthetic inputs, we use in-distribution examples held-out from the test set, Xi.d. = ((x1, y1), ..., (xm, ym)), and compare the resulting mean output distribution p̄i.d. to the uniform distribution u, which recent mitigation approaches considered as the "ideal" and unbiased mean output distribution (Zhao et al., 2021). Finally, we define the model's bias score as the total variation distance between these two distributions: BiasScore = dTV(p̄i.d., u). Outcomes-based approach When considering the effects of label bias on model predictions, strong label bias will likely result in disparities in task performance on instances of different classes. However, metrics to assess such disparities were not used in previous analyses of label bias. We propose to use the Relative Standard Deviation of class-wise accuracy (RSD; Croce et al. 2021; Benz et al. 2021), a metric used for studying fairness in classification. RSD is defined as the standard deviation of the model's class-wise accuracy (acc1, ..., acc|Y|), divided by its mean accuracy acc on the entire evaluation data: RSD = std(acc1, ..., acc|Y|) / acc. Intuitively, RSD is low when model performance is similar on all classes, and high when it performs well on some classes but poorly on others.
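A small sketch of both metrics, following the definitions above; note that the uniform reference distribution in the bias score follows the reconstruction of the garbled equation given here, and the inputs are assumed to be plain arrays of per-example predictions and gold labels.

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two label distributions."""
    return 0.5 * float(np.sum(np.abs(np.asarray(p) - np.asarray(q))))

def bias_score(mean_probs):
    """d_TV between the mean output distribution on held-out
    in-distribution examples and the uniform distribution over Y."""
    u = np.full(len(mean_probs), 1.0 / len(mean_probs))
    return tv_distance(mean_probs, u)

def rsd(preds, golds, labels):
    """Standard deviation of the per-class accuracies divided by the
    overall accuracy on the entire evaluation data."""
    preds, golds = np.asarray(preds), np.asarray(golds)
    class_accs = [np.mean(preds[golds == y] == y) for y in labels]
    return float(np.std(class_accs) / np.mean(preds == golds))
```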
Discussion. We note that each evaluation approach could detect biases that the other does not. For example, a slight bias in the model's average output probabilities (e.g., 55% vs. 45%) could render dramatic bias in actual outcomes if the model always assigns higher probability to some label. Conversely, when the output probabilities are biased on average but the model's class-wise performance is balanced, this hidden bias could result in actual performance disparities on more difficult instances. We therefore suggest reporting both measures.

3 Experimental Setting

3.1 Datasets

We evaluate models on 279 diverse tasks from the SUPER-NATURALINSTRUCTIONS benchmark (Wang et al., 2022). We select all available classification and multi-choice question answering tasks where the output space is a set of predefined labels, such as "yes/no" or "A/B/C". We sample 1,000 evaluation examples for all tasks with larger data sizes, and additionally sample 32 held-out examples for computing the bias score metric (§2.3), and 64 more examples to use as a pool of instances for choosing in-context demonstrations and LoRA fine-tuning examples. We only include tasks with at least 300 evaluation examples in our experiments. For details on the selected tasks, see App. B.

3.2 Models and Evaluation Setup

We experiment with models of different sizes from three LLM families: Llama-2 7B and 13B (Touvron et al., 2023), Mistral 7B (Jiang et al., 2023a), and Falcon 7B and 40B (Penedo et al., 2023). We use both the base and instruction fine-tuned versions of each model. We evaluate models using context prompts with $k \in \{0, 2, 4, 8, 16\}$ demonstrations, and average the results across 3 different sets of demonstrations for each $k$. To control the evaluation budget, we run the more expensive Falcon 40B experiments with $k \in \{0, 8, 16\}$, averaged across 2 sets of demonstrations. We use the task instructions and prompt template defined in SUPER-NATURALINSTRUCTIONS. For tasks where the answer choices $y \in Y$ have unequal token lengths, we use length-normalized log-likelihood to compute the output probabilities (Holtzman et al., 2021). For additional implementation details, see App. A.
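To illustrate the scoring rule, here is a small sketch of length-normalized prediction; `label_token_logprobs` is a hypothetical stand-in for the model call that returns the per-token log-probabilities of an answer continuation given the prompt.

```python
import numpy as np

def predict(label_token_logprobs, prompt, answers):
    """Pick the answer with the highest length-normalized log-likelihood.
    answers: list of token-id sequences, one per answer choice."""
    scores = [np.sum(label_token_logprobs(prompt, ids)) / len(ids)
              for ids in answers]
    return int(np.argmax(scores))
```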
Data contamination. During their instruction tuning, Llama-2 chat models were initially fine-tuned on the Flan data collection (Chung et al., 2022; Longpre et al., 2023). As roughly 20% of Flan consists of examples from SUPER-NATURALINSTRUCTIONS, our evaluation of Llama-2 instruction-tuned models is likely affected by data contamination (Magar and Schwartz, 2022). Still, our results show that both the 7B and 13B chat models exhibit extensive label bias, possibly due to later fine-tuning on other data. As it is unclear from the implementation details of Touvron et al. (2023) which exact instances of SUPER-NATURALINSTRUCTIONS were included in training, we do not take extra steps in an attempt to reduce possible overlap and contamination.

3.3 Bias Mitigation Techniques

We evaluate the effects of three label bias mitigation methods: two calibration methods designed to correct a model's label bias by adjusting its output scores, and few-shot LoRA fine-tuning (Hu et al., 2022), which adapts the model to the task and its label distribution. We describe the methods below.

Contextual calibration (CC). Zhao et al. (2021) proposed to use calibration to remove the label bias arising from the context prompt $C$ and the model's pretraining. Inspired by confidence calibration methods (Guo et al., 2017), they define a matrix $W$ that is applied to the model's original output probabilities $p$ during inference to obtain calibrated, debiased probabilities $q = \mathrm{softmax}(Wp)$. To determine the calibration parameters $W$, they first estimate the bias by computing the model's average predicted probabilities $\bar{p}$ on a small set of "placeholder" content-free input strings, such as "N/A", which replace the ordinary task input that follows $C$. Finally, they set $W = \mathrm{diag}(\bar{p})^{-1}$, which ensures that the output class probabilities for the average content-free input are uniform, aiming to reduce bias on unseen examples.

Domain-context calibration (DC). Following CC, Fei et al. (2023) proposed to estimate and mitigate the label bias arising from the textual distribution of the task's domain by using task-specific content-free inputs to compute $\bar{p}$. They construct such inputs by sampling and concatenating $L$ random words from the test set, where $L$ is the average instance input length in the data. They repeat this process 20 times, and set $\bar{p}$ to be the average output probabilities over all examples. Given a test example with original output probabilities $p$, they then use the calibrated probabilities $q = \mathrm{softmax}(p / \bar{p})$.

Few-shot fine-tuning. Finally, we experiment with few-shot, parameter-efficient fine-tuning for adapting LLMs to a given task's label distribution, thus potentially mitigating label bias. We fine-tune task-specific models for each context prompt using Low-Rank Adaptation (LoRA; Hu et al., 2022), training adapters on 16 held-out training examples for 5 epochs. Importantly, we use the same context $C$ during both fine-tuning and evaluation. Due to computational constraints, we only run LoRA on Llama-2 7B and Mistral 7B, only consider values of $k \in \{0, 8, 16\}$, and average across two sets of demonstrations. See App. A for more details.
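A compact sketch of the two calibration rules above, assuming the mean output probabilities $\bar{p}$ have already been estimated on the corresponding inputs (content-free strings for CC, random-word strings for DC):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def calibrate(p, p_bar):
    """Apply q = softmax(diag(p_bar)^-1 p) = softmax(p / p_bar).
    CC and DC share this element-wise correction; they differ only in how
    p_bar is estimated (content-free strings for CC, 20 random-word
    pseudo-inputs for DC)."""
    return softmax(p / p_bar)
```

Note that because $W$ is diagonal, the CC rule reduces to the same element-wise division as DC; the two methods differ only in the inputs used to estimate $\bar{p}$.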
4 Quantifying Label Bias in LLMs

4.1 LLMs are Label-Biased

We begin by examining the performance and label bias of models with and without instruction-tuning. We report averaged results across all tasks for Llama-2 models in Fig. 2; results for other models show similar trends (see App. C.1). We first verify that, as expected, model performance (Fig. 2a) substantially improves with scale, with instruction tuning, and with the number of demonstrations. We then consider the two bias metrics, RSD (Fig. 2b) and BiasScore (Fig. 2c). We observe that label bias is substantial across most evaluated settings: when prompted with two or no demonstrations, all models obtain high RSD values of 0.6 or more, with base models obtaining even higher values around 0.9. This implies a widespread disparity in model performance across classes in many of the evaluated tasks, and indicates that for most tasks, models primarily succeed on instances of certain classes while consistently failing on others. Increasing the number of demonstrations to 8 helps reduce the bias, but RSD remains substantial at around 0.4, and adding further demonstrations results in little to no improvement.

Similarly, we find that BiasScore improves considerably when using sufficient demonstrations, from values as high as 0.25 with no demonstrations to around 0.05 for the best model and setting. High BiasScore values indicate the model is uncalibrated and tends to make overly confident predictions on certain labels regardless of the input. Although BiasScore can be relatively small for some models, indicating their average output distribution is close to uniform, when observed together with high RSD it implies that the model subtly but persistently assigns more probability mass to the preferred labels, resulting in substantial bias.

4.2 Differences between the Bias Measures

We further observe that, interestingly, the two bias metrics show divergent trends. Although RSD values, much like model performance, sharply improve after instruction-tuning, the resulting models' BiasScore is often higher than that of their base counterparts. Similarly, while RSD improves with scaling, the BiasScore of smaller models is lower.

We note that higher performance together with lower RSD means that the model's performance has improved across most classes. In contrast, a higher BiasScore indicates that its average predicted probabilities grew farther from uniform. Taken together, this implies that the scaled-up and instruction-tuned models are making more confident predictions on some classes, but not on others. This could mean more confident correct predictions on the preferred classes, or more confidently wrong predictions on others (or both). Altogether, this suggests more subtle forms of bias persist after instruction-tuning or scaling up (Tal et al., 2022).

Overall, we find the two metrics to be complementary due to their measurement of different aspects of label bias. We hence use both in further experiments to provide a more comprehensive understanding of label bias in model predictions.

4.3 Label Bias Persists after Mitigation

We have seen that LLMs demonstrate extensive label bias across different models, scales, and tasks (§4.1). We next examine techniques aimed at mitigating such bias, and assess the extent of label bias remaining after their application. We report our results for Llama-2 models in Fig. 3, and observe similar trends for other models (App. C.2).

We first consider the effect of bias mitigation on model performance (Fig. 3a) using the three methods described in §3.3: contextual calibration (CC), domain-context calibration (DC), and few-shot fine-tuning with LoRA. Compared to standard prompting (black lines), we find that applying CC (orange) provides little to no gains. Moreover, it can even undermine model performance, especially for instruction-tuned models, as previously observed by Fei et al. (2023). In contrast, DC (purple) can provide substantial performance gains, especially when using few or no in-context demonstrations, where baseline performance is relatively low. However, when calibrating instruction-tuned models prompted with a higher number of demonstrations, we find that DC mostly fails to improve performance. Finally, LoRA considerably improves performance in all cases (green in Fig. 3, upper row), vastly outperforming both CC and DC.
We next turn to measuring label bias (Fig. 3b and 3c). Notably, here we observe that for the two calibration methods, changes in both RSD and BiasScore are correlated with changes in performance. We find that CC substantially worsens label bias in instruction-tuned models, and can also increase bias for base models. Conversely, while DC alleviates bias in many of the evaluated settings, it is largely unsuccessful in mitigating it when prompting instruction-tuned models with 8 or more demonstrations. LoRA proves effective for improving RSD in all settings, but RSD values still remain relatively high. In contrast, BiasScore noticeably increases after LoRA fine-tuning, indicating that more subtle bias persists.

Overall, our results indicate that existing bias calibration approaches are insufficient for diminishing label bias in essential cases, particularly for instruction-tuned models. Further, while LoRA fine-tuning is effective in both improving performance and mitigating certain aspects of bias (though not others), it is also considerably more computationally expensive than calibration.

5 Mitigating Label Bias by Calibrating on Demonstrations

Motivated by the challenges of existing calibration approaches on instruction-tuned models (§4.3), we aim to develop an effective calibration method for such scenarios. We hypothesize that a possible cause for these difficulties is that the inputs used for calibration in CC and DC are very distinct from the more curated, high-quality inputs models observe during instruction-tuning (Touvron et al., 2023). Seeking to use more naturally occurring inputs, and to avoid any reliance on additional held-out examples, we propose to calibrate models using the in-context examples readily available in few-shot prompts. We therefore need to obtain the model's output probabilities on these inputs to estimate its bias. However, as these examples appear alongside their labels in the context provided to the model, it could simply copy the correct answer from the prompt, leading to unreliable bias estimates. We introduce a simple method to alleviate this concern.

Leave-One-Out Calibration (LOOC). Our goal is to estimate the model's average output probabilities $\bar{p}$ at test time by using the $k$ demonstrations $[(x_1, y_1), \ldots, (x_k, y_k)]$ provided in the context $C$, and then use it for calibration. Drawing from leave-one-out cross-validation, when evaluating the model on the $i$-th demonstration's input $x_i$, we prompt it with an edited context $C_{-i}$, comprised of the original context $C$ after removing the current demonstration $(x_i, y_i)$, resulting in $k - 1$ demonstrations. We thus obtain $k$ output probabilities:

$$p_i = P(\cdot \mid C_{-i}, x_i), \quad i = 1, \ldots, k$$

To reliably estimate $\bar{p}$, we further need to account for the demonstrations' labels $y_i$: for imbalanced choices of demonstrations (e.g., class imbalance), using the plain average of the $p_i$'s could lead to an underestimation of the probability assigned to infrequent labels. We therefore compute the average output probabilities $\bar{p}$ by taking the labels $y_i$ into account, as we do for computing BiasScore (§2.3): we first average the $p_i$'s associated with the same label $\ell$, $D_\ell = \{p_i \mid y_i = \ell\}$, and then set $\bar{p}$ as the mean of these intra-label averages:

$$\bar{p} = \frac{1}{|Y|} \sum_{\ell \in Y} \frac{1}{|D_\ell|} \sum_{p_i \in D_\ell} p_i$$

Finally, we use $\bar{p}$ to compute calibration parameters and score new examples using the same methodology as Zhao et al. (2021) (§3.3). We refer to our method as Leave-One-Out Calibration (LOOC).
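The following sketch outlines LOOC's bias estimate; `model_probs` is a placeholder for the actual model scoring call.

```python
import numpy as np

def looc_p_bar(model_probs, demos):
    """Estimate the mean output distribution from the k in-context
    demonstrations, leave-one-out style.
    model_probs(context, x) -> (|Y|,) output probabilities.
    demos: list of (x_i, y_i) pairs forming the context C."""
    outs = []
    for i, (x_i, y_i) in enumerate(demos):
        context = demos[:i] + demos[i + 1:]      # C_{-i}: drop demo i
        outs.append((y_i, model_probs(context, x_i)))
    labels = sorted({y for y, _ in outs})
    # average within each demonstration label, then across labels
    intra = [np.mean([p for y, p in outs if y == l], axis=0) for l in labels]
    return np.mean(intra, axis=0)
```

The estimate $\bar{p}$ is then plugged into the same correction used by CC, $q = \mathrm{softmax}(\mathrm{diag}(\bar{p})^{-1} p)$.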
Results. We use LOOC to calibrate models in the same setup as §4.3. We report our results for Llama-2 models in Fig. 3 (cyan lines), finding similar trends in other models (App. C.2). Comparing our method to other calibration approaches, we find LOOC surpasses CC and DC by a wide margin in both performance and bias metrics for prompts with $k = 8, 16$ demonstrations. Importantly, using LOOC to calibrate instruction-tuned models in this setting dramatically improves upon the uncalibrated model, whereas other calibration methods fail to achieve meaningful gains (§4.3). Further, LOOC nearly closes the gap with LoRA-level performance while improving upon it in both bias metrics, yet uses substantially less compute.

As LOOC relies on the in-context demonstrations for bias estimation, $k$ needs to be sufficiently large for calibration to succeed. Surprisingly, we find that with as few as $k = 4$ demonstrations, our method is often comparable to the next best calibration method on all metrics. Finally, we note that while our method can substantially reduce label bias compared to other approaches, the remaining RSD is still considerable, indicating that model performance is still biased on some tasks.

6 Analysis

We study the effect of different factors on the extent of label bias in model predictions: the semantic meaning of the task labels (§6.1), the level of label imbalance in the demonstrations (§6.2), and the choice of demonstrations (§6.3).

6.1 Semantically Equivalent Labels

The output space for classification tasks often consists of labels with strong semantic meaning (e.g., "Positive" vs. "Negative"). Recent work has indicated that, when faced with such labels, models are affected by semantic priors from their pretraining or instruction-tuning (Wei et al., 2023; Min et al., 2022b) that could affect label bias (Fei et al., 2023).

We examine whether models exhibit lower label bias when the task's labels are semantically equivalent and interchangeable. We extract all multi-choice QA tasks, with label spaces such as "A/B/C/D" or "1/2/3", and all sentence completion tasks, where models choose a logical continuation for an input text between two options, usually labeled A and B. This results in 18 tasks with semantically equivalent labels.

We compare label bias on this subset of tasks and on the entire evaluation suite for Llama-2 models in Fig. 4, with results for other models largely following similar trends (App. C.3). We find that, in most cases, models demonstrate lower label bias on tasks with semantically equivalent labels. This is especially evident in settings with few or no demonstrations, where models are typically strongly biased (§4.1). Still, RSD levels for such tasks remain relatively high across all evaluated settings. Further, we observe that instruction-tuned models prompted with 8 or more demonstrations are often more biased on this subset of tasks. In summary, although using semantically equivalent labels may potentially mitigate bias in scenarios with limited demonstrations, LLMs still exhibit substantial label bias when faced with such labels.
6.2 Imbalanced In-context Demonstrations

Label imbalance in the in-context demonstration set was previously shown to amplify label bias (Zhao et al., 2021) as well as to decrease model performance (Min et al., 2022a), but such results were derived on a restricted set of tasks. We use our evaluation suite to investigate the observed label bias and performance of models when varying the level of imbalance in the demonstrations. To establish a consistent definition of label imbalance across different tasks, we use the subset of binary classification tasks ($N = 197$) with $k = 8$ demonstrations. Given a task with labels $L = \{\ell_A, \ell_B\}$ and a context $C$, we define $p_\uparrow$ as the proportion of the most frequent label in the demonstrations of $C$, such that $p_\uparrow$ attains values in $\{0.5, 0.625, 0.75, 0.875, 1.0\}$. Specifically, $p_\uparrow = 0.5$ means the labels are perfectly balanced, and $p_\uparrow = 1.0$ means the demonstrations only include examples for one of the labels.

For every task, we prompt Llama-2 (7B/13B) and Mistral (7B) models using 10 different sets of demonstrations, with 2 sets for each value of $p_\uparrow$: one where $\ell_A$ is the most frequent label in $C$ and another where $\ell_B$ is the most frequent, as well as two different balanced sets ($p_\uparrow = 0.5$). We group measurements taken across different tasks and demonstration sets by their level of label imbalance $p_\uparrow$, and inspect the average results per level.

We report our results in Fig. 5. Examining the two bias metrics, RSD (Fig. 5a) and BiasScore (Fig. 5b), we observe that both pretrained and instruction-tuned models are resistant to label imbalance: increased imbalance does not result in notable gains in bias, unless the imbalance is very extreme, specifically when the demonstrations include only a single or no demonstrations for one of the labels ($p_\uparrow > 0.75$). Interestingly, model performance follows the same trends (Fig. 5c). Overall, our results indicate that for most tasks, the impact of label imbalance in the demonstration set is minimal, except in cases of severe imbalance.

6.3 Choice of Demonstrations

The performance of LLMs in in-context learning was shown to be sensitive to the exact choice of demonstrations used to prompt the model (Liu et al., 2022; Chang and Jia, 2023). We examine whether such choices also impact the extent of label bias in model predictions. We assess the performance and bias of Llama-2 (7B/13B) and Mistral (7B) models across 5 different sets of $k = 8$ demonstrations for each task in our evaluation suite. In addition to reporting the mean and standard deviation of each metric, we use several oracle methods to aggregate and choose a specific demonstration set per task when computing the overall cross-task performance and bias metrics. Specifically, we select the demonstration sets that attain the following, per task: best performance; worst performance; median performance; least bias; and most bias.
We report our results for the Llama-2 7B base model in Tab. 1, with other models showing similar trends (App. C.4). We find that label bias, similarly to model performance, is highly sensitive to the choice of demonstrations, as indicated by the high variance across sets. Interestingly, the set of demonstrations that attains the worst performance also leads to strong bias, and vice versa. In fact, we find that performance and bias are anti-correlated, with a strong Pearson correlation for RSD ($r = -0.74$) and a moderate one for BiasScore ($r = -0.30$), indicating that when LLMs underperform in classification, it is often due to prompts that exacerbate bias. We leave further research into demonstrations that lead to biased and unbiased predictions to future work.

7 Related Work

Biases in LLM predictions. Recent work has revealed various biases in the predictions of LLMs. Wang et al. (2023a) showed that models exhibit positional bias when presented with several texts for evaluation and ranking. Pezeshkpour and Hruschka (2023) and Zheng et al. (2024) exposed a similar bias in multi-choice QA. Si et al. (2023) studied inductive biases in in-context learning. Complementary to these works, we study label bias and seek to improve its evaluation and mitigation.

Calibrating label bias in LLMs. Recent work introduced calibration methods to mitigate label bias in LLMs (Zhao et al., 2021; Fei et al., 2023). Han et al. (2023) proposed to fit a Gaussian mixture to the model's output probabilities and use it for calibration, but their approach requires hundreds of labeled examples. Concurrently to our work, Jiang et al. (2023b) proposed to generate inputs for calibration by conditioning models on the context prompt, and Zhou et al. (2023) calibrate models using model output probabilities on the entire test set. While the motivation for both methods is similar to ours, our approach does not require access to the test set, or any compute to obtain inputs for calibration. Importantly, unlike previous work on bias calibration, our main focus is the evaluation of label bias in LLMs.

8 Conclusion

The label bias of LLMs severely hinders their reliability. We considered different approaches for quantifying this bias. Through extensive experiments with ten LLMs across 279 classification tasks, we found that substantial amounts of label bias exist in LLMs. Moreover, we showed this bias persists as LLMs increase in scale, are instruction-tuned, and are provided in-context examples, and even when they are calibrated against such bias. We proposed a novel calibration method, which outperforms existing calibration approaches and reduces label bias dramatically. Our results highlight the need to better estimate and mitigate LLM biases.

Limitations

Model sizes. Although we experiment with models of several sizes, the models we use are all in the 7B-40B range. We chose not to include relatively small models, as these often exhibit poor performance in prompt-based settings. While recent efforts have released better and more efficient models, we leave those for future work. We chose not to experiment with very large LLMs such as Llama 70B due to limitations in computational resources, and as many of them (e.g., GPT-4) are closed (Rogers et al., 2023). Therefore, the extent to which our findings apply to such models is unclear.

Prompt format. Our evaluations are performed on a large and diverse set of tasks extracted from SUPER-NATURALINSTRUCTIONS.
Still, all tasks contain similar prefixes before introducing instructions, demonstrations, and task inputs. Furthermore, each task has only one human-written instruction. We leave experimentation with more varied formats and examination of bias across different phrasings to future work.

Evaluating multilingual tasks. To build our evaluation suite, we extracted tasks from SUPER-NATURALINSTRUCTIONS, focusing only on English tasks. We leave analysis of label bias for multilingual tasks to future work.

A Experimental Setting

Our implementation and pretrained model checkpoints use the Huggingface Transformers library (Wolf et al., 2020). Our code for model evaluation on SUPER-NATURALINSTRUCTIONS is based on the code from Wang et al. (2023b).

Inference. When running inference, we load all models using bf16, except for Falcon-40B, which we load using 8-bit inference. We evaluate models using a maximum sequence length of 1024. When incorporating in-context demonstrations into the prompt, the demonstrations are added one by one until the maximal sequence length is reached, while ensuring enough space remains for the input of the evaluated example. Any remaining demonstrations exceeding this length are excluded from the prompt. Consequently, when evaluating tasks with $k$ demonstrations, the contexts for tasks with very long inputs may contain fewer than $k$ demonstrations. In our experiment detailed in §6.2, which investigates the impact of label imbalance in the demonstration set on label bias, we use a sequence length of 2048 and only analyze results for tasks where the prompt contains precisely $k$ demonstrations, excluding other instances from our reported findings.

Compute. We run all experiments on Quadro RTX 6000 (24GB) and RTX A6000 (48GB) GPUs, except for Falcon-40B experiments, which we run on A100 GPUs. Average inference run-time on our entire evaluation suite is 18 hours for 7B models, 24 hours for 13B models, and 24 hours for 40B models. Running LoRA fine-tuning along with inference for 7B models takes 26 hours. Computing calibration parameters, including running inference on inputs required for calibration, takes around 30 minutes to 2 hours for each model, depending on the method used.

LoRA hyperparameters. We use all of the hyperparameters used by Dettmers et al. (2023) when fine-tuning on SUPER-NATURALINSTRUCTIONS, except for using bf16 training instead of 8-bit, a warm-up rate of 0.0, and 5 training epochs. Specifically, we use a learning rate of 0.002, LoRA $r = 64$, and LoRA $\alpha = 16$.

B Evaluation Suite

We evaluate models on a subset of 279 tasks from the SUPER-NATURALINSTRUCTIONS benchmark (Wang et al., 2022). We use up to 1000 evaluation examples for each task. Altogether, our evaluation set consists of 264,176 examples. We detail the categories of the selected tasks along with the number of tasks corresponding to each category in Tab. 2. We also report the distribution of the number of labels across tasks in Tab. 3, as well as the 20 most frequent labels in Tab. 4.

C.1 Performance and Label Bias

We provide additional results for the performance and label bias of models (§4.1) for Mistral (Fig. 8) and Falcon (Fig. 9) models.

C.2 Bias Mitigation Methods

We present additional results for the impact of bias mitigation methods (§4.3) for Mistral (Fig. 10) and Falcon (Fig. 11) models.

C.3 Semantically Equivalent Labels

We present additional results for the analysis of label bias for tasks with semantically equivalent labels (§6.1) for Mistral (Fig. 6) and Falcon (Fig. 7) models.
Figure 1: LLMs exhibit label bias, a tendency to output a given label regardless of the context and input (in this example, 'yes' over 'no'). In this work we evaluate LLM label bias across ten LLMs and 279 classification tasks, showing label bias is a major problem in LLMs.

Figure 2: Performance (higher is better) and label bias metrics (lower is better) for Llama-2 pretrained and instruction-tuned models (7B/13B). Both performance and RSD improve with scale, instruction tuning, and number of demonstrations. In contrast, BiasScore is substantially worse after instruction tuning and does not improve when scaling models up in most evaluated settings.

Figure 3: The effect of label bias mitigation methods on performance and bias for Llama-2 models; panels: (a) Performance (Macro-F1), (b) Label bias (RSD), (c) Label bias (BiasScore). CC improves neither performance nor bias; DC and LoRA fine-tuning improve both; our Leave-One-Out Calibration (LOOC) method leads to the best performance among the calibration methods, and the overall lowest bias for $k \in \{8, 16\}$.

Figure 4: Label bias metrics for Llama-2 models (7B/13B), when evaluated on all tasks in our evaluation suite (All) vs. a subset of tasks with semantically equivalent labels (Sem. Eq. Labels). LLMs exhibit label bias even on tasks with semantically equivalent labels, such as multi-choice question answering.

Figure 5: Label bias and performance metrics for Llama-2 (7B/13B) and Mistral (7B) models, aggregated by the level of imbalance in the demonstration set used for prompting the model, measured by the proportion of its most frequent label ($p_\uparrow$). For most tasks, label imbalance has only a minor impact on both bias and performance, unless the imbalance is extreme. Instruction-tuned models are less sensitive to imbalance.

Figure 6: Label bias metrics for Mistral 7B models when evaluated on all tasks in our evaluation suite (All) vs. a subset of tasks with semantically equivalent labels (Sem. Eq. Labels).

Figure 7: Label bias metrics for Falcon (7B/40B) models when evaluated on all tasks in our evaluation suite (All) vs. a subset of tasks with semantically equivalent labels (Sem. Eq. Labels).

Figure 8: Performance and label bias metrics for Mistral 7B pretrained and instruction-tuned models.

Table 1: Results of the Llama-2 7B base model when prompted with 5 different sets of demonstrations on our evaluation suite. We employ oracles to aggregate and compute cross-task results by choosing a specific set of demonstrations for each task. Label bias is highly sensitive to the choice of in-context examples.

Table 2: Categories of tasks included in our evaluation suite, based on SUPER-NATURALINSTRUCTIONS, along with the number of tasks per category and the total number of instances used for evaluating models.

Table 3: Distribution of the number of labels across tasks in our evaluation suite.

Table 4: The 20 most frequent labels in our evaluation suite and the number of tasks they appear in.

Table 5: Results of the Llama-2 7B chat model when prompted with 5 different sets of demonstrations on our evaluation suite. We employ oracles to aggregate and compute cross-task results when choosing a specific set of demonstrations for each task.

Table 8: Results of the Mistral 7B base model.

Table 9: Results of the Mistral 7B instruct model.
Early immunotherapy is highly effective in IgG1/IgG4 positive IgLON5 disease

In the pathogenesis of anti-IgLON5 disease, there is evidence for both neurodegenerative and autoimmune mechanisms [1-3]. While the strong association with anti-IgLON5 antibodies and distinct HLA haplotypes suggests an active role of inflammatory processes, the older age of manifestation, progressive neuronal tauopathy without signs of inflammation, and an association with the MAPT-H1 haplotype are more typical for neurodegenerative diseases [1]. Data on immunotherapy are contradictory and cannot solve this dilemma. Both stabilization and a lack of therapy effect with lethal outcome have been reported so far [3]. Improvement under immunotherapy is described in a few cases [3, 4]. Here, we present a patient with an acute to subacute bulbar manifestation of anti-IgLON5 disease, mimicking a myasthenic crisis, and a dramatic recovery under immunotherapy started 1 week after disease onset.

An 82-year-old lady had suffered for a few days from dysesthesia in her hands and feet, an unsteady gait, and augmented sweating. The patient suffered from marked fatigue and two to three sleep attacks per day that occurred in passive situations, lasted for 15 to 20 min, and did not interfere with her routine activities. Initial clinical examination revealed moderate weakness in cervical flexion and extension (MS 4/5), severe gait ataxia, pronounced dysphagia, dysarthria, hypophonia, and a minimal right-side ptosis, suspicious of myasthenia gravis. Repetitive nerve stimulation as well as brain and spinal MRI were unremarkable. On the second day of admission, she progressed to aphonia, severe dysphagia, and weakness of the neck muscles (MS 3/5). Fiber optic evaluation of swallowing (FEES) revealed a pronounced hypotonic deglutition with penetration and slight aspiration of viscous liquids (Fig. 1a). Under suspicion of a myasthenic crisis, intravenous immunoglobulins (IVIg, 2 g/kg BW) were initiated. During the next 3 days, her condition improved dramatically: she was able to walk without any help, and the neck weakness normalized. Furthermore, her voice became clear, and repeated FEES showed a striking improvement without further penetration or aspiration (Fig. 1b). A broad serological screening revealed positive anti-IgLON5 IgG (in both tissue-based and cell-based immunofluorescence; IgG 1:2560, IgG1 1:80, IgG2 1:40, IgG4 1:640, IgG3 negative), whereas antibody testing against acetylcholine receptor, muscle-specific kinase, and titin was negative. The patient refused to undergo a lumbar puncture. The patient carried the HLA haplotype DQB1*05:01, but not DRB1*10:01. Five weeks later, she was nearly symptom-free, and IVIg therapy (1 g/kg every 4 to 6 weeks) was continued. Today, 1 year after the commencement of the symptoms, the patient is asymptomatic and lives alone without a care service. Despite the dramatic clinical recovery of the patient, the serum titer of anti-IgLON5 antibodies had not declined at the last follow-up.

Here, we describe for the first time a patient with anti-IgLON5 disease and a complete recovery under early immunotherapy. In particular, two factors appear to be decisive for the therapy success in this case. Firstly, IVIg was started within 1 week after disease onset. A slowly progressive disease course and late diagnosis usually result in substantial therapy delay [2, 5].
In line with this, previously reported therapy effects were less obvious, and some authors doubted whether an immunosuppressive treatment would be effective in this disease [1, 2]. However, subacute disease progression has been reported in a few other cases, probably representing a favorable time point for immunotherapy initiation [5-7]. Interestingly, up to 20% of patients have a relatively rapid clinical presentation, in less than 4 months. Secondly, we identified both IgG4 and IgG1 anti-IgLON5 antibodies in our patient. Anti-IgLON5 IgG1 (but not IgG4) causes an irreversible internalization of surface IgLON5 in hippocampal neurons [8]. It can be speculated that this early immune-mediated effect on the IgLON5 clusters induces a further intracellular pathological cascade, making later immunotherapy less effective. If true, IgG subunit analysis could reflect ongoing inflammatory activity and probably even predict the immunotherapy response. In line with this, a case with exclusively IgLON5-IgG1, inflammatory changes in the brain biopsy and MRI, and a temporary response to immunotherapy has recently been reported [7]. In conclusion, early immunotherapy can be highly effective in anti-IgLON5 disease, confirming a key pathogenetic role of initial autoimmune mechanisms.
Despite clinical heterogeneity, a subacute onset of characteristic symptoms, including sleep attacks and bulbar signs, should increase clinical suspicion. Being safe and non-immunosuppressive, IVIg should be tried in suspicious cases without delay.

Author contributions: TG: designed and conceptualized the study; acquisition of data; analyzed the data; drafted the manuscript for intellectual content. VB: acquisition of data; revised the manuscript for intellectual content. CIB: acquisition of data; interpreted the data; revised the manuscript for intellectual content. RG: revised the manuscript for intellectual content. IA: designed and conceptualized the study; acquisition of data; analyzed the data; drafted the manuscript for intellectual content.

Funding: None.

Availability of data and material: The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Compliance with ethical standards

Conflicts of interest: Dr. Grüter received travel reimbursement from Sanofi Genzyme and Biogen Idec, none related to this manuscript. Dr. Bien reports no disclosures. Dr. Behrendt reports no disclosures. Prof. Gold serves on scientific advisory boards for Teva Pharmaceutical Industries Ltd., Biogen Idec, Bayer Schering Pharma, and Novartis; has received speaker honoraria from Biogen Idec, Teva Pharmaceutical Industries Ltd., Bayer Schering Pharma, and Novartis; serves as editor for Therapeutic Advances in Neurological Diseases and on the editorial boards of Experimental Neurology and the Journal of Neuroimmunology; and receives research support from Teva Pharmaceutical Industries Ltd., Biogen Idec, Bayer Schering Pharma, Genzyme, Merck Serono, and Novartis, none related to this manuscript. Dr. Ayzenberg received travel grants from Biogen Idec and the Guthy-Jackson Charitable Foundation, served on scientific advisory boards for Roche and Alexion, and received research support from Chugai Pharma and Diamed, none related to this manuscript.

Ethical standards: The authors received written consent from participants in this case report prior to their inclusion in the study and have them on file in case they are requested by the editor. The manuscript does not contain clinical or animal studies.

Consent to participate: The authors received written consent from participants in this case report prior to their inclusion in the study and have them on file in case they are requested by the editor.

Consent for publication: Dr. Ayzenberg has full access to all of the data, and the right to publish any and all data separate and apart from any sponsor. All authors have approved the final submitted version.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Fig. 1: Fiber optic evaluation of swallowing (FEES) before (a) and after (b) IVIg treatment. Before treatment, green viscous fluid penetrated the laryngeal inlet with slight contact with the vocal folds. After treatment, no penetration or aspiration was evident after swallowing green fluid.
Damage Localization and Severity Assessment of a Cable-Stayed Bridge Using a Message Passing Neural Network

Cable-stayed bridges are damaged by multiple factors such as natural disasters, weather, and vehicle load. In particular, if a stayed cable, an essential and weak component of the cable-stayed bridge, is damaged, it may adversely affect the adjacent cables and worsen the condition of the bridge structure. Therefore, we must accurately determine the condition of the cables with a technology-based evaluation strategy. In this paper, we propose a deep learning model that allows us to locate the damaged cable and estimate its cross-sectional area. To obtain the data required for deep learning training, we use the tension data of cables with reduced cross-sectional areas, simulated in the Practical Advanced Analysis Program (PAAP), a robust structural analysis program. We represent the sensor data of the damaged cable-stayed bridge as a graph composed of vertices and edges, using the tension and spatial information of the sensors. We apply the sensor geometry by mapping the tension data to the graph vertices and the connection relationships between sensors to the graph edges. We employ a Graph Neural Network (GNN) to use the graph representation of the sensor data directly. GNNs, which have been actively studied recently, can handle graph-structured data with state-of-the-art performance. We train the GNN framework, the Message Passing Neural Network (MPNN), to perform two tasks: identifying damaged cables and estimating the cable areas. We adopt a multi-task learning method for more efficient optimization. We show that the proposed technique achieves high performance on the cable-stayed bridge data generated from the PAAP.

1 Introduction

Cable-stayed bridges, one of the essential transportation infrastructures in modern society, are damaged and corroded by external environments such as natural disasters, climate, ambient vibrations, and vehicle loads. As damage accumulates, the condition of the structure deteriorates and the bridge loses its function. Damaged bridges can even collapse, causing severe problems such as human injury and economic loss. In particular, the stayed cable is a necessary but vulnerable primary component of cable-stayed bridges [1]. When a cable starts to be damaged, its stiffness and cross-sectional area decrease [2]. Since the cable has a small cross-sectional area, it may be lost due to low resistance against accidental lateral loads. The cable loss may cause overloading in the bridge and adversely affect adjacent cables [3]. Therefore, we must thoroughly inspect the cable conditions. However, we cannot directly determine the damaged cable and its cross-sectional area only from raw data collected by the sensors on the bridge, such as the cable tension. Furthermore, if the degree of damage is not significant, it may be challenging to determine visually whether damage has occurred, unlike crack detection. Manually checking all cables one by one is very inefficient and increases maintenance costs. Therefore, to ensure the safety and durability of the bridge, we need a technology-based evaluation that also considers spatial information. Studies using point clouds produced by laser scanning have been proposed to evaluate the structure state with respect to the entire bridge structure and the 3D spatial information between sensors [24-26]. We notice from previous studies that 3D spatial information of bridges can provide helpful information about structure states.
However, using point cloud data for structural health evaluation is possible only for visible damage such as cracks and spalling [27,28], and point cloud data are not directly related to the loss of stiffness or strength, since they do not adequately capture depth due to occlusion [29]. Since a GNN can learn from graph-structured data, it resolves the limitation of CNNs, which accept only grid-structured data. Thanks to the rapid development of GNNs, which are now capable of state-of-the-art graph prediction, the use of deep learning is increasing in various domains such as traffic forecasting [30,31], recommendation systems [32,33], molecular property prediction [34], and natural language processing [35,36]. Recently, a study using a GNN for cable-stayed bridge monitoring was proposed: Li et al. [37] explored the spatiotemporal correlation of the sensor network in a cable-stayed bridge using a graph convolutional network and a one-dimensional convolutional neural network. They showed that the proposed method effectively detects sensor faults and structural variation. We expect GNNs to be actively examined as an SHM technology in the future.

In this study, we use the Message Passing Neural Network (MPNN), a representative architecture designed to process graph data. Gilmer et al. [34] proposed the MPNN, a GNN framework that represents the message transfer between the vertices of a graph as a learnable function. The MPNN learns the representation of the graph while repeating a vertex update with messages received from neighboring vertices. We create a graph with the connection relationships between the cable-stayed bridge nodes and apply the node and element data as the vertex features and edge features of the graph, respectively. We train the MPNN to estimate damaged cables using the graphed sensor data. We also estimate the cross-sectional area of the damaged cable and identify the damaged cable location to reveal a detailed bridge condition. We adopt a multi-task learning method to ensure that our deep learning model predicts both tasks effectively. Multi-task learning benefits from learning related tasks together [38]. Since estimating the location and the cross-sectional area of damaged cables are not independent tasks, the deep learning model can be optimized efficiently while learning both tasks simultaneously.

2 Background

Structural health conditions of cable-stayed bridges are generally monitored based on cable tension changes related to cable area parameters. The tensile forces on cables inevitably change when one or more cables are damaged. A machine learning model is one of the damage detection techniques that identify damage location and degree. This section presents a fundamental understanding of the cable-stayed bridge model and our proposed approach for damage detection. A robust structural analysis program, the Practical Advanced Analysis Program (PAAP), is introduced, followed by our cable-stayed bridge model. We then introduce the deep learning theory needed to understand the Message Passing Neural Network (MPNN) adopted as the damage detection technique in this work.

2.1 Practical Advanced Analysis Program (PAAP)

The PAAP is an efficient program for capturing the geometric and material nonlinearities of space structures using both the stability function and the refined plastic hinge concept. The Generalized Displacement Control (GDC) technique is adopted for solving the nonlinear equilibrium equations with an incremental-iterative scheme.
This algorithm accurately traces the equilibrium path of the nonlinear problem with multiple limit points and snap-back points. The details of the GDC are presented in [17,39]. In many studies of cable-stayed bridges [21,40], cables have been modeled as truss elements, while pylons, girders, and cross-beams were modeled as plastic-hinge beam-column elements. The plastic-hinge beam-column elements utilize stability functions [41] to predict the second-order effects. The inelastic behavior of the elements is also captured with the refined plastic hinge model [42,43]. To correctly model the realistic behavior of cable structures, the catenary cable element is employed in the PAAP due to its precise numerical expressions [40]. The advantage of the PAAP is that the nonlinear structural responses are accurately obtained with only one or two elements per structural member, leading to low computational costs [17,21]. Thus, the PAAP is employed to analyze and determine the cable tensions in our cable-stayed bridge model.

2.2 Cable-Stayed Bridge Model

A cable-stayed bridge model of the semi-harp type is proposed, as shown in Figure 1. The bridge has a center span of 122 m and two side spans of 54.9 m. Two 30 m-high towers support two traffic lanes with an overall width of 7.5 m. Pylons, girders, and cross beams are made of steel with a specific weight of 76.82 kN/m³. The specific weight of the stayed cable is 60.5 kN/m³. In the PAAP, the girders, pylons, and cross beams are modeled as plastic-hinge beam-column elements. The stayed cables are modeled as catenary elements. For simplicity in determining the damage of the cable, only the dead load induced by the self-weight of the bridge is considered.

2.3 Multilayer Perceptron

The simplest neural network, the Multilayer Perceptron (MLP), has a structure that includes multiple hidden layers between the input layer and the output layer. In each fully connected hidden layer, the activation function is applied to the affine function of the hidden unit $h^{(i)}$ as follows:

$$h^{(i)} = \sigma\left(w^{(i)} h^{(i-1)} + b^{(i)}\right)$$

where $w^{(i)}$ and $b^{(i)}$ represent the $i$-th hidden layer weight and bias, respectively, and $\sigma$ is an activation function for nonlinear learning. There are various activation functions; the Rectified Linear Unit (ReLU), hyperbolic tangent (tanh), and sigmoid functions defined below are applied most frequently:

$$\text{ReLU}(x) = \max(0, x), \qquad \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}, \qquad \text{sigmoid}(x) = \frac{1}{1 + e^{-x}}$$

Dropout is applied to the hidden layers to prevent overfitting of the neural network. During the training process, dropout disconnects randomly selected hidden units with a certain probability, the dropout rate. The network then becomes more robust because its output does not depend only on specific units.
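A small NumPy sketch of one such hidden layer with inverted dropout, under our own naming; it mirrors the equations above rather than any specific implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def hidden_layer(h_prev, w, b, dropout_rate=0.2, training=True):
    """h^(i) = ReLU(w h_prev + b), with inverted dropout during training."""
    h = relu(w @ h_prev + b)
    if training and dropout_rate > 0.0:
        mask = rng.random(h.shape) >= dropout_rate   # drop units at random
        h = h * mask / (1.0 - dropout_rate)          # rescale to keep expectation
    return h
```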
2.4 Recurrent Neural Network

A Recurrent Neural Network (RNN) generates the output from the current input and a hidden state representing the past information of the sequence data. Typical RNN structures, Gated Recurrent Units (GRUs) [44] and Long Short-Term Memory (LSTM), support gating of the hidden state and control the information flow. Figure 2a shows how the hidden state is calculated in a GRU. The GRU computes the reset gate $r_t \in \mathbb{R}^k$, which controls the memory from the $t$-th data $m_t \in \mathbb{R}^d$ in the input sequence, where $d$ is the dimension of $m_t$ and $k$ is the dimension of the hidden state, and the update gate $z_t \in \mathbb{R}^k$, which controls the similarity between the new state and the old state. The GRU integrates the computed gates to determine the candidate hidden state $\tilde{h}_t \in \mathbb{R}^k$ and the hidden state $h_t \in \mathbb{R}^k$. The equations of the GRU are as follows:

$$r_t = \sigma\left(m_t W_{mr} + h_{t-1} W_{hr} + b_r\right)$$
$$z_t = \sigma\left(m_t W_{mz} + h_{t-1} W_{hz} + b_z\right)$$
$$\tilde{h}_t = \tanh\left(m_t W_{mh} + (r_t \odot h_{t-1}) W_{hh} + b_h\right)$$
$$h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t$$

where $\odot$ and $\sigma$ are the Hadamard product and sigmoid function, respectively. $W_{mr}, W_{mz}, W_{mh} \in \mathbb{R}^{d \times k}$ and $W_{hr}, W_{hz}, W_{hh} \in \mathbb{R}^{k \times k}$ are weight parameters, and $b_r, b_z, b_h \in \mathbb{R}^k$ are biases.

Figure 2b shows the computation process of the hidden state in the LSTM. The cell state $c_t \in \mathbb{R}^k$ and hidden state $h_t \in \mathbb{R}^k$ for input data $x_t \in \mathbb{R}^d$, with input gate $i_t$, forget gate $f_t$, and output gate $o_t \in \mathbb{R}^k$, are computed as follows:

$$i_t = \sigma\left(x_t W_{xi} + h_{t-1} W_{hi} + b_i\right)$$
$$f_t = \sigma\left(x_t W_{xf} + h_{t-1} W_{hf} + b_f\right)$$
$$o_t = \sigma\left(x_t W_{xo} + h_{t-1} W_{ho} + b_o\right)$$
$$\tilde{c}_t = \tanh\left(x_t W_{xc} + h_{t-1} W_{hc} + b_c\right)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$$
$$h_t = o_t \odot \tanh(c_t)$$

where $W_{xi}, W_{xf}, W_{xo}, W_{xc} \in \mathbb{R}^{d \times k}$ and $W_{hi}, W_{hf}, W_{ho}, W_{hc} \in \mathbb{R}^{k \times k}$ are weight parameters.

The set2set model [45] is permutation invariant for the input data thanks to an attention mechanism:

$$q_t = \text{LSTM}\left(q^*_{t-1}\right), \qquad e_{i,t} = f(m_i, q_t), \qquad a_{i,t} = \frac{\exp(e_{i,t})}{\sum_j \exp(e_{j,t})}, \qquad r_t = \sum_i a_{i,t} m_i, \qquad q^*_t = [q_t, r_t]$$

where $m_i$ is the memory vector, $q_t$ is the query vector, and $f$ is the dot product.

2.5 Message Passing Neural Network

We assess the damage of the bridge structure using a Graph Neural Network (GNN) to exploit the sensor network topology. The GNN is a powerful deep learning model that manipulates graph-structured data, and it has recently been adopted in various domains. A GNN updates the hidden state of a vertex with the neighbor information, captures the hidden patterns of the graph, and effectively analyzes and infers the graph. The MPNN [34] is a general framework of GNNs. It has been employed to evaluate chemical properties by representing 3D molecular geometry as a graph. A graph, $G$, consists of a vertex set, $V$, and an edge set, $E$. We denote the feature of vertex $v \in V$ as $x_v$ and the feature of edge $(u, v) \in E$ as $e_{uv}$. As shown in Figure 3, the MPNN processes the embedded vertices in a message-passing step and a readout step. In the message-passing step, each vertex receives the aggregated message $m_v^{t+1}$ from the adjacent vertices along the edges via the message function, $M_t$; the hidden state of each vertex is then updated from the received message and the previous state of the vertex with the update function, $U_t$. The message-passing step is repeated $T$ times so that messages are delivered to a wider range of the graph. In this study, the $t$-th message function $M_t$ and the update function $U_t$ are defined as follows:

$$M_t\left(h_v^t, h_u^t, e_{uv}\right) = \sigma\left(A(e_{uv})\, h_u^t\right) \qquad (19)$$
$$m_v^{t+1} = \sum_{u \in N(v)} M_t\left(h_v^t, h_u^t, e_{uv}\right) \qquad (20)$$
$$U_t\left(h_v^t, m_v^{t+1}\right) = \text{GRU}\left(h_v^t, m_v^{t+1}\right) \qquad (21)$$

where $\sigma$ is the ReLU activation function. $A(\cdot)$ is a two-layer neural network generating a matrix; it consists of a layer with $2k$ neurons and ReLU activation, and a layer with $k \times k$ neurons. The neighbors of a vertex, $N(v) = \{u \in V \mid (u, v) \in E\}$, are the adjacent vertices connected to $v$ through edges. $h_v^t$ and $h_u^t$ are the $t$-th hidden states of vertices $v$ and $u$, respectively. The initial hidden state, $h_v^0$, is the embedding of vertex $v$ obtained by substituting $x_v$ into a differentiable function. In Equation (21), we define the update function $U_t$ as the GRU described in Section 2.4. The GRU integrates the state of the vertex itself and the message received through $M_t$ from the adjacent vertices. Finally, the updated hidden state of vertex $v$ is defined as follows:

$$h_v^{t+1} = U_t\left(h_v^t, m_v^{t+1}\right) \qquad (22)$$

The readout step aggregates the last hidden states, $h^T$, after iterating the message passing $T$ times. The prediction, $\hat{y}$, for the target data is calculated with the readout function, $R$, as follows:

$$\hat{y} = R\left(\left\{h_v^T \mid v \in V\right\}\right) \qquad (23)$$

We define the readout function $R$ as the set2set model presented in Section 2.4. Since set2set is invariant to graph isomorphism, it effectively integrates the vertices of the graph and produces a graph-level embedding.
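To make the message-passing step concrete, here is a minimal PyTorch sketch of Equations (19)-(22); the tensor layout (an edge-index array plus per-edge features) is our assumption, not the paper's implementation:

```python
import torch
import torch.nn as nn

class MPNNLayer(nn.Module):
    """One message-passing step: M_t(h_u, e_uv) = ReLU(A(e_uv) h_u),
    summed over neighbors, followed by a GRU update of each vertex state."""
    def __init__(self, hidden_dim, edge_dim):
        super().__init__()
        # A(.): two-layer net mapping an edge feature to a k x k matrix
        self.edge_net = nn.Sequential(
            nn.Linear(edge_dim, 2 * hidden_dim), nn.ReLU(),
            nn.Linear(2 * hidden_dim, hidden_dim * hidden_dim),
        )
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)
        self.k = hidden_dim

    def forward(self, h, edge_index, edge_attr):
        # h: (n_vertices, k); edge_index: (2, n_edges) of (src, dst) pairs
        src, dst = edge_index
        A = self.edge_net(edge_attr).view(-1, self.k, self.k)   # (n_edges, k, k)
        msg = torch.relu(torch.bmm(A, h[src].unsqueeze(-1))).squeeze(-1)
        agg = torch.zeros_like(h).index_add_(0, dst, msg)       # sum over neighbors
        return self.gru(agg, h)                                 # U_t as a GRU
```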
3 Data Generating Procedure

In this section, we describe how the cable-stayed bridge data used for the MPNN training are generated. The cable damage model is presented based on the elemental area reduction parameter before the measured cables are specified. Then, structural analyses are performed to analyze the proposed model for reliable datasets, which are essential to construct the machine learning model later.

3.1 Cable Damage Model

During the service life of cable-stayed bridges, cables are the most critical load-bearing components [46,47]. Thus, potential damage to cables should be identified early to prevent terrible disasters [48,49]. In this study, the damage of cable-stayed bridges is assumed to be caused solely by cable damage. In the cable-stayed bridge model, there are a total of 40 cables corresponding to the 40 catenary elements, numbered as shown in Figure 1. The cable element is supposed to be perfectly flexible [40], with the self-weight distributed along its length. It has a uniform cross-sectional area of 3846.5 mm² in the intact state of the bridge. The cable damage is expressed through a scalar area reduction variable $\alpha$ with a value between 0 and 1 as follows:

$$\alpha = \frac{A_i - A_d}{A_i} \qquad (24)$$

where $A_i$ represents the cross-sectional area of the cable in the intact state and $A_d$ denotes the cross-sectional area of the cable in the damaged state. $\alpha$ is the elemental area reduction parameter to be identified. It is noted that $\alpha = 1$ indicates a destroyed cable, and $\alpha = 0$ indicates an intact cable.

3.2 Observed Cables

In most structural health monitoring systems of cable-stayed bridges, sensors are installed to collect data from specific cables for cost-effectiveness. The quantity of surveyed cables depends on the scale and complexity of the bridge and the monitoring objectives [47,50]. At surveyed locations, cable sensitivity and safety degrees are evaluated. The measured data are automatically observed and stored as essential sources for later usage during the monitoring time. In this study, 10 out of 40 cables are surveyed, including 5 cables on the front side and 5 cables on the back side. We examine five sensor layout cases, as shown in Figure 4. However, we do not include optimization of sensor placement (OSP) in the scope of this study. Since we do not apply OSP technology, the sensors are evenly arranged. We analyze multiple cases to avoid skewing the experimental results to specific cases. The tensile forces within these cables are determined by simulating the proposed model in the PAAP. Using the GDC method [20,51] to solve nonlinear problems, the PAAP divides the dead load into many incremental steps. The results obtained from the structural system at each incremental step, including internal forces, deformations, displacements, etc., are exported and stored in data files. However, only the cable tensions at the step corresponding to the bridge self-weight are considered as measured data.

3.3 Generating Data

Different cable-stayed bridge models are constructed and analyzed using the PAAP. The geometry configurations of the bridge girders, pylons, and cross beams are kept constant, while only the cable cross-sectional areas vary. The output is the tensile force on the observed cables determined in Section 3.2. The complete procedure for generating data is presented in Table 1. For single-cable damage, 4000 data samples are generated as the elemental area reduction parameter varies from 0 to 1 with a step of 0.01. To evaluate the cable system failure based on the simulation results, the prediction model solves an inverse problem: the tensile forces of the 10 observed cables are examined as the input, while the predefined elemental area reduction parameters are employed for the target data. These input and target data are utilized for the training and validation of the proposed damage detection model for the cable-stayed bridge. Upon completion, the model predicts the damaged cable and its cross-sectional area $A_d$ from the 10 inputs of cable tensile forces $S = [T_1, T_2, T_3, \ldots, T_{10}]$.

Table 1. Data generating procedure.
Step 1. Input the structural geometry and material configurations, and set the applied loads.
Step 2. Generate the $i$-th damage sample $C_i$ by reducing the cross-sectional area $A_j$ of the damaged cable $j$ in sample $i$, determined as shown in Equation (24).
Step 3. Calculate the tension of the 10 observed cables $S_i = [T_1, T_2, \ldots, T_k, \ldots, T_{10}]$ mentioned in Section 3.2 corresponding to the sample $C_i$ using the PAAP, where $T_k$ is the measured tension of cable $k$ in sample $i$.
Step 4. Save the input and output data to result files.
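The procedure in Table 1 can be summarized in the pseudocode below; `paap_observed_tensions` is a hypothetical placeholder for the PAAP analysis, which is an external program:

```python
import numpy as np

def paap_observed_tensions(areas):
    """Hypothetical stand-in for the PAAP structural analysis: given the 40
    cable cross-sectional areas, return the tensions of the 10 observed
    cables. Returns dummy values here; the real analysis is external."""
    return np.zeros(10)

A_INTACT = 3846.5                      # mm^2, intact cross-sectional area
N_CABLES, N_STEPS = 40, 100

samples = []
for j in range(N_CABLES):              # index of the single damaged cable
    for step in range(N_STEPS):
        alpha = step * 0.01            # elemental area reduction parameter
        areas = np.full(N_CABLES, A_INTACT)
        areas[j] = (1.0 - alpha) * A_INTACT   # Eq. (24): A_d = (1 - alpha) A_i
        tensions = paap_observed_tensions(areas)
        # input S_i, damage location j, normalized target area (1 - alpha)
        samples.append((tensions, j, 1.0 - alpha))
# 40 cables x 100 damage levels = 4000 samples
```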
Table 1. Data generating procedure.
Step 1. Input the structural geometry and material configurations, and set the applied loads.
Step 2. Construct the damaged-bridge sample C_i, where A_j is the cross-sectional area of the damaged cable j in sample i, determined as shown in Equation (24).
Step 3. Calculate the tensions of the 10 observed cables S_i = [T_1, T_2, ..., T_k, ..., T_10] mentioned in Section 3.2 corresponding to sample C_i using the PAAP, where T_k is the measured tension of the k-th cable in sample i.
Step 4. Save the input and output data to result files.

Proposed Method for Damage Assessment. In this section, we describe the MPNN for damage assessment of the cable-stayed bridge. We present the specific MPNN configuration and show how the proposed multi-task learning is applied to identify the location of the damaged cable in the data created in Section 3 and the cross-sectional area of the corresponding cable.

Configuration of the Proposed Network. We define the graph vertex feature x_v as the tensile force of each of the 10 cables. The edge feature e_uv is defined by the thresholded Gaussian kernel [52], as presented in the equation below, using the XYZ coordinates of the nodes on the girder connected to the 10 cables:

e_uv = exp(−dist(u, v)^2 / (2σ^2)) if exp(−dist(u, v)^2 / (2σ^2)) ≥ γ, and e_uv = 0 otherwise

where we set the threshold γ to 0.1, σ is the standard deviation of the distances, and dist(u, v) is the distance between the girder nodes connected to u and v. Since we define vertices and edges only by tension and distance, respectively, the dimensions of the vertex and edge features are both 1. We embed the vertex features x_v, representing the tensions, with a single fully connected hidden layer with the ReLU activation function. The embedded vertex state is updated with the message function M_t in Equation (19) and the update function U_t in Equation (21). The hyperparameters of the network that we tune in the message-passing step are the vertex embedding dimension, the number of iterations of the message-passing step, and the hidden state dimension. We also tune the number of LSTM layers of the set2set model used for the global pooling, i.e., the readout function R, and the number of processing steps, which is another hyperparameter of the set2set model. We add a fully connected hidden layer with the ReLU activation function and the same number of neurons as the vertex embedding. The predictions for the target data are generated in two output layers of 40 linear units each (one unit per cable). We describe the two outputs in the next section.
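The thresholded Gaussian kernel for the edge features could be computed as in the following NumPy sketch; the 2σ² normalization follows the common form of the kernel in [52], and the coordinate array is a placeholder for the girder-node XYZ positions.

```python
import numpy as np

def gaussian_kernel_edges(coords: np.ndarray, gamma: float = 0.1) -> np.ndarray:
    """coords: (10, 3) XYZ girder-node positions; returns (10, 10) weights."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sigma = d[np.triu_indices_from(d, k=1)].std()  # std of pairwise distances
    w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))     # Gaussian kernel
    w[w < gamma] = 0.0                             # threshold gamma = 0.1
    np.fill_diagonal(w, 0.0)                       # no self-loops
    return w
```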
Multi-Task Learning on MPNN. The target data used to determine the cable health of the cable-stayed bridge are the damaged cable location and the damage degree (i.e., the cross-sectional area A_d). Therefore, we adopt multi-task learning so that the MPNN learns the two tasks effectively. The advantage of multi-task learning is that related tasks predicted simultaneously can be learned more efficiently; thus, simultaneously learning to classify the damaged cable and to predict its cross-sectional area improves learning efficiency. As shown in Figure 3, the proposed MPNN has outputs for task 1 and task 2, which are the classification of the damaged cable and the prediction of the cross-sectional area of the damaged cable, respectively. The first task is classification, and the second task is a prediction on continuous data (i.e., regression). Therefore, we utilize the cross-entropy loss function for task 1 and the mean absolute error loss function for task 2, defined as follows:

L_task1 = − Σ_{i=1}^{40} D_i log(p_i)    (26)
L_task2 = |A_d − A · M|    (27)

where D_i represents the target for the i-th label (1 if the i-th cable is damaged and 0 otherwise) and p_i is the predicted probability for the i-th cable. A_d is the target cross-sectional area of the single damaged cable, in the range 0.0 to 0.99. A ∈ R^40 is the vector output by the network for the second task. We define the mask M ∈ R^40 as a vector in which the one element corresponding to the index of the damaged cable is 1 and all other elements are 0. In the training phase, the position of the 1 in the mask M is the true index of the damaged cable; L_task2 is thus the error between the cross-sectional area of the damaged cable, A_d, and the dot product of A and M, so the loss for task 2 is calculated only on the damaged cable. In the test step, the mask M is created from the classification output of the network, and A · M is then the estimated cross-sectional area of the cable that the network classified as damaged. We define the total loss L_total by combining L_task1 and L_task2 as follows:

L_total = L_task1 + L_task2 / ||M||_1    (28)

i.e., the sum of the task 1 (classification) loss and the task 2 (regression) loss scaled by the L1-norm of the mask M.
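A minimal PyTorch sketch of the masked multi-task loss in Equations (26)-(28) for the single-damage case; tensor names are illustrative, and the batch handling is an assumption rather than the authors' exact code.

```python
import torch
import torch.nn.functional as F

def total_loss(logits, area_out, label, area_target):
    # logits: (B, 40) class scores; area_out: (B, 40) per-cable area estimates;
    # label: (B,) damaged-cable index; area_target: (B,) true A_d
    l_cls = F.cross_entropy(logits, label)            # L_task1, Equation (26)
    M = F.one_hot(label, num_classes=40).float()      # one-hot mask, ||M||_1 = 1
    a_hat = (area_out * M).sum(dim=1)                 # dot product A . M
    l_reg = (a_hat - area_target).abs().mean()        # L_task2, Equation (27)
    return l_cls + l_reg / M.sum(dim=1).mean()        # Equation (28); here /1
```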
Performance Evaluation. To evaluate the proposed model introduced in Section 4, we generate data for a cable-stayed bridge with damaged cables through the PAAP described in Section 3. We preprocess the generated data for model training and optimize the MPNN model. Then we train the proposed MPNN and validate the prediction outcomes. We also train an MLP and compare it with the MPNN results; the input to the MLP is only the ten cable tension values. The MLP has four hidden layers with ReLU activation, and a dropout layer is added to each hidden layer. Similar to the MPNN, the MLP output layer generates 80 predictions for the two tasks. Additionally, we compare the results with those of the machine learning technique XGBoost, and we compare multi-task learning with networks performing only one task. The number of network outputs for the damaged cable classification (task 1) is 40, with the cross-entropy loss function shown in Equation (26); the number of network outputs for the area estimation of the damaged cable (task 2) is 1, with the mean absolute error loss function presented in Equation (27).

Data Preprocessing and Optimization. As mentioned in Section 3, we generate data for 4000 cases. The input data are the cable-stayed bridge data represented as a graph, as described in Section 4, and the target data are the index of the damaged cable, labeled between 1 and 40, and its cross-sectional area. The cross-sectional area of the damaged cable is (1 − α), which is between 0.0 (broken state) and 0.99; α is the elemental area reduction parameter defined in Section 3. We scale the vertex feature values, the tensile forces, between 0 and 1 by min-max scaling:

T̃_k = (T_k − T_min) / (T_max − T_min)

where T_min and T_max are the minimum and maximum observed tensions. We divide the data in a 6:1:3 ratio into a training set of 2400 samples, a validation set of 400 samples, and a test set of 1200 samples. Table 2 presents the ranges of hyperparameters and the selected optimal values for each model. We select the best hyperparameters on the validation set using Tree-structured Parzen Estimators (TPE) [53] with 20 trials, and terminate trials with poor performance using the Asynchronous Successive Halving Algorithm (ASHA) [54]. The hyperparameters of the MPNN are specified in Section 4.1; the hyperparameters of the MLP are the number of hidden neurons in each layer and the dropout rate. We optimize the hyperparameters that determine the network structure, as well as the batch size and the learning rate, individually for each of the 5 cases and models. We utilize the ADAM optimizer [55] and train the MPNN model to minimize the loss function defined in Equation (28). We set the number of epochs to 1000 and decay the learning rate chosen by the hyperparameter optimization by a factor of 0.995 per epoch. We use PyTorch and the Deep Graph Library (DGL) on a single NVIDIA GeForce RTX 2080 Ti GPU for network implementation and optimization. We train the MLP model with the same settings as the MPNN model.

Results. In this section, we report the results of the deep learning networks on the test set. We examine the accuracy to evaluate the damaged cable classification performance, and we employ the mean absolute error (MAE), the root mean squared error (RMSE) and the correlation coefficient between target and output data as measures to compare the cross-sectional area predictions. MAE, RMSE and the correlation coefficient are defined as follows:

MAE = (1/n) Σ_i |y_i − ŷ_i|
RMSE = sqrt((1/n) Σ_i (y_i − ŷ_i)^2)
r = Σ_i (y_i − ȳ)(ŷ_i − ŷ̄) / sqrt(Σ_i (y_i − ȳ)^2 · Σ_i (ŷ_i − ŷ̄)^2)

where n is the number of samples, and y_i, ŷ_i, ȳ and ŷ̄ are the target, the output, the average of the targets, and the average of the outputs, respectively. The lower the MAE and RMSE and the higher the correlation coefficient, the better the performance. Table 3 summarizes the results of MPNN, MLP and XGBoost. When comparing MPNN with MLP and XGBoost, its classification accuracy and correlation are always higher, and its error for cross-sectional area estimation is lower. The classification accuracy of MLP drops to as low as 93.58% depending on the sensor layout, whereas MPNN is more stable, with an accuracy of over 98.33% in all 5 cases. For the cross-sectional area prediction as well, MPNN is better and more stable than MLP and XGBoost. Meanwhile, the multi-task learning performance is similar to that of single-task learning in which each task is trained individually. However, with multi-task learning we need to train the network only once, whereas single-task training multiplies the time cost by the number of tasks. Multi-task learning is therefore efficient, as it learns multiple tasks simultaneously while achieving performance similar to learning each task separately. Figure 5 shows scatter plots of the relationship between the predicted and actual values for the cross-sectional area estimation of the damaged cables. In Figure 5a,b, which show the results of MLP, many points deviate considerably from the straight line, especially in cases 2, 4 and 5, confirming that the errors in the prediction of the cross-sectional area are considerable. In the scatter plots of MPNN shown in Figure 5c,d, however, the data points lie closer to the straight line than for MLP in all cases. For the classification analysis, in the multi-task learning results of Figure 5a,c, the red points, which are misclassified data, are mainly concentrated where the cross-sectional area is close to 1: it appears that the smaller the damage, the more likely the damaged cable is to be misclassified. Figure 6 shows the histogram of correctly and incorrectly classified data over varying cross-sectional areas. In all four network results, the correctly classified data (blue) are, in general, evenly distributed, while the misclassified data (red) are skewed toward cross-sectional areas close to 1. For more accurate verification, we divide the cross-sectional area range into bins of width 0.1 and calculate the classification accuracy of the data included in each range; Table 4 presents the accuracies according to the cross-sectional areas.
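A short NumPy sketch of this per-range accuracy computation (bin width 0.1, as above); function and variable names are illustrative.

```python
import numpy as np

def accuracy_by_area_bin(area, y_true, y_pred, width=0.1):
    area, y_true, y_pred = map(np.asarray, (area, y_true, y_pred))
    for lo in np.arange(0.0, 1.0, width):      # bins [0.0,0.1), ..., [0.9,1.0)
        in_bin = (area >= lo) & (area < lo + width)
        if in_bin.any():
            acc = (y_true[in_bin] == y_pred[in_bin]).mean()
            print(f"area [{lo:.1f}, {lo + width:.1f}): accuracy {acc:.2%}")
```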
When the cross-sectional area is less than 0.9, the accuracy of MLP and XGBoost is between 81% and 100%; when the cross-sectional area is 0.9 or more, the classification performance of MLP drops to as low as 50%, with a best accuracy of 79.59% in case 5. For MPNN, in contrast, the accuracy is over 99.2% when the cross-sectional area is less than 0.9, and in both multi-task learning and single-task learning the accuracy in all cases is almost 100%. When the cross-sectional area is 0.9 or more, the accuracy of MPNN is at least 73.47% and at most 91.84%. Thus, when the loss of cable cross-sectional area is small, the accuracy of MPNN decreases slightly, but MPNN still classifies damaged cables considerably more reliably than MLP and XGBoost. Moreover, across the cross-sectional area ranges, neither the multi-task learning method nor the single-task learning method consistently outperforms the other in all cases. Figure 7 shows the confusion matrix of MPNN combining all 5 cases. Since there are only a few misclassified samples, we highlight them with orange shading. We observe that the misclassified cable tends to be located close to the actually damaged cable; for example, when the actual labels are 4, 7, 13, and 14, the predicted labels are 3, 6, 14, 16, and 15, respectively, and these cables are located next to each other. Figure 8 shows a histogram of the sensor distances between the actual damaged cable and the cable incorrectly classified by the network for all 5 cases, to illustrate the spatial relationship between the actual and predicted labels in more detail. Of the 75 incorrectly classified samples, in 17 the distance between the actual damaged cable and the predicted damaged cable is only 12,200, which is the distance between adjacent cables. Therefore, if the proposed method is applied to an actual bridge, we urge that the cables on both sides of the cable classified as damaged also be checked to avoid more significant damage.
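The adjacency trend in Figure 8 could be checked with a sketch like the following; treating consecutive cable indices as adjacent is a simplifying assumption, since the exact neighbor relation depends on the numbering in Figure 1.

```python
import numpy as np

def adjacent_misclassifications(y_true, y_pred):
    """Return (#misclassifications onto an adjacent cable, #misclassifications)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    wrong = y_true != y_pred
    adjacent = np.abs(y_true - y_pred) == 1   # consecutive indices as neighbors
    return int((wrong & adjacent).sum()), int(wrong.sum())
```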
The tensions used as input data are measured on only ten cables, whereas the proposed technique assesses damage to all 40 cables. We therefore compare the predictions for the ten cables with tension data against those for the 30 cables without tension data. Table 5 shows the results for damaged cables with sensors and damaged cables without sensors in all cases. The performance advantage of MPNN over MLP and XGBoost is most pronounced when a cable without an attached sensor is damaged. When the cable with a sensor is damaged, the classification accuracy is higher than when a cable without a sensor is damaged. The regression performance shows a similar pattern except in cases 1, 2 and 3, where, on the contrary, the results of MLP and MPNN are better when a cable without a sensor is damaged. This appears to be related to the sensor positions: as seen in Figure 4, unlike cases 4 and 5, the spacing between sensors in cases 1, 2 and 3 is always less than five cables. Because the sensors in cases 1, 2 and 3 are distributed evenly, no performance degradation appears, especially in the regression task, even when a cable without a sensor is damaged.

Table 6 shows more detailed results for case 3: the classification metrics (accuracy, precision, recall and F1 score) and the regression metrics (MAE, RMSE and correlation coefficient) of the cross-sectional area prediction for each cable when that cable is damaged.

Table 6. Results of MPNN for each cable in case 3 (MTL-Classification and MTL-Regression columns). The rows shaded in green indicate the ten cables with tension data; for each measure, values in the lower 5% of performance appear in red and values in the upper 5% of performance appear in blue.

The classification measures are defined as

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 · Precision · Recall / (Precision + Recall)

where TN is true negative, TP true positive, FN false negative, and FP false positive (see the sketch at the end of this section). For the rows shaded in green, i.e., the ten cables with tension data, all four classification measures, including accuracy, precision, recall and F1 score, are 1.00. We observe that cable 25 and cable 33 have the lowest precisions in multi-task learning, which can be interpreted as a relatively high probability of misclassification among the samples for which MPNN predicts that these cables are damaged. Cable 14 has the lowest recall in both multi-task learning and single-task learning; we can therefore interpret that when cable 14 is damaged, MPNN is relatively likely to predict that another cable is damaged. The accuracy and F1 score of cable 14 and cable 15 are also the lowest in multi-task learning, and cable 14 shows lower classification metric values in both multi-task and single-task learning. The cables mentioned so far are all sensorless cables in case 3. Unlike the classification, the cross-sectional area prediction seems to be mostly unrelated to the availability of tension data; for example, the regression performance is excellent even for damaged cables 17 and 9, which have no tension data. Estimating the cross-sectional area of a single damaged cable is therefore less sensitive to the sensor positions than the classification.

Discussion. We have shown that MPNN can successfully assess cable damage and outperforms MLP. When the remaining cable cross-sectional area is less than 0.9, MPNN always classifies the damaged cable more accurately than MLP; however, when the damage is negligible, i.e., the remaining cross-sectional area is 0.9 or more, the classification accuracy deteriorates slightly. Once the deep learning network is improved to work more accurately on bridge structure data with minor damage, we expect the overall accuracy to approach 100%. Cables misclassified by MPNN are often located right next to the actually damaged cables, and we can utilize these misclassification trends to update the algorithm and training process. However, since MPNN has reached 98% or even higher accuracy, achieving sufficiently satisfactory results, we believe that MPNN has potential as an SHM technology. Moreover, the multi-task learning performance is similar to that of multiple single-task networks: multi-task learning trains a single network that evaluates the bridge state while achieving performance similar to two separately trained task-specific networks, so the bridge condition can be evaluated in several ways using only one network. In the future, it is worth adding further tasks beyond predicting only the cross-sectional area of the cable.
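As referenced above, the per-cable classification measures of Table 6 can be computed one-vs-rest from the predicted and actual damaged-cable labels; this NumPy sketch uses illustrative names.

```python
import numpy as np

def per_cable_scores(y_true, y_pred, cable: int):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == cable) & (y_true == cable))
    fp = np.sum((y_pred == cable) & (y_true != cable))
    fn = np.sum((y_pred != cable) & (y_true == cable))
    tn = np.sum((y_pred != cable) & (y_true != cable))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```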
Contribution. We confirmed that MPNN has a higher overall performance than MLP and XGBoost. In particular, the performance of MPNN differs most markedly from that of MLP and XGBoost when a cable without a sensor is damaged: since MPNN can process the spatial information between sensors, damage to cables without sensors can be estimated more successfully. MPNN has the advantage of transmitting information along the connectivity relationships between sensors through its message-passing structure, and, by adding a readout function, it produces a graph-level output from the node information, making it possible to predict the state of all 40 cables effectively. In this paper, we captured the spatial correlation by considering the sensor geometry with MPNN. Moreover, we showed that the two tasks (classifying damaged cables and estimating the cross-sectional area) can be trained efficiently using multi-task learning, and we proposed a loss function using a mask so that the damaged cable can be estimated more successfully.

Limitation. We represented the sensor geometry as a graph in this study but did not consider mechanical properties such as the material type or Young's modulus of the structure, since it is difficult to define the relationship between two sensors when several types of materials lie between them. Moreover, it may be challenging to learn the behavior of the entire structure with the proposed method, because the entire topology of the bridge cannot be deduced from only a few sensors' data; this may necessitate installing a sufficiently large number of sensors, which cannot always be afforded due to cost constraints. Therefore, to understand the condition of the entire bridge with only a small number of sensors, we fundamentally need to examine the influence of the sensor locations and apply it to enhance the model.

Extension To Multiple Damaged Cables. In this study, we assess the GNN-based SHM technology assuming that only one cable is damaged. To generate more realistic data resembling a real-world bridge, we need to simulate several damaged cables. The proposed technique can be applied by transforming the single-label classification into a multi-label classification problem, even when the number of damaged cables is unknown. A straightforward approach is to replace the cross-entropy loss function with a loss function used for multi-label classification, such as binary cross-entropy; however, the threshold for deciding how many cables to classify as damaged must then be set appropriately, which is an essential component. If the target data, which indicate the cross-sectional areas of the actually damaged cables, are represented as a vector of dimension w, where w is the number of damaged cables, then we can use a mask as in Equation (27) and compute the loss for the regression task. If we multiply the 40 outputs of the network for task 2 by a mask that is a 40 × w matrix, in which each column is a one-hot encoding vector representing the location of one damaged cable, we obtain the cross-sectional areas of the damaged cables. As before, the mask is created from the actual labels in the training step and from the predicted labels in the test step. The total loss is obtained by combining the loss for the classification task with the loss for the regression task scaled by the L1-norm of the mask: we do not want the regression task to outweigh the classification task as the number of damaged cables increases, and scaling the regression loss with the mask prevents this.
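A hypothetical sketch of this multi-damage extension of the regression loss, with M as a 40 × w matrix of one-hot columns as described above; names and shapes are assumptions for illustration.

```python
import torch

def multi_damage_regression_loss(area_out, damaged_idx, area_targets):
    # area_out: (40,) network outputs; damaged_idx: (w,) indices of damaged
    # cables; area_targets: (w,) their true cross-sectional areas
    w = damaged_idx.numel()
    M = torch.zeros(40, w)
    M[damaged_idx, torch.arange(w)] = 1.0     # one-hot column per damaged cable
    a_hat = M.t() @ area_out                  # (w,) selected area estimates
    l_reg = (a_hat - area_targets).abs().sum()
    return l_reg / M.abs().sum()              # scale by ||M||_1 = w
```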
Therefore, even with multiple damaged cables, we can still apply the proposed method as a multi-task learning approach. As discussed above, we will review the conditions for making the cable-stayed bridge model and the real-world bridge similar and improve our technique so that it applies to real-world bridges. Conclusions. In this paper, we defined the sensor data as a graph composed of vertex and edge features and proposed a damage assessment method for a cable-stayed bridge that applies this graph representation in an MPNN. We used the tension data of only 10 cables to increase the practicality of the experiment. Although it is challenging to assess the condition of all cables with such a limited number of sensors, the MPNN successfully estimated the damage of the cable-stayed bridge. We adopted multi-task learning to enable the MPNN to learn two tasks efficiently: locating the damaged cable and predicting its cross-sectional area. The performance of the MPNN is better than that of the MLP trained for comparison: the MPNN classified damaged cables more reliably than the MLP, not only when a cable is completely broken and has zero area, but also when the damage is relatively small. We therefore expect that MPNN can detect damage at an early stage for structural maintenance. Furthermore, MPNN can be applied to actual bridge data when material information about the structural components is available; for example, we can train the MPNN with cable-stayed bridge data simulated in the PAAP under the same conditions as a real bridge and then use the pre-trained MPNN directly for prediction on real bridge data. Additionally, although we simulated only one damaged cable in this study, we will generate data with multiple damaged cables to train the network for more general real-world bridge cases, and we have introduced an approach for damage localization and severity assessment with the proposed method when several cables are damaged as a future study. Our model can likely be extended with additional vertex features such as node displacements and XYZ coordinates. Moreover, the study can be expanded further by training MPNNs to predict structural damage beyond cable conditions, such as decreased stiffness.
2021-05-06T06:16:10.225Z
2021-04-30T00:00:00.000
{ "year": 2021, "sha1": "b0bb6e3d09d1449475aea46bd79c27619b8c4433", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/21/9/3118/pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "d38a29aacf8e38f37c834de905be4f8dd414b496", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
17093376
pes2o/s2orc
v3-fos-license
Impaired Fasting Glucose and Diabetes as Predictors for Radial Artery Calcification in End Stage Renal Disease Patients Objective. The objective of the study was to assess the relationship between selected clinical and biochemical parameters of end stage renal disease (ESRD) patients and arterial calcification. Materials and Methods. The study comprised 59 stage 5 chronic kidney disease patients (36 hemodialyzed and 23 predialysis). The examined parameters included common carotid artery intima-media thickness (CCA-IMT), BMI, incidence of diabetes and impaired fasting glucose (IFG), dyslipidemia, hypertension, and 3-year mortality. Plasma levels of asymmetric dimethylarginine (ADMA), osteopontin (OPN), osteoprotegerin (OPG), and osteocalcin (OC) were also measured. Fragments of radial artery obtained during creation of hemodialysis access were stained for calcifications using the von Kossa method and alizarin red. Results. Calcification of the radial artery was significantly associated with higher prevalence of IFG and diabetes (P = 0.0004) and older age (P = 0.003), as well as higher OPG (P = 0.014) and ADMA concentrations (P = 0.022). Fasting glucose >5.6 mmol/l (IFG and diabetes) significantly predicted vascular calcification in multiple logistic regression. The calcification was also associated with higher CCA-IMT (P = 0.006) and mortality (P = 0.004; OR for death 5.39 [1.20–24.1] after adjustment for dialysis status and age). Conclusion. The combination of renal insufficiency and hyperglycemic conditions exerts a synergistic effect on vascular calcification and increases the risk of death. Introduction Vascular calcification is an active process similar to the mineralization that occurs in bone [1,2]. Vascular smooth muscle cells undergo phenotypic differentiation into osteoblast-like or chondroblast-like cells and synthesize calcification-regulating proteins and matrix components typically found in bone and in cartilage [3,4]. Calcification of the arterial media, observed even in small vessels (Mönckeberg's calcification), is common in uremic patients and seems to be less associated with inflammation than the intimal mineralization typical for atherosclerosis [5,6]. Vascular mineralization advances with age and is intensified in diabetes, dyslipidemia, chronic kidney disease, and hypertension. In newly treated hemodialysis and peritoneal dialysis patients, diabetes, dialysis duration, and the previous presence of aortic arch calcification (AAC) accelerate further progression of AAC (an important risk factor for cardiovascular complications) [7]. Hyperinsulinemia and insulin resistance (clinical signs of type 2 diabetes) are positively correlated with arterial calcification [8,9]. Arterial medial calcifications also often occur in diabetic individuals as a component of diabetic macroangiopathy. In an animal model, insulin resistance induced in rats by fructose feeding resulted in increased aortic calcium deposition, an elevated calcium-phosphate index, and local hyperplastic changes in the aortic media [10]. Blood vessels obtained from end stage renal disease (ESRD) patients have often been studied by various histological techniques to assess vascular calcification. They were collected during renal transplantation (epigastric or iliac arteries) or by peripheral arterial biopsy (radial artery) [3,11,12]. In the present study we used small samples (otherwise routinely discarded) of radial arterial walls obtained during creation of arteriovenous fistula for hemodialysis access.
The study was aimed at investigating the relationship between selected clinical and biochemical parameters, with special emphasis on diabetes markers, and the level of histologically assessed radial artery calcification in end stage renal disease patients. Patients. The study population consisted of 59 patients (38 males, 21 females; mean age at the beginning of the study 61 ± 16 yrs), including 36 on maintenance hemodialysis (HD) and 23 predialysis (stage 5 of CKD). The study was approved by the Bioethics Committee of the Jagiellonian University and all patients signed an informed consent for their participation. The data on mortality were collected over a period of three years. During this period, all the patients were treated at the Department of Nephrology, Jagiellonian University Hospital. The mortality data, including the causes of death, were based on the patients' records. Routine biochemical tests were carried out using automatic biochemical analyzers: Hitachi 917 (Hitachi, Japan) and Modular P (Roche Diagnostics, Mannheim, Germany). Concentrations of hsCRP were measured using an immunonephelometric method on a Nephelometer BN II (Siemens Healthcare Diagnostics, Germany). Hematological parameters were measured using a Sysmex XE 2100 Hematological Analyzer (Sysmex Corp., Japan). The mean arterial pressure (MAP) was calculated according to the formula: MAP = DBP + 1/3(SBP-DBP), where SBP is systolic blood pressure and DBP is diastolic blood pressure. The intima-media thickness of the common carotid artery trunk (CCA-IMT) was assessed by ultrasonography (B-mode presentation, Acuson 128 XP/10 apparatus equipped with a linear head at 5/7 MHz). The measurements were performed bilaterally at 0.5 cm and 2 cm below the division of the common carotid artery during the diastolic phase of the heart cycle. The results were expressed as the arithmetic means of the values obtained for the left and right arteries. Histology. Fragments of radial artery, approx. 5 × 2 mm in size, were collected during the first creation of arteriovenous fistula for hemodialysis access. The samples were immediately immersed in 10% phosphate-buffered formalin and fixed overnight and then rinsed in PBS and soaked in 30% sucrose. The material was snap-frozen and tissue blocks were positioned in a cryostat for cutting sections in a plane encompassing the entire thickness of the vascular wall. Serial 10 μm-thick cryosections were cut and thaw-mounted on poly-L-lysine coated slides. Sections were stained routinely with Mayer's haematoxylin and eosin (HE), with the von Kossa method and with alizarin red. The stained sections were examined using an Olympus BX-50 microscope (Olympus, Tokyo, Japan) in brightfield mode and images were registered using an Olympus DP-71 digital CCD camera controlled by Olympus AnalySIS FIVE software. The advancement of vascular calcification was semiquantitatively evaluated in von Kossa- and alizarin red-stained sections by two independent observers. The degree of mineralization was classified according to the following scale: 0: no mineral content, 1: a few small dispersed concretions, 2: numerous small dispersed concretions, 3: larger granular concretions, and 4: large areas occupied by fused mineral deposits. The reproducibility of the morphological analysis was confirmed by the Bland-Altman method and by calculating the intraclass correlation coefficient (ICC), which was 0.88.
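As a quick worked example of the MAP formula given in the Methods above (the blood pressure readings are illustrative, not patient data), a measurement of SBP = 120 mmHg and DBP = 80 mmHg gives:

```latex
\mathrm{MAP} = \mathrm{DBP} + \tfrac{1}{3}(\mathrm{SBP} - \mathrm{DBP})
             = 80 + \tfrac{1}{3}(120 - 80) \approx 93.3\ \text{mmHg}
```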
Statistical Methods. The number of patients (percentage of the group) was reported for categorical variables and the mean ± standard deviation or median (lower-upper quartile) for continuous variables, depending on distribution. The Shapiro-Wilk test was used to assess normality. Contingency tables were analyzed with the Pearson chi-squared test. Student's t-test or the Mann-Whitney test was used for simple comparisons between the groups. Multiple logistic regression models were constructed using the variables that differed significantly between the groups in simple comparisons and/or predefined sets of confounders, as pointed out in the results. Odds ratios (OR) for a 1-unit increase with 95% confidence intervals (95% CI) were reported, unless otherwise stated. All tests were two-tailed and the results were considered statistically significant at P ≤ 0.05. Statistica 10 software (StatSoft, Tulsa, OK, USA) was used for computations. Histology. Routine histology (HE) showed the general morphology of the radial artery, with intimal thickening in the vast majority of the examined specimens (Figure 1(a)). Basophilic deposits were visible in the arterial wall in cases of very intense calcification (Figure 1(b)). Comparison of the mineralization degree showed that the von Kossa method was less sensitive; thus we employed alizarin red staining for further analysis and for correlation of the vascular calcification level with clinical and biochemical data. Morphologically, mineral deposits were found in all layers of the arterial wall but they were most frequently localized in the media (Figures 1(b)-1(d)). In less advanced lesions, deposits were preferentially located close to the outer and inner elastic laminae (Figure 1(e)). Some deposits were fine and dispersed (Figure 1(f)), while others occupied larger areas and, in the most advanced cases, even the entire thickness of the media (Figures 1(b)-1(d)). Only very scanty mineral deposits were occasionally seen in the vascular intima. Among 59 radial artery samples examined histologically, 34 showed positive alizarin red staining indicative of the calcification process. The proportion of samples with arterial wall mineralization (Table 1) as well as its advancement (Figure 2(a)) did not differ between HD and predialysis patients (P = 0.6). In further analysis, HD patients were studied together with predialysis patients; nevertheless, all multiple regression models were adjusted for HD status. Table 1 summarizes differences in clinical and biochemical parameters between the groups with and without calcifications as assessed by alizarin red staining of the radial artery. Vascular calcification was associated with higher age of the patients, higher glucose, and diabetes. MAP was slightly lower in patients with calcifications, and the levels of ADMA and OPG were higher in this group. Clinical criteria of the metabolic syndrome [14,15] were compared between patients with and without vascular calcifications in the radial artery (Table 2). Among patients with vascular calcifications, the number of individuals with a fasting glucose level above 5.6 mmol/L, that is, patients with IFG (prediabetes) [16] and diabetes, was significantly higher (P = 0.0004). Moreover, vascular calcifications were more severe in the group of patients with IFG and diabetes (Figure 2(b)). Other criteria of the metabolic syndrome did not differ between the groups with or without calcifications. Biochemical and Clinical Data. The association of radial artery calcifications with IFG and diabetes was further confirmed by multiple logistic regression (Table 3).
Three models were constructed, containing age and fasting glucose > 5.6 mmol/L as independent variables: the first one was adjusted for gender, HD status, and Ca × Pi product, and the second was additionally adjusted for dyslipidemia, hypertension, high BMI, and hsCRP. The third model, adjusted as the first one, included other variables that were significantly associated with arterial calcification in simple comparisons, that is, MAP, ADMA, and OPG. Fasting glucose > 5.6 mmol/L was the only variable independently associated with the vascular calcifications in all three models. The results were similar when diabetes was substituted for increased fasting glucose level in the models: OR = 17. Sixteen patients with calcifications in the radial artery (47%) died during 3 years of the follow-up, while in the group without calcifications the mortality was lower: 3 deaths (12%). All except 3 deaths occurred due to cardiovascular causes (Table 1). Vascular calcification was significantly associated with patients' mortality in simple analysis (P = 0.004) and after adjustment for HD status and age (OR for death 5.39; 95% CI 1.20-24.1; P = 0.024). Discussion This study presents a comprehensive comparison of biochemical and clinical data with the calcification status assessed histologically in the peripheral arteries of ESRD patients. Biopsies of radial artery collected during the creation of vascular access for hemodialysis were used previously by other authors to study calcification. However, in that study, the surgical anastomosis was performed in an end-to-end fashion; thus a sample encompassing the entire circumference of the artery could be excised and used for further analysis, allowing for a more reliable assessment of the calcification extent in the arterial wall [12]. In the study mentioned above, the authors found mineral deposits in 37% of the examined radial arteries. After adopting stringent morphological criteria to include even the finest calcifications and using the more sensitive alizarin red staining for calcium detection, we found arterial calcification in 57% of cases. Calcification can develop in two distinct layers of the artery: intima and media [17]. Intimal calcification occurs in advanced atherosclerotic lesions and is associated with lipid accumulation and infiltration of inflammatory cells, such as macrophages and T cells. Medial arterial calcification (MAC) displays features very similar to those of physiological calcification in bone [18]. MAC develops independently of atherosclerosis and is a commonly observed pathology in diabetes, ESRD, and ageing [17]. The present study revealed a significant association of arterial medial calcifications with impaired fasting glucose (IFG, prediabetes) and diabetes but not with other criteria of the metabolic syndrome, including overweight. Our results are in accordance with the study of Lim et al. [19] demonstrating the relationship between anthropometric parameters, metabolic profiles, and coronary artery calcium scoring (CACS). Subjects with IFG or diabetes had higher CACS and more advanced coronary stenosis than normal subjects. Moreover, several studies confirmed that fasting plasma glucose is a better independent determinant of the progression of coronary artery calcification than the other metabolic syndrome risk factors [20][21][22]. The above-mentioned papers presented a relationship between hyperglycemia and vascular calcification based on noninvasive imaging of blood vessels.
In our study, this relationship was analyzed for the first time using histologically examined samples of peripheral arteries. Hyperglycemia is an established risk factor for cardiovascular disease. Our study showed that fasting hyperglycemia, mostly associated with type 2 diabetes, was the only significant predictor of vascular calcifications in ESRD patients. Consistently, type 2 diabetes was associated with more severe calcification. Recent evidence suggests that medial calcification in diabetes is an active, cell-mediated process, similar to that observed in patients with end stage renal disease [23,24]. Vascular calcifications and atherosclerosis are frequent in patients with ESRD and they are associated with increased cardiovascular morbidity [25]. Coronary artery calcification (CAC) was found in 70.2% of dialysis patients and was significantly associated with CCA-IMT and the thickness of atherosclerotic plaques [26]. These results indicate that both medial calcification and atherosclerotic lesions frequently coexist in patients with ESRD and that CCA-IMT, increased in the patients with calcifications examined in this study, may serve as a surrogate marker of vascular calcification. The mechanisms responsible for vascular calcification include inflammation and oxidative stress, as well as bone and mineral metabolism disturbances. In our study, higher ADMA and OPG levels were associated with vascular calcification. High ADMA levels are associated with endothelial dysfunction and cardiovascular damage [27]. Serum levels of ADMA in chronic kidney disease increase due to its defective inactivation and excretion. Coen et al. [28] postulated that ADMA may play a role in the pathogenesis of vascular calcification in dialysis patients. Increased serum OPG is associated with type 2 diabetes, chronic kidney disease, and the severity of vascular calcification and coronary artery disease [29][30][31]. It could represent a compensatory mechanism for vascular damage, also showing a protective effect against vascular calcification [32,33]. Arterial calcification was associated with higher mortality in ESRD patients. To our knowledge, the relationship between vascular calcification assessed histologically and the long-term mortality in chronic kidney disease patients has not yet been studied. Ogawa et al. [34] examined the effect of CT-assessed aortic arch calcification on mortality in 401 hemodialysis patients during a 4-year follow-up period and demonstrated that cardiovascular mortality was significantly higher in patients with calcification. In our study, employing a different assessment model (radial artery and histology), this effect of arterial calcification on mortality in ESRD patients was confirmed. Conclusions Small samples of radial artery obtained during creation of vascular access for hemodialysis may successfully serve as a source of material for histological assessment of vascular mineralization. In end stage renal disease patients, impaired fasting glucose (prediabetes) and diabetes predict vascular calcification, which is significantly associated with higher mortality. These results indicate that the combination of renal insufficiency and hyperglycemic conditions exerts a synergistic effect on vascular calcification and increases the risk of death.
2016-05-15T03:41:43.220Z
2013-12-18T00:00:00.000
{ "year": 2013, "sha1": "7dc763fa27ad660f2040ef8b229df1882d2e48c5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2013/969038", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "444c79191a51690463d4869b908681601b87d3bc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
49658866
pes2o/s2orc
v3-fos-license
Human mesenchymal stromal cells inhibit platelet activation and aggregation involving CD73-converted adenosine Background Mesenchymal stromal cells (MSCs) are promising cell therapy candidates. Clinical application is considered safe. However, minor side effects have included thromboembolism and instant blood-mediated inflammatory reactions (IBMIR), suggesting an effect of MSC infusion on hemostasis. Previous studies focusing on plasmatic coagulation as a secondary hemostasis step detected both procoagulatory and anticoagulatory activities of MSCs. We now focus on primary hemostasis and analyzed whether MSCs can promote or inhibit platelet activation. Methods Effects of MSCs and MSC supernatant on platelet activation and function were studied using flow cytometry and further platelet function analyses. MSCs from bone marrow (BM), lipoaspirate (LA) and cord blood (CB) were compared to human umbilical vein endothelial cells or HeLa tumor cells as inhibitory or activating cells, respectively. Results BM-MSCs and LA-MSCs inhibited activation and aggregation of stimulated platelets independent of the agonist used. This inhibitory effect was confirmed in diagnostic point-of-care platelet function analyses in platelet-rich plasma and whole blood. Using inhibitors of the CD39–CD73–adenosine axis, we showed that adenosine produced by CD73 ectonucleotidase activity was largely responsible for the LA-MSC and BM-MSC platelet inhibitory action. With CB-MSCs, batch-dependent responses were obvious, with some batches exerting inhibition and others lacking this effect. Conclusions Studies focusing on plasmatic coagulation suggested both procoagulatory and anticoagulatory activities of MSCs. We now show that MSCs can, dependent on their tissue origin, inhibit platelet activation involving adenosine converted from adenosine monophosphate by CD73 ectonucleotidase activity. These data may have strong implications for safety and risk/benefit assessment regarding MSCs from different tissue sources and may help to explain the tissue protective mode of action of MSCs. The adenosinergic pathway emerges as a key mechanism by which MSCs exert hemostatic and immunomodulatory functions. Background Due to their numerous and promising therapeutic capacities, mesenchymal stromal cells (MSCs) are already applied clinically [1]. So far, clinical trials have documented the safety of MSC applications with rare and minor adverse events in humans [2][3][4]. In animals, however, increased death rates have been observed due to thromboembolic events after infusion [5][6][7][8][9]. Importantly, there has been one report of a patient dying following a pulmonary embolism, potentially related to the MSC application [10]. Thromboembolic events may occur via different mechanisms. First, the relatively large MSCs may be entrapped in and may subsequently occlude small vessels, particularly lung capillaries [7,9]. This may explain why, in animals, the cell dose and infusion velocity have been linked to embolic side effects [11]. Second, MSCs may impact hemostasis and actively promote coagulation through high expression of procoagulant molecules like tissue factor (TF), triggering the clotting cascade [6,[12][13][14]. In consequence, clinical trial protocols have already been modified to add antithrombotics such as heparin or hirudin [12,13,[15][16][17].
A recent study suggests selecting for TF-negative MSCs to avoid thromboembolism and IBMIR [14]. With respect to their procoagulant properties, MSCs from different tissue sources seem to differ at both the expression and functional levels [14,18,19]. In a mouse model, lipoaspirate-derived MSCs (LA-MSCs) were associated with higher risks of embolic events in comparison to bone marrow-derived MSCs (BM-MSCs) [9]. MSCs derived from placental decidua showed increased activation of plasmatic clotting compared to BM-MSCs [13]. In contrast, very few studies have considered the cellular component of thrombosis/hemostasis. Observed effects have varied from increased platelet thrombus formation [12,15,16] to antithrombotic properties in vascular grafts [20,21]. We have therefore analyzed the influence of MSCs on platelet activation. We compared MSCs from different tissue sources (BM, LA and cord blood), and evaluated whether conditioned medium or cells stimulate or inhibit the activation of resting or agonist-induced activated platelets. Platelet activation and aggregation were measured using different methods including diagnostic point-of-care techniques. Blood collection and preparation Blood was collected with 21-gauge butterfly needles from antecubital veins into citrate phosphate dextrose adenine (CPDA)-containing or hirudin-coated tubes. Donors were healthy volunteers who gave informed consent and who had not been taking any platelet-inhibiting medication for at least 2 weeks. Platelets were used for experiments immediately after collection. Either whole blood or platelet-rich plasma (PRP) was used, obtained by centrifugation of whole blood at 100 × g for 10 min. The PRP was diluted 1:1 with phosphate-buffered saline (PBS) before subsequent use. MSCs, HUVECs and HeLa cells Human MSCs from the three different tissue sources - bone marrow (BM), lipoaspirate (LA) and cord blood (CB) - as well as human umbilical vein endothelial cells (HUVECs) were isolated from multiple different donors and characterized as described previously [22][23][24][25]. All cells were stored cryopreserved in fetal bovine serum (FBS)/10% DMSO and were then thawed and cultivated for at least one passage before use. HUVECs were cultured in EGM-2 (Lonza, Basel, Switzerland), and MSCs and HeLa cells in DMEM (Lonza) supplemented with 10% FBS (PromoCell, Heidelberg, Germany), 4 mM glutamine and antibiotics. To standardize conditions for MSCs, HUVECs and HeLa cells, respectively, cells were seeded at a defined density in T175 flasks 2 days before performing the experiments: MSCs at 1 × 10 6 cells, passages 3-4 (to test for replicative aging also until passage 6); HUVECs at 2 × 10 6 cells, passages 3-5; and HeLa cells at 5 × 10 6 cells. Immediately before the experiments, the cells were detached with trypsin-EDTA, washed, counted and resuspended in PBS. The cell doses (10 5 , 5 × 10 5 , 2.5 × 10 6 cells/ml) employed for our study were calculated according to the cell numbers applied clinically [1]. Conditioned medium (CM) was collected 48 h after seeding 10 6 cells in T175 flasks. Pure culture medium served as a control. Flow cytometry Flow cytometry was performed on a BD FACSCanto™ II (Becton Dickinson, Heidelberg, Germany). Data were obtained with BD FACS Diva software and analyzed with FlowJo software (FlowJo, LLC, Ashland, OR, USA). Before stimulation, platelets were incubated at room temperature with the respective cells or CM for 10 min in the presence of the staining antibodies.
Following this, platelets were activated with TRAP-6 (protease-activated receptor 1 (PAR-1) agonist), ADP (P2Y1, P2Y12 and P2X1 receptor agonist) or U46619 (thromboxane A2 (TP) receptor agonist) (all 5 μM; Roche, Mannheim, Germany) for 10 min. Experiments were performed at staggered times, or samples were fixed directly after the stimulation period with 0.5% paraformaldehyde and then analyzed. Inhibitors Different mechanisms have been shown to interfere with platelet activation. To understand which is affected by MSCs, we used different inhibitors, as specified in the following [27][28][29]. CD62P was blocked by the mouse anti-human antibody AK-4 (eBioscience, ThermoFisher, San Diego, CA, USA). 50 μl of PRP was preincubated with 1 μg AK-4 or the respective isotype control for 20 min before adding the MSCs. Detection of ectonucleotidase activity Ectonucleotidase activity was measured in the cells as described previously [32]. Briefly, cells were seeded at 10,000 cells/cm 2 in 24-well plates and then incubated for 24 h. ATP, ADP and AMP (1 mM; Santa Cruz) were then added in phosphate-free buffer and incubated for either 1 h (ATP and ADP) or 30 min (AMP). Supernatant was harvested for protein quantification (BCA assay; Thermo Fisher, Waltham, MA, USA), inorganic phosphate quantification (malachite green assay kit, according to the manufacturer's instructions; Sigma Aldrich) or adenosine detection. Quantitative determination of adenosine by LC-MS/MS The samples were separated by HPLC (Agilent 1100, Waldbronn, Germany) using a LiChrospher 100 RP C-18, 5 μm column (125 mm × 4 mm) in combination with a gradient method of acetonitrile and 0.1% acetic acid at a flow rate of 500 μl/min. Mass spectrometric analysis was carried out using an API 4000™ quadrupole mass spectrometer (Applied Biosystems/MDS Sciex, Toronto, Canada) equipped with an electrospray ionization (ESI) source in the positive mode. MS/MS infusion experiments were performed to determine the specific mass transitions of adenosine (quantifier m/z 268 to m/z 136, qualifiers m/z 268 to m/z 119 and m/z 268 to m/z 92) for multiple reaction monitoring (MRM) analysis. All quantitative analyses were carried out using a sample volume of 10 μl containing adenosine-d5 as an internal standard. The adenosine content of the samples was determined by a standard calibration function in the required concentration range. Alkaline phosphatase and adenosine deaminase activity measurement Both enzymatic activities were measured in cell lysates with defined cell numbers by fluorometric assay kits according to the manufacturer's instructions (ALP assay kit ab83371 and ADA assay kit ab204695; both Abcam). Vasodilator-stimulated phosphoprotein phosphorylation state The phosphorylation state of vasodilator-stimulated phosphoprotein (VASP), indicative of cyclic nucleotide levels in platelets, was measured using cytometric bead technology (VASPFix; Platelet Solutions, Nottingham, UK) [33]. Briefly, PRP was coincubated for 10 min with 10 μM adenosine or 5 × 10 5 cells/ml, followed by addition of ADP (5 μM). After 5 min, 5 μl of the platelet suspension was mixed with 25 μl of VASPFix reagent, vortexed and incubated for 2 h. The VASP phosphorylation state was assessed by flow cytometry using an APC (bead) and FITC (VASP-P) dot-plot gate, assessing the change in VASP-P-FITC mean fluorescence intensity (MFI).
Platelet function analyzer Platelet adhesion, activation and aggregation were assessed in a system simulating the in-vivo hemodynamics in small capillaries (PFA-100; Siemens Healthcare Diagnostics, Eschborn, Germany). Citrated whole blood was aspirated at high shear rates through a small aperture coated with collagen and ADP. The gradual occlusion of the aperture by adhering platelets was measured as the closure time. Briefly, 900 μl of citrate-anticoagulated whole blood (platelet count > 150,000/μl and hematocrit > 35%) was mixed with either 10 μM adenosine or 5 × 10 5 cells/ml and incubated for 10 min. The whole blood suspension (800 μl) was added to the analysis cuvettes, the ADP/collagen measurement was started and the closure time recorded. Light transmission aggregometry The rate and extent of platelet activation, aggregation and agglutination were measured by light transmission aggregometry. PRP was stirred in a cuvette at 37°C and photometrically monitored. Agonist-induced activation and aggregation induce a change from light absorbance to increased transmission (Platelet Aggregation Profiler®, Model PAP-8E; MöLab GmbH, Langenfeld, Germany). Briefly, PRP was prepared by centrifugation at 150 × g for 10 min and then carefully removed. The remaining blood was centrifuged at 2700 × g for 15 min to obtain platelet-poor plasma (PPP). The PPP was used to calibrate the system to 100% light transmission. Measurements were made on 250 μl aliquots of PRP preincubated for 10 min with 10 μM adenosine or with 5 × 10 5 cells/ml at 37°C in the aggregometry cuvette, after addition of 5 μM ADP. Real-time quantitative PCR Real-time quantitative PCR (RT-qPCR) of procoagulant and anticoagulant factors was performed as described previously [34]. The RNeasy Mini Kit® (Qiagen, Hilden, Germany) was used for mRNA isolation, the Transcriptor High Fidelity cDNA Synthesis Kit (Roche Diagnostics, Mannheim, Germany) for cDNA transcription and the SensiFast™ Probe No-ROX Kit (Bioline, Luckenwalde, Germany) for PCR. The intron-spanning primers and probes (Universal Probe Library, Roche, Mannheim, Germany) presented in Additional file 1: Table S1 were used with a Light Cycler 480 (Roche). Relative quantification was performed using the E-method with GAPDH and SFRS4 as reference genes. The efficiency of all primers was in the range of 1.9-2.2. Statistical analysis For statistical analysis, the degree of platelet activation was quantified by the mean fluorescence intensity (MFI) using flow cytometry. For some experiments, data were normalized to the respective control without cells/stimulator/inhibitor added. Significance testing was performed using a paired t test, repeated-measures one-way ANOVA, one-way or two-way ANOVA followed by the Holm-Sidak or Dunnett's test, or the Kruskal-Wallis test for nonparametric data (Sigma Plot 11.0; Systat Software, San Jose, CA, USA; and GraphPad Prism 7; GraphPad Software, La Jolla, USA). MSC conditioned medium does not affect platelet activation To see whether MSCs produce any soluble factors influencing platelet activation, we quantified activation marker expression on resting (w/o agonist) and ADP- or TRAP-6-stimulated platelets. There was no effect of any conditioned medium (CM) on resting platelets (data not shown). On agonist-stimulated platelets, there was also no effect of BM-MSC or LA-MSC CM, independent of the added dose of 2.5-10% (Fig. 1).
Low concentrations of CB-MSC CM suppressed the activation of platelets (0.40 ± 0.034 for CD62p and 0.61 ± 0.036 for PAC-1 at 2.5% CM compared to the control set to 1), but higher concentrations did not suppress compared to the control (10% CM, 1.30 ± 0.23 for CD62p and 1.18 ± 0.19 for PAC-1; significant differences between 2.5 and 10% CM). HUVEC CM had no effect on platelet activation (1.05 ± 0.04 for CD62p and 0.96 ± 0.04 for PAC-1 at 10% CM). HeLa CM, however, increased the agonist-induced activation of platelets dose-dependently (from 0.89-fold at 2.5% CM to 4.05-fold at 10% CM for CD62p). MSCs inhibit agonist-induced platelet activation Having observed that CM does not affect platelet activation, we then assessed the effect in cell-platelet cocultures. When platelets were agonist-stimulated, all MSCs reduced the degree of platelet activation. All activation markers assessed were similarly affected (Fig. 2a). For CB-MSCs, however, donor-specific batch variation was apparent, with some batches even increasing platelet activation. HUVECs, as expected, reduced platelet activation, while HeLa cells had no significant effect. Next, we tested different MSC concentrations in the range of the concentrations employed in clinical treatments (10 5 , 5 × 10 5 , 2.5 × 10 6 cells/ml). There was a clear dose-dependent effect of BM-MSCs (Fig. 2c for CD62p; other markers not shown). Interestingly, for LA-MSCs this was not apparent, and with higher CB-MSC numbers the inhibitory effect was reduced, similar to the CM. HUVECs caused the expected dose-dependent inhibition, while HeLa cells had no significant effect at any concentration. Next, we checked whether the MSCs activate resting platelets. BM-MSCs, LA-MSCs and HUVECs did not activate resting platelets (Fig. 2b, d). Interestingly, 2.5 × 10 6 CB-MSCs/ml led to a 5-fold increase in CD62P expression and an 11-fold increase in PAC-1 binding, comparable to HeLa cells. We verified that the inhibitory effect was donor specific and not affected by cellular aging (passages 3-6; data not shown), as reported for other MSC properties [35]. Different culture supplements used to expand MSCs (FBS, human AB serum or human platelet lysate) did not affect the inhibitory capacity of either BM-MSCs or LA-MSCs (data not shown) [36]. Additional experiments were performed: under shear-flow conditions, BM-MSCs likewise appeared to reduce the number of platelet aggregates formed on fibronectin (Additional file 2: Figure S1); using impedance aggregometry in whole blood, all cell types reduced platelet aggregation (Multiplate device, Additional file 3: Figure S2); and potential platelet binding to MSCs was assessed by microscopy and flow cytometry. TRAP-6-stimulated platelets formed thrombi. HeLa cells and also CB-MSCs induced aggregation of activated and resting platelets (Additional file 4: Figure S3A-I, p = 0.004 for unstimulated vs stimulated platelets and p = 0.02 for unstimulated platelets vs stimulated platelets + HeLa cells), whereas BM-MSCs prevented stimulus-induced platelet aggregation (Additional file 4: Figure S3I). No platelet binding to any of the cells was apparent using flow cytometry, assessed by gating on MSC FSC/SSC and then assessing CD41 positivity (Fig. 3J, no variance between cells w/o platelets and with unstimulated or stimulated platelets, comparing n = 3 biological replicates for MSCs and HUVECs, respectively).
MSCs inhibit platelet activation independent of the activation pathway, P-selectin and cyclooxygenase
The fact that BM-MSCs and LA-MSCs significantly reduced platelet activation raised the question of whether specific activation pathways are affected. Using flow cytometry, we measured three individual activation markers, CD62p, PAC-1 and CD63, combined with three different platelet agonists, TRAP-6 (chosen for all subsequent analyses), ADP and U46619. Inhibition was apparent independent of the anticoagulant (data not shown) and for all three activation-dependent markers and agonists (Figs. 1 and 2, Additional file 5: Figure S4A, B). Based on these data we conclude that the MSC-mediated effect is not linked to one specific pathway but interferes with platelet activation globally. Endothelial progenitor cells have been shown to suppress platelet activation via CD62p and COX [27, 28]. However, neither the CD62p-blocking antibody AK4 nor the nonselective COX inhibitor indomethacin was capable of neutralizing the MSC inhibitory activity (Additional file 5: Figure S4C, D), although MSCs express COX-2 mRNA at highly differing levels (data not shown).

CD73-converted adenosine is involved in the MSC platelet inhibitory activity
We postulated that adenosine generated by CD73 ectonucleotidase activity may be responsible for the platelet inhibition. Adenosine has previously been shown to be inhibitory in endothelial-platelet interactions and to contribute to MSC immunomodulatory activity [29, 37-40]. Extracellular ATP metabolism provides the prothrombotic ligands ATP and ADP (released from dense granules upon platelet activation and hydrolyzed by CD39) and the antithrombotic AMP and adenosine (generated by CD73 activity), the latter detected by P1 adenosine receptors (on platelets, mainly the adenosine A2A receptor (A2AR)) [29, 41]. We first checked CD39, CD73 and A2AR expression. Platelets expressed both CD39 and A2AR at high MFI values, whereas CD73 was only dimly expressed (Fig. 3a). MSCs, in contrast, highly expressed CD73 but only dimly expressed CD39 and A2AR. HUVECs had the highest CD73 reactivity, with low CD39 and A2AR expression. To test our hypothesis, we measured platelet activation and, in parallel, the adenosine concentration in MSC-platelet cocultures (Fig. 3b). In the presence of MSCs and HUVECs, adenosine levels were increased irrespective of TRAP-6 stimulation. HUVECs, despite higher CD73 expression, showed adenosine levels only slightly higher than LA-MSCs and CB-MSCs, with comparable antithrombotic activity. BM-MSCs, which showed the highest adenosine concentrations, exerted the strongest inhibitory activity on platelet activation. These data support our hypothesis that MSC-generated adenosine confers the antithrombotic activity. In fact, the concentrations measured in cocultures were exactly within the inhibitory range of adenosine, strongest at 0.01-10 μM for both PRP and whole blood (see Fig. 6a). To quantify the activity of CD39 and CD73, cells were incubated with ATP, ADP and AMP, and with inhibitors of the nucleotide degradation cascade (POM-1 for CD39, AMP-CP for CD73 and SCH 58261 for A2AR); the released phosphate and adenosine were measured. ATP and ADP generated only minor amounts of phosphate and adenosine, indicating very low ATP/ADP degradative capacity. However, MSCs were able to catabolize AMP to phosphate and adenosine (Fig. 3b, c), and this was significantly inhibited by AMP-CP, indicating that CD73 activity is crucial for AMP conversion.
Despite the high expression of CD73 in HUVECs, they showed little phosphate and adenosine production. In BM-MSCs and LA-MSCs, POM-1 significantly inhibited phosphate, but not adenosine, generation, suggesting only a minor involvement of CD39. Platelets per se, whether unstimulated or stimulated, produced no detectable amounts of adenosine. To verify that CD73-converted adenosine regulates platelet reactivity, we added the aforementioned inhibitors to MSC-platelet cocultures. POM-1 (both 100 and 10 μM) significantly reduced TRAP-induced platelet activation (Fig. 4a, b; ADP and U46619 not shown) [42]. AMP-CP and SCH 58261 had no effect on platelet activation per se. POM-1 did not counteract the MSC-mediated antithrombotic activity. The CD73 inhibitor AMP-CP, however, significantly antagonized the inhibitory effect of BM-MSCs, LA-MSCs and HUVECs, supporting our notion that CD73-mediated adenosine generation causes platelet inhibition (Fig. 4c-f). In CB-MSCs, again, the data varied between cell batches. The adenosine receptor inhibitors, SCH 58261 (specific for A2AR) and the nonspecific P1 receptor inhibitor caffeine, partially reversed the inhibitory effects of BM-MSCs and LA-MSCs, demonstrating that adenosine sensed by adenosine receptors conveys the inhibitory signal. The facts that caffeine had the stronger effect in reducing MSC-mediated inhibition, while SCH 58261 strongly reduced the inhibitory activity of adenosine itself, suggest that A2AR is the main adenosine receptor on platelets, but that in MSC-platelet cocultures other P1 receptors, probably expressed on MSCs, predominate [43]. These findings indicate that the CD73-adenosine axis is a key mechanism of platelet inhibition by MSCs. It was striking that PAC-1 expression was strongly increased in the presence of the inhibitors, exceeding the expression level of stimulated platelets (set to 1). This suggests that when the inhibitory adenosine action is blocked, MSCs can accelerate induced platelet activation by acting on specific pathways. For HUVECs and BM-MSCs there was a large discrepancy between CD73 expression intensity and enzymatic activity, suggesting that other factors are involved or that CD73 expression does not correlate with enzymatic activity. Alkaline phosphatase (ALP) can act synergistically with CD73 to metabolize AMP to adenosine [31]. Adenosine deaminase (ADA) activity converts adenosine to inosine, terminating the inhibitory action of adenosine [30]. HeLa cells showed high ALP activity (Fig. 5a). BM-MSCs exerted a highly batch-dependent ALP activity, while the ALP activity of LA-MSCs, CB-MSCs and HUVECs was low. ADA activity was comparable for all samples, except for LA-MSCs, where two out of three donors showed high ADA activity. To test for their effects in MSC-platelet cocultures, we added the ALP inhibitor levamisole and ADA. Levamisole at a concentration of 1 mM inhibited platelet activation per se, similar to POM-1; at 100 μM this inhibitory effect was negligible (Fig. 5c). Levamisole slightly, but not significantly, reduced the inhibitory effect of all MSCs and HUVECs. Externally added ADA abolished the adenosine action and significantly reduced the inhibitory effect of LA-MSCs, probably adding to the intrinsic ADA activity of LA-MSCs to achieve adenosine neutralization.
Verification using additional platelet function analyses
To document that MSCs trigger adenosine signaling in platelets, and that the suppressive effect is relevant not only in PRP but also in whole blood, we performed further platelet function tests, including point-of-care technologies. First, we checked the inhibitory concentration range of adenosine in PRP and whole blood. Indeed, the adenosine concentrations obtained in MSC-platelet cocultures were within the inhibitory range of adenosine in both PRP and whole blood (Fig. 6a). Second, the phosphorylation state of vasodilator-stimulated phosphoprotein (VASP) was assessed in ADP-stimulated PRP. VASP-P levels are indicative of cAMP levels, known as master switches of platelet activation and aggregation [33, 41]. Adenosine activates adenylate cyclase, increasing cAMP levels and VASP phosphorylation, thus inhibiting platelet aggregation [44]. Indeed, adenosine increased the MFI of VASP-P, which was further raised by ADP stimulation (Fig. 6b). Without ADP stimulation, MSCs and HUVECs led to a slight increase in VASP-P, with a further rise upon ADP stimulation that was significant for BM-MSCs. CB-MSCs clearly split into two clusters, one inducing little VASP-P elevation after ADP addition while the other promoted VASP-P levels similar to BM-MSCs. HeLa cells did not influence VASP-P. These data support the conclusion that MSCs induce the same signaling events as adenosine. Third, we analyzed the MSC effects on platelet adhesion, activation and aggregation in whole blood using the platelet function analyzer (PFA-100), a well-established diagnostic point-of-care test [45]. As expected, adenosine prolonged the time needed to form a platelet plug closing the aperture (Fig. 6c). BM-MSCs, LA-MSCs and HUVECs likewise delayed platelet plug formation, while CB-MSCs exerted a minor effect. HeLa cells repeatedly caused a measurement error. Fourth, we used light transmission aggregometry (Platelet Aggregation Profiler, PAP-8E), which distinguishes between the different stages of platelet activation: primary aggregation, induced by the added agonist; secondary aggregation, induced by endogenous agonists; and maximal and final aggregation, which may differ when disintegration of platelet aggregates occurs. The speed of aggregation is measured as the primary slope, and area under the curve (AUC) values reflect the entire reaction cascade. Upon ADP stimulation, adenosine reduced the primary aggregation response, the aggregation speed and also the final aggregation compared with the control (Fig. 6d). The effects of BM-MSCs and LA-MSCs were similar, while CB-MSCs and HUVECs exerted a less pronounced platelet inhibition. HeLa cells (n = 1) showed a similar pattern, possibly related to the platelet donor (slight inhibitory activity was also seen in the corresponding flow cytometry experiments). Despite the low sample number, AUC values indicated statistically significant differences for adenosine, and all MSCs and HUVECs reduced platelet aggregation. In conclusion, all platelet function analyses performed confirmed an inhibitory effect of at least BM-MSCs and LA-MSCs on platelet function, to an extent similar to adenosine.

Discussion
With respect to the safety of MSC infusion, our paper combines a translationally relevant issue with an important basic research question about the underlying mechanism.
Focusing on the effect of MSCs on platelet function, we document that BM-MSCs and LA-MSCs, and with batch variations also CB-MSCs, inhibited the agonist-induced activation and aggregation of platelets, even more than endothelial cells, which are well known to regulate platelet reactivity. This inhibitory activity was confirmed in both PRP and whole blood by applying a variety of platelet function tests, including point-of-care diagnostic tests, underlining its physiological relevance. We identified the underlying mechanism to involve CD73-converted adenosine, as summarized in Fig. 8. Activated platelets release ATP and ADP from their dense granules. Subsequent dephosphorylation of these agonists to the antagonists AMP (by platelet CD39) and adenosine (by MSC CD73) induces P1 receptor signaling, raising cAMP levels and VASP phosphorylation. This finally stops the activation cascade and reduces excessive platelet reactivity. In fact, we confirmed that adenosine is produced in MSC-platelet cocultures at levels inhibitory for platelet function in PRP and whole blood. Using inhibitors of these enzymes and of the adenosine receptors, we verified the crucial role of CD73-converted adenosine. The CD39 inhibitor POM-1 had only minor effects on phosphate and adenosine release but inhibited platelet activation per se, consistent with CD39 expression and activity in platelets. In contrast, blockade of CD73 by AMP-CP resulted in a compensation of the MSC inhibitory effects, along with a significant inhibition of AMP hydrolysis to phosphate and adenosine, correlating receptor expression with function. Caffeine, an unspecific adenosine P1 receptor blocker, and SCH 58261, a specific A2AR antagonist, reduced the inhibitory effect of MSCs and adenosine.

Fig. 6 Platelet function analyses. a Dose-dependent effect of adenosine on platelet activation (MFI of CD62p and PAC-1, n = 3). b Phosphorylation state of vasodilator-stimulated phosphoprotein (VASP), VASP-P MFI levels. c Closure time, measured with the PFA-100 device as the time platelets need to close an ADP/collagen-coated aperture, comparing nontreated, adenosine-treated and cell-cocultured whole blood. ***p < 0.001. d Platelet function assessed using light transmission aggregometry (platelet aggregation profiler). The aggregation cascade can be separated into primary aggregation and slope, final aggregation (all left y axis) and area under the curve (AUC; right y axis) assessing the entire aggregation response. *p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.001, ****p ≤ 0.0001. ADP adenosine diphosphate, BM bone marrow, CB cord blood, HUVEC human umbilical vein endothelial cell, LA lipoaspirate, MFI mean fluorescence intensity, n/a not analyzed, PRP platelet-rich plasma, TRAP thrombin receptor activator for peptide, WB whole blood, w/o without

The fact that SCH 58261 had a stronger compensatory effect on adenosine than caffeine, but minor effects in MSC cocultures, suggests the involvement of further adenosine receptor subtypes, nucleotide-processing enzymes or nucleoside transporters [31, 40, 43]. It is, however, beyond the scope of this study to fully dissect the cascade of purinergic signaling. Recent studies have documented that MSC CD73-converted adenosine contributes to the immunomodulatory capacities of MSCs [37-40]. As in our study, CD39 and CD73 activity may be influenced by the tissue milieu (e.g. in cancer) and may require cooperation between different cell types [37, 40, 46]. We show that platelets express both CD39 and A2AR, but CD73 only weakly.
In contrast, MSCs had low CD39 and low A2AR expression but high CD73 expression. The observed differences between BM-MSCs, LA-MSCs and CB-MSCs may relate to differences in nucleotide hydrolysis activity [32]. In contrast to our data, other authors have detected CD39 expression in MSCs. Schuler et al. [40, 46] suggest that CD39 expression is largely influenced by the tissue source and activation state. Kerkelä et al. [37] observed CD39 expression only in BM-MSCs, but not in CB-MSCs, and LA-MSCs have been tested negative for CD39 [47]. In addition, MSCs from different murine tissues have been shown to differ with respect to ectonucleotidase activity: murine adipose tissue-derived MSCs had a significantly higher ATP hydrolysis capacity than BM-MSCs, although the AMP hydrolysis activity was comparable. The authors concluded that MSCs exert tissue-specific roles in regulating the purinergic system [48]. Besides CD39, ectonucleotide pyrophosphatases/phosphodiesterases (E-NPPs) could be involved in AMP generation, as shown for HeLa cells and HUVECs [49]. Our data indicate that CD73 is the key enzyme involved in antithrombotic adenosine production. Strikingly, the enzymatic activity of HUVECs was low despite high CD73 expression. Discrepant expression and activity data have been described and may be based on point mutations, splicing alterations or posttranslational modifications [50-53]. Other enzymes such as alkaline phosphatase could be involved in nucleotide metabolism [31]. ALP activity was detected at differing levels in HeLa cells and MSCs (highest in BM-MSCs, lower in LA-MSCs and low in both CB-MSCs and HUVECs). Using levamisole as an ALP inhibitor, the MSC inhibitory activity was reduced. ADA, which degrades adenosine to inosine, was also found to be active in all tested cells, with quite high activity in LA-MSCs. Documenting that MSCs induce adenosine signaling, we verified that VASP phosphorylation was increased by both MSCs and adenosine, indicating adenylate cyclase activity and increased cAMP levels. Thus, CD39-mediated ADP removal plus CD73-mediated adenosine production modulate platelet activation, as summarized in Fig. 8.

Fig. 7 RT-qPCR analysis of prothrombogenic and antithrombogenic genes. Gene expression analyzed in BM n = 3, LA FBS n = 3, LA human AB serum n = 2, CB n = 3, HUVECs n = 3, ECFCs n = 3, PBMCs, HepG2 cells and HeLa cells each n = 1. ***p < 0.001. BM bone marrow, CB cord blood, ECFC endothelial colony forming cell, FBS fetal bovine serum, HUVEC human umbilical vein endothelial cell, LA lipoaspirate, PBMC peripheral blood mononuclear cell

Of high relevance for translation to the clinic, MSCs exerted their inhibitory effects not only in PRP but also in whole blood, where leukocytes, erythrocytes and soluble enzymes add to purinergic signaling, exerting CD39 and CD73 activity and removing adenosine via equilibrative nucleoside transporters, respectively [34, 46]. Our data show that adenosine produced by MSCs can affect platelet activation despite the presence of these other cells, to an extent measurable in diagnostic point-of-care tests. We observed a common inhibitory activity of MSCs, with similar effects on all markers and agonist stimulations. Only after U46619 stimulation did the inhibitory effect appear weaker. It might therefore be possible that the thromboxane-induced pathway of platelet activation is less impaired by MSCs. Another possibility is that U46619 acts directly on MSCs.
Two reports indicate that U46619/thromboxane A2 affects MSC differentiation, migration and proliferation [54, 55].

Fig. 8 Graphical summary of the results. Upon agonist-induced platelet activation, ATP and ADP are released. These are converted to AMP by platelet CD39 activity. AMP is converted to adenosine by MSC-expressed CD73 and, to a low extent, by alkaline phosphatase. Adenosine signals via A2AR and other P1 receptors to raise cAMP levels and to induce VASP phosphorylation. This reduces further platelet activation. Inhibitors used are indicated in red. ADA adenosine deaminase, ADP adenosine diphosphate, ALP alkaline phosphatase, AMP adenosine monophosphate, ATP adenosine triphosphate, TRAP thrombin receptor activator for peptide, VASP vasodilator-stimulated phosphoprotein

Importantly, BM-MSCs and LA-MSCs had no effect on resting platelets. This fits data indicating that MSC-seeded nanofibrous scaffolds were protected from platelet adhesion and thrombus formation [21]. Only some batches of CB-MSCs induced activation marker expression in resting platelets, similar to the tested tumor cell line HeLa. The effect of CB-MSCs on resting platelets may indicate an increased thromboembolic risk associated with their application or, in a different setting, a beneficial hemostatic potential [13]. Expression of various prothrombotic and antithrombotic genes, however, was similar for the MSCs from the different cellular sources. These data support our previous findings that CB-MSCs differ from BM-MSCs and LA-MSCs in several aspects, namely frequency, differentiation, immunomodulation, cell marker expression and size [23, 34]. Besides the intrinsic heterogeneity of MSC preparations, CB appears to generate at least two distinct MSC-like populations [23, 56, 57]. It is a matter of future studies to correlate this heterogeneity to function. Interestingly, conditioned MSC medium had no impact on resting platelets or on the degree of agonist-induced activation, in contrast to previous reports suggesting a releasable ADPase activity in polymorphonuclear leukocytes [58]. We conclude that there is no significant production of soluble platelet-affecting substances under standard cell culture conditions, but that the inhibitory activity is exerted by cell-bound CD73, which metabolizes extracellular AMP to adenosine. As our major goal is to ensure the safety of MSC infusions, it is imperative to understand the effects of MSCs on hemostasis after systemic infusion. Hemostasis is a multistep process involving vasoconstriction, platelet plug formation and coagulation, and finally fibrinolysis. MSC involvement has been evaluated previously, focusing on individual steps:
1. MSC conditioned media may promote vasodilation of pulmonary artery rings [59].
2. TF expression by MSCs may cause thromboembolism, preventable by the use of, for example, heparin [12, 15].
3. Fibrinolytic activity may regulate migration and wound healing [60, 61].
As a fourth mechanism by which MSCs can influence hemostasis, this study shows that MSCs prevent excessive platelet responsiveness through CD73-converted adenosine. The strength of this study is the combination of a translationally relevant issue with an important basic research question about the underlying mechanism. Our conclusions build on robust data based on a large number of MSC and platelet combinations, with MSCs from three different tissues plus two control cell types, and with the effects demonstrated in both PRP and whole blood using a variety of different test systems.
The work opens some new lines of inquiry and questions to be answered in subsequent studies, including tissue-specific CD73 expression/activity and the interplay of other cell-bound and soluble nucleotide-processing factors.

Conclusions
Our study documents that MSCs do not induce platelet activation and thereby thrombus formation, but rather actively inhibit platelet activation through a CD73 activity generating antithrombotic adenosine. CB-MSCs show batch-dependent differences. Since CD73 activity has been further linked to tissue barrier function, adaptation to ischemic conditions/hypoxia and inflammation, this mechanism may contribute to the tissue-protective mode of action of MSCs.

Additional files
Additional file 1: Table S1. Primers and probes used for RT-qPCR analysis. (DOCX 20 kb)
Additional file 2: Figure S1. Effect of MSCs on platelet adhesion and aggregation under shear-flow conditions. To assess the effect of MSCs on platelet activation under shear-flow conditions, we performed microfluidic experiments using a pneumatically driven channel system (BioFlux, San Francisco, CA, USA) mounted on an inverted microscope capable of live-cell reflectance interference contrast microscopy (RICM), as described previously [31]. Briefly, channels were coated with 10 μg/cm² fibronectin (from human plasma, F2006; Sigma-Aldrich, St. Louis, MO, USA). The coated channels were filled with 300 μl of native whole blood, with and without 1.5 × 10⁵ BM-MSCs after hematocrit adjustment, and perfused at a constant shear stress of 5 dyne/cm². At the indicated points in time, RICM photographs of the channel footprints were taken and analyzed by counting the number of adherent/aggregated platelets. BM-MSCs n = 3. (TIF 1135 kb)
Additional file 3: Figure S2. Effect of MSCs on resting and agonist-induced platelet activation in impedance aggregometry. Impedance aggregometry experiments were conducted using the Multiplate® analyzer (Roche Diagnostics, Mannheim, Germany) [31]. Before stimulation, hirudinized whole blood samples were preincubated with the respective cells or CM for 10 min: a 7-min phase outside the device followed by 3 min of incubation in the aggregometer at 36°C under stirring. Then 3.3 μM ADP or 6.7 μM TRAP-6 was added for platelet stimulation. Aggregation was assessed for 6 min and determined as the area under the curve (AUC). Whole blood was incubated with two different concentrations of MSCs, HUVECs or HeLa cells; then 5 μM ADP or TRAP-6 was added to stimulate platelets and impedance was measured. AUC values were normalized to the respective control without cells.

Acknowledgements
The authors thank all donors for providing tissue and blood material. They also thank all colleagues from the German Red Cross Blood Donor Service for providing human serum and platelet lysate, and the stem cell laboratory for providing cord blood units. The authors would also like to thank Prof. Ilse Hofmann, DKFZ, Heidelberg, for providing HeLa cells. They acknowledge financial support by Deutsche Forschungsgemeinschaft within the funding program Open Access Publishing, by the Baden-Württemberg Ministry of Science, Research and the Arts and by Ruprecht-Karls-Universität Heidelberg.

Funding
The project was supported by a grant from the Stiftung Transfusionsmedizin und Immunhämatologie to PN.

Availability of data and materials
Data supporting the findings of this study are available within the article and its supplementary information files.
Authors' contributions
PN contributed to conception and design, collection and/or assembly of data, data analysis and interpretation, manuscript writing and final approval of the manuscript. SE-H contributed to collection and/or assembly of data, data analysis and interpretation, and final approval of the manuscript. SU contributed to data analysis and interpretation, and final approval of the manuscript. HK contributed to conception and design, financial and administrative support, and final approval of the manuscript. VH, FK, GB-W and KJ contributed to collection and/or assembly of data, data analysis and interpretation, and final approval of the manuscript. HS and PW contributed to provision of study material or patients and final approval of the manuscript. PB contributed to conception and design, manuscript writing and final approval of the manuscript. KB contributed to conception and design, financial support, administrative support, collection and/or assembly of data, data analysis and interpretation, manuscript writing and final approval of the manuscript.

Ethics approval and consent to participate
Platelet donors were volunteer healthy persons giving informed consent, who had not been taking any platelet-inhibiting medication for at least 2 weeks. The Mannheim Ethics Commission II confirmed that the regular blood donor questionnaire and informed consent documentation waives the need for ethical approval.

Consent for publication
Not applicable.

Competing interests
PW receives honoraria from and has membership on Advisory Boards for Sanofi-Aventis. The remaining authors declare that they have no competing interests.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
2018-07-08T16:21:07.404Z
2018-07-04T00:00:00.000
{ "year": 2018, "sha1": "cd9931c6272a45fbfc7df09779914cfc22405475", "oa_license": "CCBY", "oa_url": "https://stemcellres.biomedcentral.com/track/pdf/10.1186/s13287-018-0936-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cd9931c6272a45fbfc7df09779914cfc22405475", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
59132718
pes2o/s2orc
v3-fos-license
The effects of computer games on the achievement of basic mathematical skills
This study analyzes the relationship between playing computer games and learning basic mathematics skills, showing the role computer games can play in students' learning and achievement of basic mathematical skills. Nowadays it is clear that individuals, especially young people, are very fond of computers and computer games. Since students are so interested in computers, computers can be used to achieve educational and instructional objectives. This study therefore searches for evidence on whether computer games can be used to teach basic mathematical skills. The study was conducted in 2012 with 44 grade 5 elementary school students: 22 students made up the experimental group and the other 22 constituted the control group. The two groups studied basic mathematical skills in two different ways after their teacher had taught them as usual: one group studied the mathematical skills by playing math computer games, while the other did exercises as classical homework after being taught in the classroom. The SPSS t-test technique was used to examine the academic achievement of the two groups. All students were given a mathematics test of 25 questions on basic skills operations, which was used as both the pretest and the posttest. The results showed no significant difference between the group that learned basic mathematical skills with the aid of math computer games and the group that learned basic mathematical skills without playing computer games.

Mathematics helps people make sense of the world and enhances their social interaction and skills. It helps people analyze their various experiences and explain and solve problems systematically. It also facilitates creative thinking, provides aesthetic development and accelerates the development of individuals' reasoning skills in various mathematical situations (MEB, 2009, p. 7). However, as important as the subject is, many students still have a phobia of it. Some feel anxiety in learning it, owing to the difficulty of comprehending it. This has caused countless students to lose interest in its study, and some students even hate it. According to Garnett (1998), many students face math learning problems of different types; these learning difficulties range from mild to severe and require instructional attention and various treatment methods. Some of the most common math learning problems include: (a) difficulty memorizing basic number facts; (b) computational and arithmetic weakness; (c) confusion about terminology and the written symbolic notation system of school math; and (d) weak understanding of concepts due to visual-spatial organization deficits. Apart from lower performance in math exercises and tests, these math learning disabilities can also result in avoidance behavior and a negative perception of the subject. Often, students with math learning difficulties exhibit high math anxiety, which is defined as "a feeling of tension, apprehension, or fear that interferes with math performance" (Ashcraft, 2002). As a result, teachers should utilize teaching methods that capitalize on the importance of mathematics, help students develop their math skills and increase their self-efficacy beliefs (Meece et al., 1990). Moreover, it is very necessary to help students acquire a positive perception of mathematics, as this can lower math anxiety and raise math performance.
Although extensive studies have been done on educational computer games around the world, a wide gap still exists in studies focusing on the effectiveness of computer games in children's learning of particular school subjects. Hence, this research aims to investigate the effectiveness of computer-based games in facilitating children's learning of basic mathematics. To achieve this aim, the study seeks to fulfill two objectives: 1) to determine the relationship between the use of computer games and learning; 2) to determine the effectiveness of computer games in children's acquisition of basic mathematical skills. In order to achieve these objectives, this study attempts to answer the following research question: 1. What is the difference in learning achievement between the students who used computer games in learning basic mathematical skills and those who did not?

LITERATURE REVIEW
According to DeBell and Chapman (2006), of the 58,273,000 students of nursery and K-12 school age in the USA, 56% played computer games. Along with their popularity among students, computer games have received a lot of attention from educators as a potential way to provide learners with effective and fun learning environments (Oblinger, 2006). Gee (2005) agreed that a game turns out to be good for learning when it is built to incorporate learning principles. Some researchers have also supported the potential of games for the affective domains of learning and for fostering a positive attitude towards learning (Ke, 2008); for example, one study conducted on 1,274 1st- and 2nd-graders found a positive effect of educational games on students' motivation. Despite the overall support for the idea that games have a positive effect on affective aspects of learning, there have been differing results regarding the role of games in promoting cognitive gains and academic achievement. In their meta-analysis, Vogel et al. (2006) examined 32 empirical studies and concluded that the inclusion of games in students' learning resulted in significantly higher cognitive gains compared with traditional teaching methods without games. Similarly, Annetta et al. (2009) tested the effects of educational computer games by incorporating them into a 5th-grade science class and found significantly positive results in the students' performance. Ke (2008) tested the effects of cooperative computer game-playing on the math achievement of 125 5th-graders compared with competitive game-playing and non-game-playing groups, and observed significantly higher improvement in math performance in both computer game-playing groups compared with the non-game-playing group. However, other studies have not shown the same positive effects of games. Controlling for important contextual variables such as socioeconomic status (SES), gender and prior math achievement, Ke (2008) tested the effect of educational computer games compared with traditional paper-and-pencil drills and did not find a significant effect of games on the math achievement of 487 5th-graders. In another study, Ke (2008) recruited 4th- and 5th-graders to play educational math computer games during a summer math camp and measured their math ability at the onset of the program; at the post-test, the author found no significant effect of computer games on math achievement.
Past literature also indicated that game and play are among the best approaches for learning (Harel and Papert, 1991; Kafai, 2001). However, contemporary society and educational discourse regard human learning as something achieved only through non-playful processes, as the public associates gaining knowledge with hard labour. In contrast to this dominant belief that learning requires great effort and persistence, play and enjoyment can and should be considered an integral part of the learning process. In a recent study, Pareto et al. (2011) created a teachable-agent arithmetic game that aims at training basic arithmetic skills. The game was evaluated in a study with 153 participants, consisting of 3rd- and 5th-grade students, and the results indicate that it helped students improve their math performance and self-efficacy beliefs. Ahmad and Latih (2010) describe the development of an educational math game on fractions for primary school students. Similarly, Lee (2009) reports on the creation and evaluation of an educational game on fractions and mentions that it improved students' understanding and performance. According to Prensky (2001), learning requires extra effort. To make this effort, students must volunteer to be involved in learning activities. Students learn when they are motivated: they give their time as well as their effort, and they have the desire to use what they have learned in the future (Malone, 1980). Therefore, teachers should motivate and encourage students to participate in learning activities if they want them to learn. This can be achieved through the use of computer games, since games encompass many characteristics that make them valuable tools for the educational process. More specifically, computer games promote active learning (Oblinger, 2006) and the development of various skills (McFarlane, Sparrowhawk and Heald, 2002), while retaining their entertainment and appealing qualities (Kafai, 2001). In this technology age, lifelong and widely used mathematical operations can easily and accurately be done with technological products. In this situation, instead of giving information to students directly, they can be asked to do activities and be guided to gain skills through technology. Games can be used in the teaching of various disciplines. However, it is said that most teachers do not take games seriously: educators argue more in favour of the social benefits of games and ignore their educational potential (Squire, 2003). Shute (2011) stresses the use of instructional games to make learning more fun instead of banning games, and argues that such video games can be produced. Shute also specifies that parents can play computer games with their children and spend time together, so that children can learn skills and gain experience while having fun. The problem, however, is that the purpose of the school often does not coincide with the purpose of the games; therefore, the incorporation of school programs into games cannot easily be achieved. Recently, there has been more integration of education into games than integration of games into education. Games and group work, presentations and activities like drama will make students willing to participate in the course; these should be taken as a basis for the teaching process. The Mathematics Program in Turkey lays emphasis on problem solving, communication, association and the development of reasoning skills to prepare students for life (MEB, 2009).
If students succeed in problem solving, this can increase their confidence to solve new mathematical problems and thus enable them to demonstrate creative attitudes (MEB, 2009, p. 12). They will seek to restructure their knowledge when they learn to communicate using mathematics, and they will develop higher-order thinking skills. When they develop their reasoning skills and self-confidence, students will no longer see mathematics and its formulas as rules to memorize; rather, they will find mathematics enjoyable and meaningful. Turkey renovated its Elementary Mathematics Curriculum at the end of 2005, bringing a student-centered and activity-weighted approach to the school system. This renewed program was based on the experiences of developed countries, on national and international research in mathematics education and on previous mathematics programs in Turkey. The program leans on the principle that every child can learn math. Mathematical concepts are inherently abstract in nature and, considering children's developmental level, are very difficult to grasp directly; therefore, mathematics-related concepts are introduced starting from concrete, real-life models. The biggest problem in the expression and understanding of mathematics, as is well known, is the abstract nature of mathematical terms. The new mathematics teaching program reduced the number of learning objectives and presented subjects somewhat more concretely, with the intention of making them easier for students to understand. Despite this innovation in mathematics education, Turkey has not reached the desired level of success. To support this view, the results of the nationwide SBS (Placement Test) and of the international assessments TIMSS (Trends in International Mathematics and Science Study) and PISA (Programme for International Student Assessment) can be taken as references; the supporting data were collected from the website of the Ministry of National Education's General Directorate of Educational Technologies. In all of these tests, both national and international, Turkey cannot be considered successful in mathematical skills. This could be due to the methods, tools and materials used for instruction and teaching, which should therefore be changed. To date, traditional methods have been used in Turkey for teaching mathematics. These methods keep students passive and uninvolved in activity work; they may even eliminate the need to think. As a result, learning mathematics becomes boring for students. Mathematics classes ought to be full of fun, especially for students in the lower grades; this will motivate them to learn and to see the need to learn. Today's children are extremely interested in and enjoy computer games, and the objectives of mathematics can be realized through computer games as fun activities for children. While there is an alternative way to achieve the goal, insisting on the traditional ways is laborious and time-wasting. Cankaya and Karamete (2008) insist on using appropriate computer games in teaching mathematics, considering their positive effects. New technologies do not just bring convenience to users; they also modify their habits, emotions and cognitive abilities (Severin and Tankard, 1992; Griffin, 2000; Erdogan and Alemdar, 2002).
Educational computer games are games that allow students to learn subject matter, or software that develops problem-solving skills (Demirel et al., 2003). Yalın (2010) defines instructional media as the physical environment where teaching and learning take place. The learning environment has to capture the students' interest, make them follow the activities and enable them to continue willingly in the learning event. With this, students can enjoy the process, which also provides more lasting learning. Students learn best when they are motivated (Whitton, 2007), and mathematics learning involving the use of computer games brings motivation to both students and educators. Using computer games to arouse and engage children's interest in teaching mathematics is a widely accepted idea, and much research has been done on the subject. Many school and home computers have an internet connection, and computer-assisted learning applications increasingly aim to capture children's interest. Students spend a lot of time and energy on computers, which tends to keep them away from their school learning; in this case, it becomes rational to embed their school work in computer games. Jonker and Wijers (2008) note that math is not a subject that students take an interest in easily and naturally, so it is necessary to use a method that makes them active. They mention the Th!nklets program, which they used in their study to analyze the effects of computer games on problem solving as well as on motivation and interest; the results of many studies are parallel with theirs. Kula and Erdem (2005) examined the impact of computer games on the development of basic arithmetic skills. They reported that computer games did not produce a statistically significant difference and did not affect the arithmetic skills of the students, noting that the results may be limited to the "Add them up" program (2005, pp. 127-135). Demirel et al. (2003) indicate that computer games have great potential for both commercial and educational purposes. It is frequently repeated that computer games are beneficial for learning, but how to integrate them into educational programs remains a puzzle. Roberson (2004) emphasizes that if educational games can be integrated into classroom activities, they will have a positive impact on learning. In recent years it has been observed that children devote a long time to computer games: an average of 4 h per week in 1980, while current use (especially among boys) exceeds 10 h per week (Bayırtepe and Tüzün, 2007, p. 41). Educational games are software that allows students to learn lessons or improve their problem-solving skills (Bayırtepe and Tüzün, 2007). DeBell and Chapman (2006, p. 37) revealed that 59% of children aged 5-17 and 63% of children aged 11-17 use computers at home to play games. Green and McNeese (2004) stated that if the factors and features of games are examined systematically, this theoretical approach will help in the realization of effective teaching.
Cognitive changes caused by new digital technology and communication tools have led to changes in the needs and preferences of young people, especially in education. Tapscott (1997) explains that today's young people are much different from their parents in learning, playing, communicating, working and forming groups for work and creation. This is perhaps the biggest change taking place in education. Children want to play, and in the technological environment their learning styles and habits are changing rapidly. As a main learning area, mathematics becomes increasingly boring if it does not involve technology. Research even demonstrates that the educational purpose of games should be well hidden inside them, and that the entertainment features of the game should be integrated into teaching (İnal, 2005, p. 145). Kurfallı (2005) studied "The Effect of Computer Games on Education Activity". In the study he asked how often, how long, where and what kind of computer games adolescents play; whether this activity blocks their social activities; under what conditions they play the games; and whether or not they benefit from these games in their learning activities. Kurfallı found positive effects of computer games on students' learning. Computer games contribute to adolescents' achievement levels in education and develop their creativity, because they allow the exchange of information (Yeşilyurt, 2014, p. 9). To date, there is not enough evidence on teaching goals that blend education and games; as a result, much more extensive research on this subject is needed. Tüzün (2006) states that computer games meet children's needs for development, improvement and success. For children at the developmental age, digital games develop hand-eye coordination, and they have also been found to enhance problem-solving and multitasking skills. In addition, computer games can be used to evaluate the success of students without their being aware of it; such unobtrusive assessment has advantages over conventional methods, and computer games may be used for this purpose. While students are playing games, they can produce different responses to problems, and teachers can see where students are doing well and where they are not. Computer games can also help to assess students' progress: they not only determine a student's standing in a specific area, they also reveal the areas that need improvement, where feedback should be used. Computer games also provide an opportunity for quantitative evaluation, and individual arrangements can be made on this basis (Shute, 2011). Research reveals that computer games have both negative and positive effects; the negative effects emerge as psychological and physiological problems (Gürcan et al., 2008, pp. 7-8).

Research material
This study was carried out with a pretest-posttest model applied to control and experimental groups. The study used a computer game, the Scholastics program, originally designed to teach basic mathematics, with the experimental group students.
The Scholastics program was prepared for the students and teachers of elementary schools according to the new Turkish Elementary Curriculum and can be accessed on the internet. With it, students can use audio and visual activities, educational games and online tests; they can also read e-books, solve problems, take practice tests and play games. It is an online, paid service whose content was commissioned by a particular publishing firm. The progress of students who are members of the program can be monitored by their teachers. Through the program, primary students of grades 2-8 are taught the Turkish, Mathematics, Social Studies, Science and Technology, and English lectures and courses of the curriculum, along with appropriate individual exercises they can use in the classroom. Teachers can assign homework through the program and check the results online. To increase motivation at each grade level, various educational games are offered to students for each subject. Students may take the tests eight times a year; exams can be reviewed individually and collectively, and one lesson or several lessons can be observed collectively. This program was used in the present study because students and teachers use it in their daily life and it is being developed continuously.

The study aims to generate a variety of data sets, allowing a comparison between students who participated in the game and students who did not. The data sets include gender, race and locality (urban students versus rural students). The various data groups were tested using identical testing situations and materials to allow a quantitative comparison of the scores of students participating in the game and those not participating in the game.

Experimental design
This research was performed in an elementary school in İstanbul, Turkey. The 44 participating students were 5th graders from the same class, and the research was conducted from March 25th to April 2012. The 5th-grade Primary Mathematics Curriculum unit "Four Operations on Natural Numbers" was used. The experimental group and the control group each comprised 22 students, and both groups were given a 25-question pretest, which was also used as the posttest. All students first studied the subject through the classical method in the classroom and took the 25-question test as a pretest after their teacher had taught them in the usual way. The students were then separated into the experimental and control groups according to their pretest scores; at first, the students were asked which group they wished to join. After grouping, the students were given homework and exercises on the subject. The assignment of the experimental group was to use the Scholastics program's game to solve math problems on "Operations on Natural Numbers". The next day, both groups were given the same test as a posttest. The scores were determined and the pretest and posttest results of the two groups were compared. The data were analyzed with the SPSS t-test technique, as sketched below.
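The analysis itself was run in SPSS; purely for illustration, a minimal re-implementation sketch in Python (SciPy) is given below, assuming one score per student. The generated arrays are hypothetical placeholders; the study's actual scores are summarized in Tables 1-4.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 25-question test scores for 22 students per group.
exp_pre = rng.normal(69, 19, 22)
exp_post = rng.normal(75, 17, 22)
ctrl_pre = rng.normal(69, 19, 22)
ctrl_post = rng.normal(74, 17, 22)

# Baseline equivalence: independent-samples t test on the pretests.
print(stats.ttest_ind(exp_pre, ctrl_pre))

# Within-group gains: paired t tests of posttest against pretest.
print(stats.ttest_rel(exp_post, exp_pre))
print(stats.ttest_rel(ctrl_post, ctrl_pre))

# Between-group comparison of the posttests.
print(stats.ttest_ind(exp_post, ctrl_post))
```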
RESULTS
The results of the research are presented in tables. In the pretest and posttest results of the experimental and control groups (Table 1), the means range from 69.0 to 75.6 points, and the standard deviations vary from 19.4 to 16.7. The differences between the pretest and posttest means of the groups can be seen in Table 2.

As seen in Table 2, there is a difference between the average scores of the experimental group and the control group: before the implementation, the pretest difference is 0.27 points in favor of the control group. However, this difference is not statistically significant (p = 0.68); thus, the two groups can be considered equivalent and the implementation could begin. The experimental group's average posttest score is higher (0.55) than its average pretest score; this difference is statistically significant (p = 0.002), meaning the experimental group improved after the intervention. The control group's average posttest score is likewise higher than its pretest average: after the implementation of the homework, both groups had learned, obtained high posttest scores and became successful. The corresponding statistical differences are reported in Tables 3 and 4. The last comparison is between the posttest averages of the two groups. There is a difference of 1.82 points in favor of the experimental group, but it is not statistically significant (p = 0.448) (Tables 3 and 4).

DISCUSSION AND CONCLUSION
To date, researchers have not been able to clearly establish the effects of computer games on the instructional objectives of schools; thus, more research is needed. The topic is difficult to study because of the nature of instructional objectives and their connection with games, and it requires more detailed analysis. In this study, the problem is: what is the impact of doing exercises and assignments using computer games, compared with classical exercises, in teaching basic mathematical skills? Like other results, this study found no significant difference between the two methods. Ke (2008) recruited 4th- and 5th-graders to play educational math computer games during a summer math camp and measured their math ability at the onset of the program; at the post-test, he found no significant effect of computer games on math achievement. Anderson and Bushman (2001) and Provenzo (1991) viewed video games as promoting violence, social isolation, aggression or negative imagery of women; in this light, video games have been regarded as pure entertainment. On the other hand, Wilson et al. (2006) created an adaptive computer game for dyscalculia and tested it in a five-week evaluation study with nine children with math learning difficulties; the results indicated an increase in the children's math performance on core number-sense tasks, as well as an improvement in their confidence in their mathematical abilities. Annetta et al. (2009) tested the effects of educational computer games by incorporating them into a 5th-grade science class and found significantly positive results in the students' performance. Zavaleta et al. (2005) suggest in their study that the use of a commercial game for elementary school algebra enhanced students' achievement. Kebritchi et al. (2010) investigated the impact of commercial math games on 193 high school students' math performance and found positive results in the students' perception of mathematics, motivation and achievement. In our study, using computer games to learn basic arithmetical skills was no more effective than using the classical ways. Consequently, it can be said that the subject needs more studies, and of course from different perspectives.
We could not obtain evidence that one group is better than the other, for two reasons. First, the sample size is rather small (n = 22); the experiment should be repeated with a bigger sample. Second, before the experiment, results indicated that the control group's score was slightly higher than the experimental group's score; however, the pretest results suggest that the experimental group might show a meaningful difference if a larger sample could be measured more sensitively. It must not be overlooked that if the number of students is increased, the results may turn out positive for the experimental group. To establish the positive effect of computer games, more studies using different and larger groups are needed; new research could use a cross-over design, which may bring the result closer to a positive one. Developments in information technologies draw children closer to computer games; as a result, computer games can be used for achieving educational goals (Altan and Tüzün, 2010), and it is still emphasized that computer games are beneficial for learning and can be used for school instruction (Barab et al., 2008). When using instructional tools like computer games in education, educators must draw on learning theories and research: cognitive theories, multiple intelligences, constructivist theories, cognitive load theory, cognitive flexibility theory and multimedia theories. It is important for games to follow the principles of these theories in order to achieve instructional goals. For example, Multiple Intelligences Theory states that individuals have eight kinds of intelligence, and these intelligences should be used during teaching and learning (Green and McNeese, 2004).

Table 1. The results of pretests and posttests of the experimental group and control group.
Table 2. Mean differences and 95% confidence intervals of the pretests and posttests of the two groups.
Table 3. Results of the paired-groups t test.
Table 4. Correlation of the groups' pretest-posttest scores.
2018-12-17T16:15:18.857Z
2015-11-23T00:00:00.000
{ "year": 2015, "sha1": "76943065c0cfeaa0368e9fced53212d01c73f1ee", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/ERR/article-full-text-pdf/D6BB1F456358", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "76943065c0cfeaa0368e9fced53212d01c73f1ee", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Psychology" ] }
259947759
pes2o/s2orc
v3-fos-license
Compliance with Pregnancy Prevention Recommendations for Isotretinoin Following the Amendment of the European Union Pregnancy Prevention Program: A Repeat Study in Estonia

Background: Isotretinoin, indicated for severe acne, is a potent teratogen and therefore contraindicated in pregnancy. Thus, the pregnancy prevention program (PPP) for isotretinoin has been introduced.
Objectives: The aim of this study was to assess the concomitant use of isotretinoin and effective contraception and the rate of potential isotretinoin-exposed pregnancies in females of childbearing age in 2017–2020 in Estonia. In addition, we aimed to evaluate whether compliance with the PPP has improved compared with the previous study conducted in Estonia covering the period of 2012–2016.
Methods: This retrospective, nationwide study using prescription and healthcare claims data included 2575 females aged 15–45 years who started using isotretinoin between 2017 and 2020.
Results: For 64.7% of females of childbearing age, no concurrent use of an effective contraceptive was detected while using isotretinoin. A moderately higher contraceptive coverage (35.3%) was observed compared with the previous study (29.7%) (p < 0.001). Complete contraception coverage was highest in females aged 30–39 years, with an adjusted OR of 12.8 (p < 0.001) compared with the age group 15–19 years and 2.47 (p < 0.001) compared with the age group 20–29 years. Seventeen pregnancies coincided with the isotretinoin treatment-related period. The risk for potential isotretinoin-exposed pregnancy was 6.6 (95% CI 3.9–10.5) per 1000 treated females of childbearing age over the 4-year observation period. The risk for potential isotretinoin-exposed pregnancies per 1000 treated females was 1.0 in females aged 15–19 years, 11.6 in females aged 20–29 years, 8.8 in females aged 30–39 years, and 7.4 in females aged 40–45 years (p = 0.009).
Conclusion: A slight improvement in complete contraceptive coverage during isotretinoin use has not resulted in a decrease in the risk of isotretinoin-exposed pregnancies. The contraceptive usage and risk for pregnancy vary greatly across age groups, suggesting the need for a more targeted approach to improve the effectiveness of the PPP.

Introduction
Isotretinoin is primarily used for the treatment of severe forms of acne vulgaris, a common cutaneous disorder that can have a profound psychologic impact, contributing to low self-esteem, depression and anxiety [1]. Although isotretinoin can be highly effective, both prescribers and patients must perceive the serious risks associated with its use. The most crucial safety concern of isotretinoin in females of childbearing age is teratogenicity [2, 3]. Like all retinoids, isotretinoin, even if used for a short period, is associated with a high risk of severe, life-threatening pregnancy outcomes and is therefore contraindicated in pregnancy. The most typical major malformations, occurring in about 25% of infants exposed to isotretinoin during the first 20 weeks of pregnancy, are craniofacial, cardiac, thymic and central nervous system abnormalities [3]. To ensure that females of childbearing potential are not pregnant when starting isotretinoin therapy and do not become pregnant while using isotretinoin, the pregnancy prevention program (PPP) for Roaccutane was first introduced in 1988 [4]. In 2003, due to the launch of generic formulations of isotretinoin and divergences in the product information across the European Union (EU), the harmonized EU PPP for all oral isotretinoin-containing medicinal products was implemented [5]. In Estonia, the measures of the EU PPP were implemented in 2005 after joining the EU. A study conducted in Estonia assessing compliance with the PPP during isotretinoin treatment in females of childbearing age between 2012 and 2016 showed that 29.7% of females had either full or partial contraceptive coverage [6]. The risk for potential isotretinoin-exposed pregnancy was 3.6 per 1000 treated females during the 5-year observation period [6]. In the literature, estimates of contraceptive coverage while using isotretinoin vary greatly depending on the methods and data used. In studies using administrative data, contraceptive coverage has ranged from approximately 30% in Canada [7] and the United States [8] to about 50% in the Netherlands [9]. A systematic review of studies and case reports in Europe has shown a pregnancy incidence of 0.2-1.0 per 1000 women of childbearing age using isotretinoin [4]. Concerns about noncompliance with the PPP in clinical practice, notices of shortcomings in the PPP, and inconsistencies in the educational materials led to a European Medicines Agency (EMA) review of measures for pregnancy prevention during retinoid use. As a result of this review, updated PPP measures were implemented in June 2018, including simplified physician and patient educational materials that pointed out more clearly the teratogenicity risk and the need to use at least one highly effective method of contraception (i.e. a user-independent method) or two complementary user-dependent methods [5]. A recent survey conducted in Ireland found that although healthcare professionals were highly aware that isotretinoin should not be prescribed to women of childbearing age unless PPP conditions are met, the application of this knowledge differed significantly in clinical practice [10]. Our study aimed to assess the concomitant use of isotretinoin and effective contraception and the rate of potential isotretinoin-exposed pregnancies in females of childbearing age in Estonia during 2017-2020, and also to evaluate whether compliance with the PPP has improved since the previous study conducted in Estonia covering the period of 2012-2016.

Study Design and Data
This retrospective, nationwide drug utilization study included females aged 15-45 years who started using isotretinoin between 2017 and 2020. The STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) Checklist for cross-sectional studies was used in the reporting of this study [11]. Prescription and healthcare claims data were obtained from the Estonian Health Insurance Fund (EHIF), the national health insurance provider in Estonia. During the study period, about 95% of the Estonian population was covered by the EHIF [12, 13]. To identify females who started using isotretinoin, data on isotretinoin (ATC code D10BA01) prescriptions issued between November 2016 and December 2020 were retrieved. For women receiving isotretinoin, prescription data on dispensed hormonal contraceptives (ATC groups G02B, G03A and G03DA02) and healthcare claims data on the insertion or management of an intrauterine contraceptive device (IUD) (International Classification of Diseases [ICD-10] diagnostic codes Z30.1 and Z30.5) from November 2012 to February 2021 were retrieved. In addition, healthcare claims data related to pregnancy (ICD-10 diagnostic codes Z32.1, Z33, Z34, Z35, O00-O08) from November 2016 to March 2021 were obtained.

Use of Isotretinoin
Females aged 15-45 years filling at least one isotretinoin prescription during the study period were included in the analysis. To calculate treatment episodes, the daily treatment doses indicated on the prescription were used. Dosage regimens and physicians' comments were reviewed by two researchers. Treatment episodes were calculated based on consecutive purchases and the daily dose, also taking overlap into account. If the gap between the end of the calculated episode and a new purchase was more than 60 days, the new purchase was considered to start a new treatment episode. Only the first treatment episodes beginning between January 2017 and December 2020 were included in the final analysis (first episodes accounted for about 82% of all calculated treatment episodes). To exclude treatment episodes that started before 2017, we used a 2-month wash-out period from November to December 2016; at the time, isotretinoin had a 30-day supply limit, so this 2-month period was considered sufficient. Prescribers' specialties (dermatovenerologists, general practitioners, other) and the diagnoses for which isotretinoin was prescribed (acne, rosacea, seborrheic dermatitis, other follicular disorders, or other) were also identified.

Use of Effective Contraception
Effective contraception was defined as the use of hormonal contraceptives (excluding emergency contraceptives) or non-hormonal IUDs. To estimate the time covered by purchased oral, vaginal and transdermal hormonal contraceptives, defined daily doses were used [14]. For IUDs and implants containing progestogen, the duration of use was set to 5 or 3 years, depending on how long the specific product was intended to be used [15]. For depot medroxyprogesterone acetate injections, the duration of use was set to 13 weeks [16]. In addition, contraception was also identified from healthcare services indicating the insertion or surveillance of an IUD, with a 5-year duration of coverage from the date the service was provided.
The concerns about noncompliance with the PPP in clinical practice, notices of shortcomings in the PPP, and inconsistencies in the educational materials led to a European Medicines Agency's (EMA) review on measures for pregnancy prevention during retinoid use.As a result of this review, updated PPP measures were implemented in June 2018, including simplified physician and patient educational materials that pointed out more clearly the teratogenicity risk and the need to use at least one highly effective method of contraception (i.e. a user-independent method) or researchers.Treatment episodes were calculated based on consecutive purchases and the daily dose, also taking into account the overlap.If the gap between the end of the calculated episode and the new purchase was more than 60 days, it was considered a new treatment episode.Only the first treatment episodes beginning from January 2017 to December 2020 were included in the final analysis (first episodes accounted for about 82% of all treatment episodes calculated).To exclude treatment episodes that started before the year 2017, we used a 2-month wash-out period from November to December 2016.At the time, isotretinoin had a 30-day supply limit, thus this 2-month period was considered sufficient. Prescribers' specialties (dermatovenerologists, general practitioners, other) and diagnoses for which isotretinoin was prescribed (acne, rosacea, seborrheic dermatitis, other follicular disorders, or other) were also identified. Use of Effective Contraception Effective contraception was considered as the use of hormonal contraceptives (excluding emergency contraceptives) or non-hormonal IUDs.For estimating the covered time of purchased oral, vaginal, and transdermal hormonal contraceptives, defined daily doses were used [14].For IUDs and implants containing progestogen, the duration of use was set to 5 or 3 years, depending on how long the specific product was intended to be used [15].For depot medroxyprogesterone acetate injection, the duration of use was set to 13 weeks [16].In addition, contraception was also identified by healthcare service indicating an insertion or a surveillance of an IUD with a 5-year duration of coverage since the date of service provided. Complete contraception coverage was defined as consistent use of contraceptives for 30 days before the start of the isotretinoin episode until 30 days after the end of the isotretinoin episode, as described as being mandatory in the summary of product characteristics and the PPP of isotretinoin.Consequently, partial coverage was identified if contraception was missing at some point in time during the period described above.If the isotretinoin episode did not coincide with any use of contraceptive, it was defined as the absence of contraception. Potential Isotretinoin-Exposed Pregnancies Pregnancy was determined based on ICD-10 codes indicating early pregnancy (Z32.1,Z33, Z34, Z35, O00-O08).Potential isotretinoin-exposed pregnancy was detected if a pregnancy-related healthcare service coincided with the period from the beginning of the isotretinoin episode to 30 days after the end of the isotretinoin episode.All detected pregnancy cases were reviewed manually by researchers KK and MI to ensure that none was counted more than once. Statistical Analysis Statistical analysis of the current study was performed similarly to the Uusküla et al. 
2018 study [6], to enable comparison of the results. Descriptive statistics were used to summarize patient characteristics and isotretinoin and contraception usage. Females were stratified into the following age groups: 15-19, 20-29, 30-39, and 40-45 years. Fisher's exact test was used to test statistical differences between the age groups and contraception coverage. Factors associated with contraception coverage during isotretinoin treatment were assessed using logistic regression analysis, generating univariate and multivariable estimates of odds ratios with 95% confidence intervals. The risk for potential isotretinoin-exposed pregnancy with a 95% confidence interval was calculated per 1000 treated females, stratified by age group, year, and contraception coverage. Statistical analysis was performed using R statistical software, version 4.1.2 [17].

Ethical Considerations

The study procedures were conducted in accordance with local data protection regulations. The study was approved by the Health Development Institute Ethics Committee (Decision 643, 23 February 2021, Estonia).

Results

Characteristics of Study Subjects and Treatment Courses

During the study period, 2975 females started using isotretinoin in Estonia, and most of them (86.6%, n = 2575) were of childbearing age (15-45 years). The majority of females of childbearing age were young: 40.4% were aged 15-19 years and 36.7% were aged 20-29 years, while 17.6% were 30-39 years old and 5.2% were 40-45 years old at the start of isotretinoin use. The incidence of isotretinoin use was consistent over the study years, with approximately 2.7 users per 1000 females of childbearing age in the population. However, the number of prescriptions per user decreased significantly in 2020, with only 3.4 prescriptions compared with 4.6-4.8 prescriptions in the years 2017-2019 (p < 0.001).

Effective Contraception Coverage

For the majority of females of childbearing age (64.7%, n = 1667), it was not possible to detect concurrent use of an effective contraceptive with isotretinoin. The absence of contraception was highest among young females aged 15-19 years (83.1%). Complete coverage in this study period was significantly higher (by 4.9 percentage points, 95% CI 2.8-7.0; p < 0.001) than in the previous study covering the years 2012-2016. This improvement was statistically significant in the age groups 20-29 years and 30-39 years, but not in the age groups 15-19 years or 40-45 years (Table 1). When comparing females with complete coverage with those who were not or only partially covered, age was the most significant factor after adjustment. Complete contraception coverage increased with age and was highest in females aged 30-39 years, whose adjusted odds ratio (OR) for complete coverage was 12.8 (95% CI 9.26-17.91; p < 0.001) compared with females aged 15-19 years and 2.47 (95% CI 1.93-3.16; p < 0.001) compared with those aged 20-29 years (Fig. 3). During the study period, a statistically significant improvement in complete coverage was seen only for 2019 (adjusted OR 1.38, 95% CI 1.03-1.85; p = 0.030) compared with 2017.
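Since the coverage figures above depend directly on how treatment episodes and coverage windows are constructed, the following is a minimal Python sketch of the episode-construction and coverage-classification rules described in the Methods (the 60-day gap rule; coverage required from 30 days before episode start until 30 days after episode end). All function and variable names and the toy data are illustrative, not drawn from the study data.

```python
from datetime import date, timedelta

GAP_DAYS = 60      # a gap of >60 days after an episode ends opens a new episode
WINDOW_DAYS = 30   # coverage required 30 days before start until 30 days after end

def build_episodes(purchases):
    """Merge (dispense_date, days_supplied) pairs into treatment episodes.

    Consecutive purchases extend the running episode (overlap is absorbed by
    keeping the later end date); a gap of more than GAP_DAYS opens a new episode.
    """
    episodes = []
    for day, supply in sorted(purchases):
        end = day + timedelta(days=supply)
        if episodes and (day - episodes[-1][1]).days <= GAP_DAYS:
            episodes[-1] = (episodes[-1][0], max(episodes[-1][1], end))
        else:
            episodes.append((day, end))
    return episodes

def daterange(first, last):
    d = first
    while d <= last:
        yield d
        d += timedelta(days=1)

def classify_coverage(episode, covered):
    """Classify an episode as 'complete', 'partial', or 'absent'.

    covered: list of (start, end) date intervals with effective contraception.
    'Complete' requires every day of the extended window to be covered;
    'absent' means no contraception coincided with the episode itself.
    """
    start, end = episode
    window = daterange(start - timedelta(days=WINDOW_DAYS),
                       end + timedelta(days=WINDOW_DAYS))
    if all(any(s <= d <= e for s, e in covered) for d in window):
        return "complete"
    overlaps_episode = any(s <= end and e >= start for s, e in covered)
    return "partial" if overlaps_episode else "absent"

# Toy example: two purchases 5 days apart merge into one episode
eps = build_episodes([(date(2017, 3, 1), 30), (date(2017, 4, 5), 30)])
print(eps)
print(classify_coverage(eps[0], [(date(2017, 1, 1), date(2017, 12, 31))]))  # complete
```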
Potential Isotretinoin-Exposed Pregnancies

During the study period, 17 pregnancies coincided with isotretinoin treatment. Of these pregnancies, 11 (64.7%) were terminated by medical abortion, two (11.8%) were spontaneous abortions, and four pregnancies (23.5%) were ongoing (no data on the outcome). On average, these pregnancies were diagnosed 90 days (SD 67, range 14-279) after the start of isotretinoin treatment. The risk for potential isotretinoin-exposed pregnancy was 6.6 (95% CI 3.9-10.5) per 1000 treated females of childbearing age over the 4-year observation period. Compared with the previous study (3.6 per 1000 treated females), the risk was not statistically significantly higher (95% CI −1.19 to 7.23; p = 0.171).

Discussion

In this nationwide repeat study, somewhat higher contraceptive coverage (35.3%) was observed in females of childbearing age using isotretinoin compared with the previous study (29.7%) [6] or compared with the overall prevalence of hormonal contraceptive use among females of childbearing age in Estonia in 2019 (28.5%) [18]. However, considering that isotretinoin is a potent teratogen, this rate is not sufficient to meet the objective of the PPP, as a higher absolute number of potential isotretinoin-exposed pregnancies was detected than in the previous study [6]. Interestingly, the observed risk for potential isotretinoin-exposed pregnancy is much higher in Estonia (6.6 per 1000 treated females) than recently reported in four EU countries (0.1-0.4 per 1000 retinoid users) in an impact study on oral retinoids by the EMA [19,20]. This discrepancy may be partially explained by the incompleteness of the registries used and by methodological differences between studies. A rate of isotretinoin-exposed pregnancies similar to ours has been found in Canada (6.2 per 1000 treated females) [7]. The risk of potential pregnancies exposed to isotretinoin varied during the study years, with a lower risk reported in 2020. This decline could be partially attributed to a reduction in the number of prescriptions per woman in 2020, possibly due to limited access to medical care during the COVID-19 pandemic. Furthermore, COVID-19 restrictions such as stay-at-home orders and social isolation may have reduced unintended pregnancies by impacting young people's sexual activity. Our study illustrates well the differences in contraception use and risk for pregnancy between age groups. Among females aged 15-19 years, the lowest use of effective contraception and, at the same time, the lowest risk for pregnancy were observed. These young females are likely predominantly not yet sexually active, and thus contraceptive use is not relevant for them. According to a study conducted in Estonia on sexual behavior, the average age of women at their first sexual intercourse was 18.7 years [21]. Inconsistent contraception coverage among young females has also been demonstrated in a Belgian study, where 63.8% of females aged 12-21 years taking isotretinoin were prescribed at least one contraceptive, but only 15.7% used it as recommended during treatment [22]. Providing better education on the effectiveness of various contraceptive methods and knowledge of the teratogenic risks associated with certain medicines could have a significant impact on individuals in this age group later in life.
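The headline figure can be checked by hand: 17 potential exposed pregnancies among 2575 treated females gives 17/2575 × 1000 ≈ 6.6 per 1000. The short Python check below reproduces the reported confidence interval closely; the choice of an exact (Clopper-Pearson) binomial interval is an assumption on our part, as the paper does not state which interval method was used.

```python
from scipy.stats import beta

x, n = 17, 2575  # potential exposed pregnancies / treated females of childbearing age
rate = 1000 * x / n                         # about 6.6 per 1000
lo = 1000 * beta.ppf(0.025, x, n - x + 1)   # Clopper-Pearson lower bound
hi = 1000 * beta.ppf(0.975, x + 1, n - x)   # Clopper-Pearson upper bound
print(f"{rate:.1f} per 1000 (95% CI {lo:.1f}-{hi:.1f})")
# prints roughly "6.6 per 1000 (95% CI 3.8-10.6)", close to the reported 3.9-10.5
```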
According to the findings of our study, the most problematic age group appears to be females aged 20-29 years. A relatively large proportion of partial contraception coverage and the highest risk for exposed pregnancies point to the variable sexual activity of this age group. In Estonia, young females prefer mainly user-dependent methods, most commonly condoms, hormonal pills, and ineffective methods like withdrawal [18,23]. Young females may have a knowledge gap in recognizing effective contraception methods and may overestimate the effectiveness of contraceptives in typical use [23]. For example, with contraceptive pills a single missed dose can lead to pregnancy. Considering this, patient education is important not only to avoid a missed dose but also to teach women what to do in such circumstances. Promoting user-independent methods, which do not depend on user adherence, in young females could minimize the risk of contraceptive failure and unintended pregnancy. While IUD usage increased among isotretinoin users in Estonia between 2012-2016 and 2017-2020 (21.9% to 36.7%, respectively) [6], it is still a less preferred contraceptive method among young females. Better education about the effectiveness and advantages of different contraceptive methods seems essential to avoid unintended pregnancies. In addition, referring young patients for contraceptive counseling could be considered. Some of the non-compliance with PPP requirements may be attributed to prescribers. Although awareness of the teratogenic risks of oral retinoids among healthcare professionals is high in some European countries, only a small proportion of them comply with aspects of the PPP [24]. This may be due to several reasons, including time constraints in patient counseling [24], information overload, PPP materials in an impractical format, reluctance to read information sent by marketing authorization holders, etc. To our knowledge, the awareness of healthcare professionals about the isotretinoin PPP has not been investigated in Estonia, indicating a need for further research on this topic. A study conducted in Belgium showed that most healthcare professionals (among respondents, 100.0% of GPs and dermatologists and 98% of pharmacists) and female patients (87.9% of respondents) were aware of the teratogenic risk of isotretinoin, but only 41.7% of dermatologists, 4.2% of GPs, 24.1% of pharmacists, and 15.2% of female patients indicated that they knew of the existence of the PPP [25]. Following the implementation of the revised isotretinoin PPP, the awareness, knowledge, and experience of implementing the PPP in clinical practice have been studied in Ireland. Hughes et al.
[10] showed that healthcare professionals were highly aware (≥87%) that isotretinoin should not be prescribed to women of childbearing age unless PPP conditions are met. However, the implementation of this knowledge varied greatly in clinical practice. Specialists (dermatology, obstetrics, and gynecology) were more likely (71.4%) to request a pregnancy test before initiating treatment with isotretinoin than general practitioners (31.6%). Similarly, 47.6% of specialists provided patients with a reminder card when initiating the treatment, while only 11.8% of general practitioners did the same. Only 26.1% of community pharmacists provided patients with a reminder card at each dispensing [10]. It is questionable whether sending hard copies or emailing safety information to physicians is an efficient way to implement risk minimization measures in daily practice. Discussions are ongoing in Estonia regarding incorporating an automated alert system for risk minimization measures into prescription and pharmacy dispensing software, which could raise awareness of the risk and its minimization measures and improve the monitoring of continuous compliance with measures between prescribers and pharmacists. Similar discussions are taking place about enhancing patient access to risk minimization materials through the Estonian Health Portal.

There has been much debate about the burden of PPPs on the health system and patients. Various studies conducted before and after the update of pregnancy prevention programs demonstrate that pregnancies continue to occur even with comprehensive measures in place [26][27][28]. In the United States, where a stringent risk management program (iPLEDGE) is in place, approximately 30% of clinicians who regularly prescribe isotretinoin have at times chosen not to prescribe isotretinoin to patients with severe acne because of the burden of the iPLEDGE program [29]. Therefore, it is essential that risk minimization measures are balanced so that they achieve the expected added value without limiting patients' access to effective treatment. However, in the EU, during the 2018 update of the PPP for retinoids, the requirements for pregnancy testing and contraception were somewhat relaxed and some requirements were waived (e.g. the limitation of prescriptions to a 7-day validity and a 30-day supply) [5]; thus the PPP does not seem to have redundant elements. Pregnancy testing and the use of effective contraception are essential measures in women of childbearing potential to avoid fetal exposure to a teratogen. To highlight this, some pregnancies in our study were diagnosed early in isotretinoin treatment and could have been avoided with prior pregnancy testing. Furthermore, women should be advised that hormonal contraceptives are not effective immediately upon initiation, which is why contraception is required to be started before isotretinoin initiation.
The limitations of this study are mainly associated with the use of administrative data. It was not possible to ascertain whether a female was not sexually active and thus did not need contraception. Also, some females do not have childbearing potential, for example due to sterilization; however, the proportion of such females is rather marginal in Estonia [23]. In addition, there are no data about condom use, which is considered an effective, yet user-dependent, method. Complete contraception coverage may be slightly overestimated, as the diagnostic code Z30.5 includes checking, reinsertion, or removal of the IUD. Therefore, for some women it may have meant removal of the IUD, yet it was counted as coverage. This may also apply to the two pregnancies detected in women with complete contraceptive coverage in whom this diagnostic code was present; thus, there is the possibility that the IUD had actually been removed in these women. Another limitation is that the cohort of females of childbearing age using isotretinoin was perhaps too small for the statistical tests assessing pregnancy risk to reach significance. A limitation specific to prescription data also applies here: it cannot be entirely known whether the patient actually consumed the dispensed medicine.

Conclusion

This nationwide study provides additional information on compliance with pregnancy prevention recommendations for isotretinoin. Despite the simplification of the PPP and educational materials in 2018, our study still demonstrates the ineffectiveness of current efforts. Although complete coverage with contraception during isotretinoin use has slightly improved, it remains very low, and it has not resulted in a decrease in the risk of isotretinoin-exposed pregnancies. The proportion of partial contraception coverage and the risk for exposed pregnancies were especially pronounced among females aged 20-29 years, the most problematic age group according to our study. Attempts to improve the effectiveness of the PPP are necessary. This could be done by improving the education of different target groups and by applying modern assistive technologies, rather than by setting new requirements. Future studies are needed to assess whether the planned interventions would have the desired outcome.

Fig. 1 Proportion of different contraceptive methods among females of childbearing age who used contraceptives with isotretinoin in Estonia in 2017-2020, by age group
2023-07-19T06:18:53.902Z
2023-07-18T00:00:00.000
{ "year": 2023, "sha1": "f268972c3f668a75911c93bd53d67ebcac288bc4", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40801-023-00381-3.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9e651794c1cb4043c994ee4a78a62ee578494ebe", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55680294
pes2o/s2orc
v3-fos-license
Clinical and angiographic profile in patients of western Rajasthan undergoing percutaneous coronary interventions: a single centre experience

Background: This study was aimed at evaluating the clinical and angiographic profile of patients undergoing percutaneous coronary intervention at the Department of Cardiology, Mathura Das Mathur (MDM) Hospital, attached to Dr. Sampurnanand Medical College, Jodhpur. Methods: This was a hospital-based, prospective, observational study conducted in the department of cardiology at MDM hospital. It included 1166 patients who underwent percutaneous coronary intervention at the cardiac cathlab of MDM hospital from January 2016 to April 2017. Procedural details noted included vascular access route, lesion characteristics, number of lesions intervened, stents used, and periprocedural pharmacotherapy administered. Results: A total of 1166 patients (mean age 56.3±10.4 years; 76.5% male and 23.5% female) were included in the study. Smoking and hypertension were the most common risk factors, present in 64% and 56% of patients, respectively. Diabetes mellitus and obesity were observed in 24.5% and 18.0% of patients, respectively. Anterior wall MI was the most common mode of presentation (36.2%). Single vessel disease (SVD) was the most common angiographic pattern, observed in 62% of patients; the left anterior descending artery (LAD) was the most frequently involved vessel (65.9%); and type B lesions were most prevalent (48%). Most of the procedures were elective (61.4%) and the femoral route was used in the majority (76%). Radial access was obtained in 24% of patients. Primary PCI was done in 6% of cases, while a pharmaco-invasive approach was adopted in 32.6% of patients. Drug eluting stents were deployed in 100% of the cases. The overall procedural success rate was 95.4%. Procedural mortality was nil and periprocedural complications occurred in 16.0% of patients. Conclusions: This first PCI study from western Rajasthan provides an overview of the salient features of CAD among the regional population and focuses on the characteristics of the PCIs performed and their outcomes.

INTRODUCTION

Coronary artery disease (CAD) is a leading cause of death worldwide, and over three quarters of these deaths occur in low and middle income countries. 1 India is turning into the 'global capital' of coronary artery disease (CAD), contributing 60% of the global burden of CAD, and the prevalence is rising unabated. 2 CAD tends to occur at a younger age in Indians, with more extensive angiographic involvement. 3 CAD varies across geography, socio-demography, and ethnicity, with marked interregional heterogeneity across the country. 4 In the state of Rajasthan the population, like any other developing community, is fast undergoing lifestyle changes, and the unusual stress and strain of this fast-paced changed lifestyle has modified the epidemiology of CAD in this population. Paralleling this increased prevalence, the treatment of ischemic heart disease has also witnessed some revolutionary changes in the last couple of decades. 5 In particular, percutaneous coronary interventions (PCI), which include percutaneous transluminal coronary angioplasty (PTCA), stenting, and related techniques, represent a major therapeutic advance in the management of CAD. PCI is effective in relieving symptoms and it improves survival in certain subsets of CAD patients.
6,7 Through advances in equipment and technical skills, the profile of patients undergoing PCI is constantly evolving, with increasingly more complex patients and lesions being treated with this modality. 8,9 Despite this trend, there is a serious lack of data regarding risk factors, angiographic profile, and clinical outcomes in patients undergoing PCI in India, which formed the basis for performing this study. [10][11][12] This is the first study from western Rajasthan conducted to explore the clinical profile of patients with CAD undergoing percutaneous revascularization in terms of risk factors, clinical presentation, and angiographic characteristics, and to analyze procedural outcomes at our hospital. Our objective was to establish baseline regional data and compare it with the various national and international data available.

METHODS

This was a hospital-based, prospective, observational, all-comers study conducted in the department of cardiology at MDM hospital. It included 1166 patients who underwent percutaneous coronary intervention at the cardiac cathlab of MDM hospital between January 2016 and April 2017. Coronary artery disease was diagnosed on the basis of clinical history, 12-lead ECG findings, biochemical markers like troponin I, and/or non-invasive tests like the treadmill test and 2D echocardiography. Patients with varied clinical presentations (stable angina, unstable angina, ST elevation MI, and non-ST elevation MI) who subsequently underwent coronary angiography with revascularization were included in the study. Patients with severe renal insufficiency, defined as creatinine clearance <30ml/min, were excluded from the study.

The usual atherosclerotic risk factors like smoking, hypertension, and diabetes mellitus were identified and documented in each patient. Diabetes mellitus was diagnosed on the basis of fasting plasma glucose >126mg/dl, or HbA1c >6.5%, or symptoms of diabetes plus a random blood glucose level >200mg/dl. Hypertension was considered present if the patient was taking antihypertensive drugs at the time of presentation, or if the recorded blood pressure was ≥140mmHg systolic or ≥90mmHg diastolic on at least 2 separate readings. Obesity was diagnosed on the basis of BMI ≥30kg/m².

Angiographic characteristics, including site, severity, type and extent of lesion, and the number of arteries involved, were analyzed. Coronary artery disease was categorized as single vessel disease (SVD), double vessel disease (DVD), and triple vessel disease (TVD) according to the number of vessels with >50% angiographic stenosis. 13 Angiographic lesions with ≥70% stenosis were stented with drug eluting stents; for left main (LM) disease, stenting was performed when stenosis was ≥50%. Severe stenoses in smaller vessels (reference vessel diameter ≤2.25mm) were either left alone or treated with plain balloon angioplasty, depending upon the extent of myocardium supplied. Infarct-related lesions with no evidence of viability in the respective territories were excluded from the study. Procedural details noted included vascular access route, number of lesions intervened, stents used, peri-procedural pharmacotherapy administered, and peri-procedural complications, if any.

Operational terms

Stable angina: It was diagnosed on the basis of clinical features (typical or atypical chest pain) and non-invasive evaluation (≥1mm horizontal or down-sloping ST depression on exercise ECG, or perfusion defects on technetium-99 sestamibi scan).
Myocardial infarction (MI): It was diagnosed in the presence of two of the following criteria: pain suggestive of myocardial ischemia lasting for at least 30 min; unequivocal new electrocardiographic alterations; or positive results of a qualitative troponin T or I assay (ROCHE diagnostic kits, Germany). Patients with both STEMI and NSTEMI were included. STEMI was diagnosed when ST elevation of ≥2mm in two or more contiguous precordial leads, or ≥1mm in at least two contiguous limb leads, or new or presumably new left bundle branch block was observed on ECG.

Unstable angina: It was diagnosed in the presence of typical ischemic chest discomfort of increasing severity and ST segment depression of 1mm in limb leads or 2mm in chest leads, with negative results of a qualitative troponin T or I assay.

Type A lesions: These included lesions having all of the following characteristics: discrete (<10mm length), concentric, readily accessible, non-angulated segment (<45°), smooth contour, little or no calcification, less than totally occlusive, non-ostial location, no major side branch involvement, and absence of thrombus.

Type B lesions (moderate risk): These included lesions having any of the following characteristics: tubular (10 to 20 mm length), eccentric, moderate tortuosity of the proximal segment, moderately angulated segment (≥45° but <90°), irregular contour, moderate to heavy calcification, total occlusions <3 months old, ostial location, bifurcation lesion requiring double guidewires, and some thrombus present.

Type C lesions (high risk): These included lesions having any of the following characteristics: excessive tortuosity of the proximal segment, extremely angulated segments (>90°), total occlusion >3 months old, inability to protect major side branches, and degenerated vein grafts with friable lesions.

Coronary artery territories and segments: The left main coronary artery was considered a segment and a territory of its own. Proximal segments comprised the proximal parts of the left anterior descending (LAD), the left circumflex (LCX), and the right coronary arteries (RCA). Mid segments consisted of the mid parts of the 3 main coronary arteries and of the proximal 1 to 2 cm of the major diagonal and obtuse marginal branches. Segments distal to the mid segments were considered distal.

Ostial stenosis: A stenosis was classified as "ostial" when the origin of the lesion was within 3 mm of the origin of the vessel involved.

Thrombus: It was defined as a discrete, intraluminal filling defect with defined borders, largely separated from the adjacent vessel wall. Contrast staining might or might not be present.

Tortuosity: A stenosis distal to two bends >75° was considered moderately tortuous, and one distal to three or more bends >75° was considered excessively tortuous.

Bifurcation stenosis: A stenosis involving the parent and daughter branch if a medium or large branch (>1.5mm) originated within the stenosis and if the side branch was completely surrounded by stenotic portions of the lesion to be dilated.

Calcification: Calcification was recorded if readily apparent densities were seen within the apparent vascular wall of the artery at the site of the stenosis.

Chronic total occlusion: A total occlusion (thrombolysis in myocardial infarction [TIMI] flow grade 0), judged to be of >3 months' duration on the basis of clinical and angiographic findings, was considered a chronic total occlusion (CTO).

Irregular contour: A stenosis was classified as having an irregular contour if the vascular margin was rough or had a "saw tooth" appearance.
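To make the vessel and lesion definitions above concrete, here is a simplified Python sketch: it applies the >50%-stenosis vessel-count rule from the Methods and a reduced form of the ACC/AHA lesion typing, checking only the characteristics named in the text. The dictionary field names are illustrative assumptions, not variables from the study.

```python
def vessel_category(stenosis_pct_by_vessel):
    """SVD/DVD/TVD by the number of major vessels with >50% angiographic stenosis."""
    n = sum(1 for pct in stenosis_pct_by_vessel.values() if pct > 50)
    return {0: "non-obstructive", 1: "SVD", 2: "DVD", 3: "TVD"}[n]

def lesion_type(lesion):
    """Reduced ACC/AHA typing: any type C feature -> C; all type A criteria -> A; else B."""
    if (lesion.get("excess_tortuosity") or lesion.get("angle_deg", 0) > 90
            or lesion.get("cto_over_3_months") or lesion.get("unprotectable_side_branch")
            or lesion.get("degenerated_vein_graft")):
        return "C"
    type_a = (lesion.get("length_mm", 99) < 10 and lesion.get("concentric", False)
              and lesion.get("angle_deg", 99) < 45 and lesion.get("smooth_contour", False)
              and not lesion.get("calcified") and not lesion.get("total_occlusion")
              and not lesion.get("ostial") and not lesion.get("major_side_branch")
              and not lesion.get("thrombus"))
    return "A" if type_a else "B"

print(vessel_category({"LAD": 80, "RCA": 40, "LCX": 20}))                     # SVD
print(lesion_type({"length_mm": 15, "concentric": False, "angle_deg": 60}))  # B
```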
Procedural success: The procedure was considered successful if the visual angiographic estimate of residual coronary stenosis was <10% in stented segments or <50% in balloon angioplasty segments, with the presence of TIMI III flow in the target vessel; without side branch loss, flow-limiting dissection, or angiographic thrombus; and without associated in-hospital major clinical complications (e.g. death, MI, stroke, or emergency CABG) 15 (a code sketch of this composite definition is given below).

Procedural complications: These included death, procedure-related MI, emergency CABG, periprocedural stroke, vascular complications (access site hematoma, retroperitoneal haemorrhage, pseudoaneurysm, arteriovenous fistula, arterial dissection and/or occlusion), periprocedural bleeding, coronary perforation, acute stent thrombosis, flow-limiting coronary dissection, side branch loss, arrhythmias requiring specific interventions, and contrast-induced acute kidney injury (AKI). All these complications were defined according to the recent guidelines. 14

Statistical methods

Statistical analysis was performed with the SPSS software package (version 21.0, SPSS Inc, Chicago, Illinois, USA). All continuous variables were expressed as mean ± standard deviation (SD), and categorical variables were reported as frequencies and percentages. Continuous variables were analyzed with Student's t-test. A p value <0.05 was considered statistically significant.

RESULTS

Overall, during the period of 16 months, a total of 1166 patients who fulfilled the eligibility criteria were included in the study.

Patient characteristics and clinical presentation

A total of 1166 patients (mean age 56.3±10.4 years), 76.5% male and 23.5% female, were included in the study (Table 1). The patient population undergoing PCI at our hospital was relatively young (mean age 56±10.4 years), with females presenting a decade later than males. 15 The age range for male patients was 21-74 years and for female patients 44-76 years. Smoking and hypertension were the most common risk factors, present in 64% and 56% of patients, respectively. Diabetes mellitus and obesity were observed in 24.5% and 18% of patients, respectively. Dyslipidemia was observed in 14% of the patients, the most common pattern being high triglycerides and low HDL. Anterior wall MI was the most common mode of presentation (36.2%), and a total of 1235 lesions were reviewed on angiography. Type B lesions were most prevalent (48%). Drug eluting stents were deployed in 100% of the cases. The overall procedural success rate was 95.4%. Procedural mortality was nil, but periprocedural complications occurred in 16% of patients, which included episodes of hypotension, rigors, and respiratory distress attributed to contrast allergy. Local site complications like haematoma formation occurred in 14.5% and pseudoaneurysm in 6% of patients. The slow flow phenomenon was managed with an intracoronary bolus of nicorandil, nitroglycerine, diltiazem, adenosine, eptifibatide or, rarely, intracoronary adrenaline. Two cases of iatrogenic left main coronary artery dissection occurred, which were managed with additional stenting of the left main.

Coronary angiographic profile

Amongst the patients taken up for angioplasty, single vessel disease (SVD) was the most common angiographic pattern, observed in 682 patients (58.4%), followed by double vessel disease (DVD) in 336 patients (28.8%) and triple vessel disease (TVD) in 147 patients (12.6%). The most commonly involved vessel was the LAD, seen in 63.6% of patients, followed by the RCA in 51.5% and the LCX in 29.2%.
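As flagged under Operational terms above, here is a minimal sketch of the composite procedural-success definition (residual stenosis <10% for stented segments or <50% after balloon angioplasty, TIMI III flow, and none of the listed complications); the parameter names are illustrative assumptions, not the study's own variables.

```python
def procedural_success(residual_stenosis_pct, stented, timi_flow,
                       side_branch_loss=False, flow_limiting_dissection=False,
                       angiographic_thrombus=False, in_hospital_mace=False):
    """True if the PCI meets both the angiographic and clinical success criteria."""
    threshold = 10 if stented else 50  # stented segment vs plain balloon angioplasty
    return (residual_stenosis_pct < threshold
            and timi_flow == 3
            and not (side_branch_loss or flow_limiting_dissection
                     or angiographic_thrombus or in_hospital_mace))

print(procedural_success(5, stented=True, timi_flow=3))   # True
print(procedural_success(20, stented=True, timi_flow=3))  # False: residual stenosis >= 10%
```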
LMCA disease was seen in 44 patients (3.8%), all of whom had multi-vessel CAD (Table 2). When these lesions were categorized according to the ACC/AHA lesion classification system, 319 (25.8%) were type A lesions, 655 (53.0%) were type B lesions, and 261 (21.1%) were type C lesions. The lesion characteristics are described in Table 3. Notably, calcified lesions were distinctly infrequent, constituting 3.7% of all lesions, while eccentric and thrombus-containing lesions were common, accounting for 52.6% and 15.8% of all lesions, respectively. Bifurcation lesions and CTOs accounted for 21% and 5.7% of all lesions, respectively.

Coronary interventions

Of the patients undergoing PCI, 746 (64%) were elective, 373 (32%) were pharmaco-invasive, and 47 (4%) were primary. Femoral access was obtained in the majority (76%), while the radial route was utilized in 24% of patients. A total of 1355 stents were deployed, all of which were drug eluting stents (DES). Plain old balloon angioplasty (POBA) without stenting was performed on 30 lesions, all of which were distally located with a small reference vessel diameter (<2.25mm). The LAD was the most common vessel stented, with 331 stents used. Post-dilatation with non-compliant balloons was performed in 1300 out of 1355 stents (96%). The most common antiplatelet regimen used was the combination of aspirin and clopidogrel, in 64% of patients. Prasugrel and ticagrelor were used in 28% and 8% of patients, respectively. High dose statins (atorvastatin 40/80mg or rosuvastatin 40mg) were routinely given to all patients before and after the procedure. Glycoprotein IIb/IIIa inhibitors were used in 36% of patients, mostly as a bolus for slow flow/no flow and in those with a high thrombus burden. Thrombus aspiration devices were used in 8.0% of patients.

Procedural outcomes

The overall procedural success rate was 95.4%. The major reason for a failed procedure was failure to cross the lesion with a guidewire. Procedure-related complications occurred in 187 (16.0%) patients (Table 5). The most common among these were slow flow/no reflow (9.1%), which was managed during the procedure by intracoronary administration of nitroglycerine, nicorandil, and eptifibatide, and vascular complications (2.7%) like haematoma formation and development of pseudoaneurysm.

DISCUSSION

The present study provides an insight into the profile of patients undergoing PCI at our institute and analyses the procedural indications, technical intricacies, and clinical outcomes in these patients. Epidemiological studies have shown that the prevalence of CAD is increasing rapidly with the increase in conventional risk factors. Indians have one of the highest rates of heart disease in the world. The disease also tends to be more aggressive and manifests at a younger age. In our study, the mean age of patients was 56±10.4 years, which is comparable to other studies done in India, namely the CREATE registry (56±13 years) and the Jose and Gupta study (57±12 years), but lower than in western populations, as in the COURAGE trial (62±5 years). [16][17][18] Males are more prone to CAD than premenopausal females. This finding was also observed in the INTERHEART study in South Asian men with AMI. 19 There is no clear-cut definition of young MI; various authors have defined different age limits for young CAD. The Coronary Artery Surgery Study (CASS) registry defined young men as below 35 years and young women as below 45 years of age. 20 In our study, we defined young males as below the age of 40 years, and they accounted for about 7% of patients undergoing PTCA.
There is a strong correlation between cigarette smoking and CAD, and smoking was found more commonly in young adults than in older individuals (72% vs 44%). Smoking increases platelet aggregation, fibrinogen levels, and coronary vasospasm, and decreases fibrinolytic activity and coronary flow reserve. Cessation of smoking at any point of time is beneficial. Autopsy studies have revealed that the coronary arteries of smokers have more extensive fatty streak lesions, which develop at an earlier age than in non-smokers. Indians now constitute the largest population of diabetics in the world. The number of people with diabetes in India is projected to cross 57 million by 2025. 21 In our study, diabetes was present in 21.3% of males and 35% of females, and these patients had an increased prevalence of DVD and TVD. Hypertension is another important risk factor for CAD. In our study, 56% of patients were hypertensive, which is higher than the prevalence of hypertension in the South Asian cohort of the INTERHEART study (31.1%). 19 Dyslipidemia was more frequent in older males than in young patients; Chen et al observed that hypertriglyceridemia and low HDL levels were common in younger patients. 22 MI without previous episodes of angina pectoris was more common in younger patients with CAD. Histopathology studies have shown that the atheromatous plaques seen in young patients are lipid rich, with a relative lack of acellular scar tissue. These plaques are more unstable and likely to rupture. The most common presentation among young patients is STEMI, in comparison to UA or NSTEMI. In the current study, younger patients were found to have a higher incidence of non-obstructive lesions, SVD, and DVD, while the incidence of TVD was higher in older patients. Mohammed et al also observed that SVD was more common in young patients and TVD was more common in older patients. 23 Young patients in most studies presented with fewer vessels involved than older patients. Procedural mortality and peri-procedural MI were nil in our study, as compared with 1% and 1.9% in the Srinagar registry. 24 In our study, the procedures were relatively safe, and the minor complications which occurred were managed conservatively. As around 84% of our study population was admitted with a diagnosis of ACS, slow flow/no reflow being the most common complication is well explained by the thrombotic milieu. The procedural outcome was good, with patients doing well on dual antiplatelet therapy with regular follow-up in the OPD. Based on the observations from the present study, screening of risk factors for CAD could start at an earlier age in Indian males; cessation of smoking, promotion of physical activity, and limitation of saturated fat and salt intake should be strongly encouraged. Adequate control of blood pressure and normal glycemic status are imperative. Since atypical presentations are common, a high index of suspicion is necessary for early diagnosis. The present study had some important limitations. First, this was a single centre study with a relatively small sample size, and thus may be subject to referral bias. Second, we only included patients undergoing PCI in this study, with many patients who could not undergo coronary angiography or PCI for a variety of reasons being excluded. Thus, some of our findings may not accurately reflect the spectrum of CAD in the population at large. Thirdly, because of the limited sample size, the procedural outcomes were reported in general, and no distinction of results between simple vs complex or emergent vs elective procedures was made.
Lastly, no follow-up data were collected in this study. Hence, further short term and long term follow-up data need to be collected in this patient cohort to provide further insight into their clinical outcomes.

CONCLUSION

This is the first study from the region of western Rajasthan to provide such a detailed overview of not only epidemiological characteristics but also procedural outcomes, safety, and complications. With the exponential increase in the number of patients developing coronary artery disease, it is imperative that this study enables us to upgrade our information systems and work towards improving the quality of care by providing feedback on a wide range of performance indices and by recognizing lacunae.
2019-03-18T14:02:44.599Z
2018-08-25T00:00:00.000
{ "year": 2018, "sha1": "225c37b257d2463d126aeb8a5bb09099019629e3", "oa_license": null, "oa_url": "https://www.msjonline.org/index.php/ijrms/article/download/5273/4184", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "5f96fc44b80b669dc050026249bdedb0256760e2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }